Prosecution Insights
Last updated: April 19, 2026
Application No. 18/384,348

REFACTORING NON-COMPLIANT CODE INTO COMPLIANT CODE

Non-Final OA: §101, §103, §112

Filed: Oct 26, 2023
Examiner: SLACHTA, DOUGLAS M
Art Unit: 2193
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)

Grant probability: 82% (Favorable)
Expected OA rounds: 1-2
Time to grant: 2y 4m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 82% (279 granted / 340 resolved), +27.1% vs TC avg (above average)
Interview lift: +18.3% for resolved cases with interview
Typical timeline: 2y 4m avg prosecution; 20 applications currently pending
Career history: 360 total applications across all art units

Statute-Specific Performance

§101: 21.2%  (-18.8% vs TC avg)
§103: 45.7%  (+5.7% vs TC avg)
§102:  8.2%  (-31.8% vs TC avg)
§112: 17.0%  (-23.0% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 340 resolved cases

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This office action is in response to the communication filed 10/26/2023. Claims 1-20 are currently pending; claims 1, 11, and 18 are the independent claims.

Claim Objections

Claims 1, 7, 11-13, and 15-19 are objected to because of the following informalities:

As per claims 1, 11-13, 15, 17, and 18-19, they recite “proposed rewritten version of the code snippet.” However, the independent claims 1, 11, and 18 recite “…wherein the LLM prompt instructs the LLM to refactor the code snippet into modified code…displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes a proposed rewritten version of the code snippet,” which recites that the “proposed rewritten version of the code snippet” is “output of the LLM.” The LLM, however, is previously recited as refactoring the code snippet into “modified code,” not “rewritten code,” and as such, for consistency and clarity, the output code of the LLM should be “modified,” not “rewritten.” For clarity and consistency, the examiner recommends the phrasing “…wherein the LLM prompt instructs the LLM to refactor the code snippet into a modified version of the code snippet…displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes a proposed modified version of the code snippet” for the independent claims 1, 11, and 18; for dependent claims 3 and 4 to recite “the modified version of the code snippet” rather than “the modified code”; and for dependent claims 12-13, 15, 17, and 19 to recite “the proposed modified version of the code snippet.”
As per claim 7, it recites “The method of claim 1, wherein the context includes function information for a function associated with the code snippet or class information for a class associated with the code snippet or file information for a file that is called by the code snippet.” For clarity and grammar, the examiner recommends using commas to separate the listed options that the context may include, such that the claim reads “wherein the context includes function information for a function associated with the code snippet, class information for a class associated with the code snippet, or file information for a file that is called by the code snippet.”

As per claim 16, it recites “…a previous version of code, which is uncompliant with certain policy, is refactored into a new version of code, which is compliant with the certain policy,” when for grammar and clarity it should recite “…a previous version of code, which is uncompliant with a certain policy, is refactored…”.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
As per independent claim 1, it recites “A method for intelligently prompting a large language model (LLM) to refactor code…”. The examiner is unclear as to what is meant by “intelligently” prompting an LLM, as the examiner understands “intelligent” to mean a high or satisfactory degree of mental capacity, good judgement, or sound thought (see definition below), and different persons of ordinary skill in the art may have different opinions as to what would be considered an “intelligent” manner of prompting, since they may have different opinions as to what constitutes “good” judgement or a “high” degree of intelligence or mental capacity. For the purpose of examination, the examiner will consider these limitations to be “A method for prompting a large language model (LLM) to refactor code…”.

As per dependent claims 2-10, they incorporate the deficiencies of claim 1, upon which they depend, and fail to correct those deficiencies. Therefore, claims 2-10 are rejected for similar reasoning as claim 1, above.

https://www.merriam-webster.com/dictionary/intelligent
Intelligent (adjective): a: having or indicating a high or satisfactory degree of intelligence and mental capacity; b: revealing or reflecting good judgment or sound thought : skillful

As per independent claims 1, 11, and 18, and dependent claim 9, they recite “accessing a code snippet, which is identified as potentially comprising a reference to an out-of-compliance library / which is identified as potentially comprising code that is uncompliant with a policy / which is identified as potentially comprising at least one of (i) a reference to an out-of-compliance library or (ii) code that is uncompliant with a policy” and “wherein accessing the code snippet, which is identified as potentially comprising the reference to the out-of-compliance library, includes…”.
The examiner is unclear as to what is meant by “potentially comprising,” as the examiner understands “potentially” to mean possibly, may or may not, or may at some point (see definition below). The examiner is therefore unclear as to what criteria or judgement would be used to determine that code potentially comprises something, since different persons of ordinary skill in the art may apply different criteria to judge code as “potentially” comprising something (e.g., does the code need to have the something now; could any code be determined to “potentially” have the something, since code may be modified in the future; is there a threshold probability that the code comes to contain the something in the future; etc.). As such, the examiner is unclear as to what is meant by “a code snippet, which is identified as potentially comprising.” For the purpose of examination, the examiner will consider these limitations to be “accessing a code snippet, which is identified as comprising…”.

As per dependent claims 2-10, 12-17, and 19-20, they incorporate the deficiencies of claims 1, 11, and 18, upon which they depend, and fail to correct those deficiencies. Therefore, claims 2-10, 12-17, and 19-20 are rejected for similar reasoning as claims 1, 11, and 18, above.
https://www.merriam-webster.com/dictionary/potentially
Potentially (adverb): in a potential or possible state or condition —used to describe the possible results or effects of something

As per claim 3, it further recites “The method of claim 2, wherein the library that is determined to be compliant is the same as the compliant library that is called by the modified code.” The examiner points out that claim 3 depends on claim 2, which depends on claim 1; however, while claim 1 recites that a code snippet “is identified as potentially comprising a reference to an out-of-compliance library” and that an LLM is instructed “to refactor the code snippet into modified code,” neither claim 2 nor claim 1 previously recites that the modified code “calls” a compliant library, and as such there is insufficient antecedent basis for these limitations in the claims. For the purpose of examination, the examiner will consider these limitations to be “…wherein the library that is determined to be compliant is called by the modified code.”

As per claim 4, it further recites “The method of claim 2, wherein the library that is determined to be compliant is different than the compliant library that is called by the modified code.” For the same reason as claim 3, neither claim 2 nor claim 1 previously recites that the modified code “calls” a compliant library, and as such there is insufficient antecedent basis for these limitations in the claims.
For the purpose of examination, the examiner will consider these limitations to be “…wherein the library that is determined to be compliant is different than a compliant library that is called by the modified code.”

As per claim 14, it further recites “…wherein the output is displayed proximately to the code snippet.” The examiner is unclear as to what is meant by “proximately to” the code snippet, as the examiner understands “proximate” to mean “close,” “near,” etc. (see definition below), and different persons of ordinary skill in the art may have different opinions as to what would be considered “close” or “near.” For the purpose of examination, the examiner will consider these limitations to be “…wherein the output is displayed with the code snippet.”

https://www.merriam-webster.com/dictionary/proximate
Proximate (adjective): a: very near : close; b: soon forthcoming : imminent

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
As per independent claim 1, it recites “A method for intelligently prompting a large language model (LLM) to refactor code, said method comprising: accessing a code snippet, which is identified as potentially comprising a reference to an out-of-compliance library; generating context for the code snippet; building an LLM prompt that will be fed to the LLM, wherein the LLM prompt instructs the LLM to refactor the code snippet into modified code, which calls a compliant library; and displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes a proposed rewritten version of the code snippet.”

The limitations “…a code snippet, which is identified as potentially comprising a reference to an out-of-compliance library,” “generating context for the code snippet,” and “building an LLM prompt that will be fed to the LLM,” as drafted, recite functions that, under their broadest reasonable interpretation, could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components, and as such recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment, and/or opinion, or even with the aid of pen and paper. For example, a human may mentally, manually, or with pen and paper identify the references of code, or determine that a code snippet references an out-of-compliance library; may mentally or with pen and paper generate context or information about the code snippet; and may mentally or with pen and paper write or build a prompt, input, or natural language communication that will be fed or provided to a language model (LLM).
As such, under the broadest reasonable interpretation, a human mind may mentally, or with pen and paper, carry out the functions through observation, evaluation, judgment, and/or opinion, and therefore these limitations recite and fall within the “Mental Processes” grouping of abstract ideas.

This judicial exception is not integrated into a practical application. The claim recites the additional elements “accessing a code snippet,” “wherein the LLM prompt instructs the LLM to refactor the code snippet into modified code, which calls a compliant library,” and “displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes a proposed rewritten version of the code snippet.” The additional elements “accessing a code snippet” and “displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes a proposed rewritten version of the code snippet” do nothing more than add the insignificant extra-solution activities of gathering/accessing and displaying/outputting data, and the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity (see MPEP 2106.05(d)). Further, the additional element “wherein the LLM prompt instructs the LLM to refactor the code snippet into modified code, which calls a compliant library” recites that a high-level/generic computer component (the LLM) is used to perform the abstract idea/mental process along with an insignificant extra-solution activity of updating/modifying/refactoring data, and as such amounts to mere instructions to apply the abstract idea and perform an insignificant extra-solution activity using high-level/generic computer components; the courts have identified such functions as well-understood, routine, conventional activity (see MPEP 2106.05(d)). Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception. See MPEP 2106.05(f) and 2106.05(g).

The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to the insignificant extra-solution activities of gathering/accessing data, displaying data, and updating/modifying data, performed using high-level/generic computer components. Performing or implementing the abstract idea/mental process is not significantly more than the abstract idea itself, and the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, which thus does not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). Accordingly, the claims are not patent eligible under 35 U.S.C. 101.
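Stepping back from the eligibility analysis: the four steps recited in claim 1 (accessing a flagged snippet, generating context, building an LLM prompt, and displaying the output) describe a pipeline that can be sketched in a few lines. The sketch below is an editorial illustration only, not the applicant's or examiner's implementation; every name in it (the `OUT_OF_COMPLIANCE` set, `build_prompt`, the stubbed `call_llm`, and the library names) is hypothetical.

```python
# Hypothetical sketch of the four steps recited in claim 1.
# Library names and prompt wording are invented for illustration.

OUT_OF_COMPLIANCE = {"legacy_crypto"}                    # libraries flagged by policy
COMPLIANT_REPLACEMENT = {"legacy_crypto": "modern_crypto"}

def is_flagged(snippet: str) -> bool:
    """Step 1: identify a snippet as potentially referencing an out-of-compliance library."""
    return any(lib in snippet for lib in OUT_OF_COMPLIANCE)

def generate_context(source_lines, start, end, window=3):
    """Step 2: gather a window of surrounding lines as context for the snippet."""
    return "\n".join(source_lines[max(0, start - window):min(len(source_lines), end + window)])

def build_prompt(snippet: str, context: str) -> str:
    """Step 3: build the LLM prompt instructing the model to refactor the snippet."""
    target = next(COMPLIANT_REPLACEMENT[lib] for lib in OUT_OF_COMPLIANCE if lib in snippet)
    return (
        "Refactor the following code so it calls the compliant library "
        f"'{target}' instead of an out-of-compliance library.\n"
        f"Context:\n{context}\n"
        f"Code snippet:\n{snippet}\n"
    )

def call_llm(prompt: str) -> str:
    """Stub standing in for the LLM; a real system would invoke a model here."""
    return prompt.replace("legacy_crypto", "modern_crypto")

source = ["import legacy_crypto", "", "digest = legacy_crypto.hash(data)"]
snippet = source[2]
if is_flagged(snippet):
    prompt = build_prompt(snippet, generate_context(source, 2, 3))
    output = call_llm(prompt)
    print(output.splitlines()[-1])   # Step 4: surface the proposed rewrite
```

A real system would replace `call_llm` with an actual model call and surface the proposed rewrite in an editor UI rather than printing it.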
As per claim 2, it incorporates the deficiencies of claim 1, upon which it depends, and further recites “…wherein the method further includes generating one or more mappings that map the out-of-compliance library to a library that is determined to be compliant, and wherein the prompt is structured to include the one or more mappings,” which, conceptually, under the broadest reasonable interpretation, provides further clarification of the abstract idea/mental process of generating mappings and including the mappings in a prompt. This does not integrate the abstract idea into a practical application and is not significantly more than the abstract idea. As such, claim 2 fails to correct the deficiencies of claim 1, and is therefore rejected for similar reasoning as claim 1, above.

As per claim 3, it incorporates the deficiencies of claim 1, upon which it depends, and further recites “…wherein the library that is determined to be compliant is the same as the compliant library that is called by the modified code,” which, conceptually, under the broadest reasonable interpretation, provides further clarification of the abstract idea/mental process, which does not integrate the abstract idea into a practical application and is not significantly more than the abstract idea. As such, claim 3 fails to correct the deficiencies of claim 1, and is therefore rejected for similar reasoning as claim 1, above.
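The mapping structure recited in claim 2 (one or more mappings from an out-of-compliance library to a compliant one, included in the prompt) could be represented as simply as a dictionary serialized into the prompt text. The following is a minimal editorial sketch; the mapping entries and prompt wording are invented.

```python
# Hypothetical sketch of claim 2: mappings from out-of-compliance
# libraries to compliant replacements, embedded in the LLM prompt.
import json

mappings = {
    "old_http": "new_http",      # invented example entries
    "legacy_xml": "modern_xml",
}

def prompt_with_mappings(snippet: str, mappings: dict) -> str:
    """Serialize the mappings into the prompt so the model is told exactly
    which compliant library replaces which out-of-compliance one."""
    mapping_block = json.dumps(mappings, indent=2)
    return (
        "Use the following library mappings when refactoring:\n"
        f"{mapping_block}\n"
        f"Code snippet:\n{snippet}\n"
    )

print(prompt_with_mappings("import old_http", mappings))
```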
As per claim 4, it incorporates the deficiencies of claim 1, upon which it depends, and further recites “…wherein the library that is determined to be compliant is different than the compliant library that is called by the modified code,” which, conceptually, under the broadest reasonable interpretation, provides further clarification of the abstract idea/mental process, which does not integrate the abstract idea into a practical application and is not significantly more than the abstract idea. As such, claim 4 fails to correct the deficiencies of claim 1, and is therefore rejected for similar reasoning as claim 1, above.

As per claim 5, it incorporates the deficiencies of claim 1, upon which it depends, and further recites “…wherein the context includes content obtained from a selected number of lines of code preceding the code snippet,” which, conceptually, under the broadest reasonable interpretation, provides further clarification of the abstract idea/mental process together with the further insignificant extra-solution activity of gathering/obtaining data or content, which does not integrate the abstract idea into a practical application and is not significantly more than the abstract idea; the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, which thus does not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). As such, claim 5 fails to correct the deficiencies of claim 1, and is therefore rejected for similar reasoning as claim 1, above.
As per claim 6, it incorporates the deficiencies of claim 1, upon which it depends, and further recites “…wherein the context includes content obtained from a selected number of lines of code succeeding the code snippet,” which, conceptually, under the broadest reasonable interpretation, provides further clarification of the abstract idea/mental process together with the further insignificant extra-solution activity of gathering/obtaining data or content, which does not integrate the abstract idea into a practical application and is not significantly more than the abstract idea; the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, which thus does not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). As such, claim 6 fails to correct the deficiencies of claim 1, and is therefore rejected for similar reasoning as claim 1, above.

As per claim 7, it incorporates the deficiencies of claim 1, upon which it depends, and further recites “…wherein the context includes function information for a function associated with the code snippet or class information for a class associated with the code snippet or file information for a file that is called by the code snippet,” which, conceptually, under the broadest reasonable interpretation, provides further clarification of the abstract idea/mental process, which does not integrate the abstract idea into a practical application and is not significantly more than the abstract idea. As such, claim 7 fails to correct the deficiencies of claim 1, and is therefore rejected for similar reasoning as claim 1, above.
As per claim 8, it incorporates the deficiencies of claim 1, upon which it depends, and further recites “…wherein the LLM is a generative pre-trained transformer type of LLM,” which, conceptually, under the broadest reasonable interpretation, provides further clarification of the computer components used in performing the insignificant extra-solution activities and the abstract idea/mental process, which does not integrate the abstract idea into a practical application and is not significantly more than the abstract idea. As such, claim 8 fails to correct the deficiencies of claim 1, and is therefore rejected for similar reasoning as claim 1, above.

As per claim 9, it incorporates the deficiencies of claim 1, upon which it depends, and further recites “…wherein accessing the code snippet, which is identified as potentially comprising the reference to the out-of-compliance library, includes: parsing a codebase comprising the code snippet, resulting in generation of parsed data; inserting the parsed data into a dependency graph, which includes version data for a set of libraries; and determining, based on the dependency graph, that the code snippet does include the reference to the out-of-compliance library,” which recites the further abstract idea/mental process of evaluating, analyzing, and parsing the codebase, generating parsed data, and determining that code references a library, which does not integrate the abstract idea into a practical application and is not significantly more than the abstract idea/mental process. It further recites an insignificant extra-solution activity of updating/modifying data (inserting parsed data into a dependency graph), which does not integrate the abstract idea into a practical application; the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, which thus does not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). As such, claim 9 fails to correct the deficiencies of claim 1, and is therefore rejected for similar reasoning as claim 1, above.

As per claim 10, it incorporates the deficiencies of claim 1, upon which it depends, and further recites “…wherein parsing the codebase includes parsing a manifest associated with the codebase, parsing a lockfile associated with the codebase, or parsing metadata associated with the codebase,” which, conceptually, under the broadest reasonable interpretation, provides further clarification of the abstract idea/mental process, which does not integrate the abstract idea into a practical application and is not significantly more than the abstract idea. As such, claim 10 fails to correct the deficiencies of claim 1, and is therefore rejected for similar reasoning as claim 1, above.

As per claim 11, it recites a computer system having similar limitations as the method of claim 1, and as such recites a similar abstract idea/mental process and has similar deficiencies as claim 1, and is therefore rejected for similar reasoning as claim 1, above. Claim 11 recites the further additional elements/limitations “A computer system comprising: a processor system; and a storage system that includes instructions that are executable by the processor system to cause the computer system to…” which, conceptually, under the broadest reasonable interpretation, recites that high-level/generic computer components (a computer system comprising a processor system and a storage system)
are used to implement/perform the abstract idea/mental process, and as such amounts to mere instructions to apply the judicial exception using high-level/generic computer components, which does not integrate the abstract idea into a practical application and is not significantly more than the abstract idea/mental process. Accordingly, the additional elements/limitations of claim 11 fail to correct the deficiencies of claim 1, and therefore claim 11 is rejected for similar reasoning as claim 1, above.

As per claim 12, it incorporates the deficiencies of claim 11, upon which it depends, and further recites “…wherein the output further includes a selectable option to accept the output into a codebase comprising the code snippet or, alternatively, to reject the output such that the proposed rewritten version of the code snippet is prevented from being included in the codebase,” which, conceptually, under the broadest reasonable interpretation, recites an insignificant extra-solution activity of updating/storing data (accepting or rejecting output code into a codebase), which does not integrate the abstract idea into a practical application; the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, which thus does not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). As such, claim 12 fails to correct the deficiencies of claim 11, and is therefore rejected for similar reasoning as claim 11, above.
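Claim 9, treated above, recites the most concrete detection flow in the claim set: parse the codebase (per claim 10, via a manifest, lockfile, or metadata), insert the parsed data into a dependency graph carrying version data, then determine from the graph that a snippet references an out-of-compliance library. The following editorial sketch illustrates that flow; the manifest format, version threshold, and graph shape are all invented, not taken from the application.

```python
# Hypothetical sketch of claim 9's detection flow: parse a manifest,
# build a dependency graph with version data, and flag references to
# libraries whose installed version falls below a required minimum.

def parse_manifest(manifest: str) -> dict:
    """Parse 'name==version' lines into {name: version tuple} (invented format)."""
    deps = {}
    for line in manifest.strip().splitlines():
        name, version = line.split("==")
        deps[name] = tuple(int(p) for p in version.split("."))
    return deps

def build_dependency_graph(snippet_imports, deps):
    """Edges from the snippet to each imported library, annotated with version data."""
    return {lib: deps.get(lib) for lib in snippet_imports if lib in deps}

def out_of_compliance(graph, minimums):
    """Libraries in the graph whose version is below the policy minimum."""
    return [lib for lib, ver in graph.items()
            if ver is not None and ver < minimums.get(lib, (0,))]

manifest = "cryptolib==1.2.0\nloglib==4.0.1"
graph = build_dependency_graph(["cryptolib", "loglib"], parse_manifest(manifest))
flagged = out_of_compliance(graph, {"cryptolib": (2, 0, 0)})
print(flagged)   # cryptolib 1.2.0 is below the required 2.0.0
```

Tuple comparison gives a cheap stand-in for semantic-version ordering here; a production tool would use a real version-parsing library.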
As per claim 13, it incorporates the deficiencies of claim 11, upon which it depends, and further recites “…wherein the proposed rewritten version of the code snippet is automatically incorporated into a codebase such that the proposed rewritten version of the code snippet replaces the code snippet in the codebase,” which, conceptually, under the broadest reasonable interpretation, recites an insignificant extra-solution activity of updating/storing data (incorporating output into a codebase and replacing code in the codebase), which does not integrate the abstract idea into a practical application; the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, which thus does not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). As such, claim 13 fails to correct the deficiencies of claim 11, and is therefore rejected for similar reasoning as claim 11, above.

As per claim 14, it incorporates the deficiencies of claim 11, upon which it depends, and further recites “…wherein the output is displayed proximately to the code snippet,” which, conceptually, under the broadest reasonable interpretation, recites an insignificant extra-solution activity of displaying data/output, which does not integrate the abstract idea into a practical application; the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, which thus does not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). As such, claim 14 fails to correct the deficiencies of claim 11, and is therefore rejected for similar reasoning as claim 11, above.
As per claim 15, it incorporates the deficiencies of claim 11, upon which it depends, and further recites “…wherein the proposed rewritten version of the code snippet is refactored code, which functions the same as the code snippet despite having different syntax and which is now compliant with the policy,” which, conceptually, under the broadest reasonable interpretation, provides further clarification as to the insignificant extra-solution activity of updating/modifying data or code, which does not integrate the abstract idea into a practical application; the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, which thus does not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). As such, claim 15 fails to correct the deficiencies of claim 11, and is therefore rejected for similar reasoning as claim 11, above.

As per claim 16, it incorporates the deficiencies of claim 11, upon which it depends, and further recites “…wherein the prompt further includes example code mappings showing how a previous version of code, which is uncompliant with certain policy, is refactored into a new version of code, which is compliant with the certain policy,” which, conceptually, under the broadest reasonable interpretation, provides further clarification as to the abstract idea/mental process (the prompt or natural language input provided to the language model), which does not integrate the abstract idea into a practical application and does not amount to significantly more than the abstract idea/judicial exception. As such, claim 16 fails to correct the deficiencies of claim 11, and is therefore rejected for similar reasoning as claim 11, above.
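Claim 16's “example code mappings showing how a previous version of code … is refactored into a new version” describes what practitioners commonly call few-shot prompting: before/after pairs placed in the prompt as demonstrations. The following is a minimal editorial sketch of that idea; the example pairs and prompt wording are invented.

```python
# Hypothetical sketch of claim 16: before/after example code mappings
# included in the prompt as few-shot demonstrations.

EXAMPLES = [  # invented before/after pairs
    ("conn = old_http.open(url)", "conn = new_http.connect(url)"),
    ("tree = legacy_xml.load(f)", "tree = modern_xml.parse(f)"),
]

def prompt_with_examples(snippet: str) -> str:
    """Prepend before/after demonstrations so the model sees how
    non-compliant code was previously refactored into compliant code."""
    shots = "\n".join(
        f"Non-compliant: {before}\nCompliant:     {after}\n"
        for before, after in EXAMPLES
    )
    return (
        "Refactor the code so it complies with policy. Examples:\n"
        f"{shots}\n"
        f"Now refactor:\n{snippet}\n"
    )

print(prompt_with_examples("conn = old_http.open(endpoint)"))
```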
As per claim 17, it incorporates the deficiencies of claim 11, upon which it depends, and further recites “…wherein the output further includes a rationale associated with the proposed rewritten version of the code snippet, the rationale including a reason as to why the LLM generated the output” which, conceptually, with broadest reasonable interpretation, provides further clarification as to the insignificant extra solution activity of updating/modifying data/information/refactoring input code into output code/etc., which does not integrate the abstract idea into a practical application and the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, thus do not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). As such, claim 17 fails to correct the deficiencies of claim 11, and is therefore rejected for similar reasoning as claim 11, above. As per claim 18, it recites a method having similar limitations as the method of claim 1 and the computer system of claim 11, and as such recites a similar abstract idea/mental process and has similar deficiencies as claims 1 and 11, and is therefore rejected for similar reasoning as claim 1, above. Claim 11 recites the further additional elements/limitations “…wherein the output includes…(ii) an indication that the LLM is unable to either (a) fix the code snippet so that the code snippet calls the compliant library or (b) fix the code snippet so that the code snippet is compliant with the policy” which, conceptually, with broadest reasonable interpretation, provides further clarification as to the insignificant extra solution activities of updating/modifying/etc. data/information/code/etc. 
and displaying data/information/output/etc., which does not integrate the abstract idea into a practical application and the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, thus do not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). Accordingly, the additional elements/limitations of claim 18 fail to correct the deficiencies of claim 1, and therefore claim 18 is rejected for similar reasoning as claim 1, above. As per claim 19, it incorporates the deficiencies of claim 11, upon which it depends, and further recites “…wherein the LLM prompt operates to constrain the LLM to design the proposed rewritten version of the code snippet based on the compliant library” which, conceptually, with broadest reasonable interpretation, provides further clarification as to the insignificant extra solution activity of updating/modifying/designing/rewriting/etc. data/information/code/etc., which does not integrate the abstract idea into a practical application and the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity, thus do not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). As such, claim 19 fails to correct the deficiencies of claim 18, and is therefore rejected for similar reasoning as claim 18, above. As per claim 20, it incorporates the deficiencies of claim 11, upon which it depends, and further recites “…wherein the code snippet is included in a codebase, and wherein the output of the LLM is displayed while the codebase is still under development” which, conceptually, with broadest reasonable interpretation, provides further clarification as to the insignificant extra solution activities of storing data/information/including code in codebase/etc. 
and displaying data/information/output/etc., which does not integrate the abstract idea into a practical application, and the courts have identified functions such as gathering, displaying, updating, transmitting, and storing data as well-understood, routine, conventional activity that does not amount to significantly more than the judicial exception (see MPEP 2106.05(d)). Accordingly, the additional elements/limitations of claim 20 fail to correct the deficiencies of claim 18, and therefore claim 20 is rejected for similar reasoning as claim 18, above.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 7, 9, 11-14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Alamir et al. (herein called Alamir) (US PG Pub. 2022/0244938 A1) and Prasad et al. (herein called Prasad) (US PG Pub. 2022/0244937 A1).
As per claim 1, Alamir teaches: a method for intelligently prompting a large language model (LLM) to refactor code, said method comprising: accessing a code snippet, which is identified as potentially comprising a reference to an out-of-compliance library (pars. [0072]-[0074], [0078], [0090], [0092]-[0093], [0095]-[0096], [0109], code is updated to fix bugs/security holes, improve performance, provide new features, etc. and code updates may be updating/changing/etc. library to new version of library, and when code library requires an update (out of compliance library/library no longer complies with desired version/features/has bugs/etc.) code modules/scripts/etc. (code snippet) impacted by library update/having dependency with library to be updated/code functions deprecated due to library update affecting their input or output arguments/etc. (code snippets/modules/scripts/etc. identified as comprising a reference/dependency/etc. to out of compliance library/library being updated to new version/etc.) are identified (accessed).); generating context for the code snippet (pars. [0071], [0074], [0092], [0094], graph of dependencies between code modules/scripts/snippet and release notes of code modules/scripts/snippet (context for code snippets/modules) are generated/obtained/scanned/etc. (generate context/dependency graph/release notes/etc. for code snippet/module/script) and used to update code/fix library/modify code/etc.); wherein the prompt instructs the artificial intelligence to refactor the code snippet into modified code, which calls a compliant library (pars. [0077]-[0078], [0088], [0112]-[0115], artificial intelligence techniques are used to update code to use updated/new version of/etc. library (refactor code snippet into modified/updated code which calls compliant library/uses new version of library), recommended replacement/updated/etc. code module/script/etc. is provided/displayed/etc.
to users for approval, and user approves code updates/clicks update code button/etc. (prompt) causing artificial intelligence to update/refactor code modules/scripts/code snippet into modified/updated code which calls compliant library/has dependency on new version of library/etc. (prompt/user approval/user input/user clicking update button/etc. instructs the artificial intelligence to refactor code snippet into modified code snippet/update code to use new library version/etc.).); and displaying output of the artificial intelligence based on the artificial intelligence operating in response to the prompt, wherein the output includes a proposed rewritten version of the code snippet (pars. [0075], [0077], [0088], [0113], artificial intelligence techniques are used to update code/code module/etc. (output of artificial intelligence includes proposed rewritten version of code snippet/updated code module/script/etc.) and code updates/replacements/etc. are recommended to users for approval/when human intervention is needed message is transmitted to user/code to be updated is displayed to user/etc. (display output of the artificial intelligence including a proposed rewritten version of the code snippet/recommended updated code module/recommended updated code script/etc. based on the artificial intelligence operating in response to the prompt.). While Alamir teaches that artificial intelligence/machine learning/natural language parsing/etc. may be used to update code (ex: pars. [0050], [0070], [0077], etc.), it does not explicitly state the following, which Prasad teaches: building an LLM prompt that will be fed to the LLM, wherein the LLM prompt instructs the LLM to refactor the code snippet into modified code (pars. [0014]-[0016], [0026], [0046], [0081]-[0084], requirement data identifying modification to software code, software code, etc. are received by/input to/etc. system/machine learning model/etc.
(fed to machine learning model/LLM) from user (prompt/requirement data and software code/etc. is built/provided by/received from/etc. user) in natural language/text/etc. and are used by machine learning model to modify software code (refactor/modify code snippet into modified code), and machine learning model may be neural network algorithm that performs natural language processing/NLP on received requirement data/code/prompt/etc. and performs modifications to software code. As the machine learning model is a neural network that processes natural language/performs NLP/etc. to determine requirements and perform modifications of software code and par. [0021] of the specification of this application recites that “…LLM 110 is a type of neural network…”, it is obvious that the machine learning model/neural network that processes natural language may be a large language model/LLM, and as such the prompt/requirement data and software code/etc. input to/received by/fed to/etc. the machine learning model/neural network may be an LLM prompt that will be fed to the LLM that instructs the LLM to refactor the code snippet into modified code.); and displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes a proposed rewritten version of the code snippet (pars. [0026], [0028]-[0029], machine learning model/developer system/etc. modifies code based on request/requirements/prompt (based on LLM/machine learning model/etc. operating in response to the LLM prompt) and modified software code (output of the LLM/machine learning model/neural network/etc. that includes proposed rewritten version of the code snippet/modified code/etc.) is displayed to user for feedback before implementation (displaying output of the LLM/proposed rewritten version of code snippet/code modified by machine learning model for user feedback/etc.).).
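The prompt-building step that the examiner maps to the Alamir/Prasad combination (natural-language requirement data plus the snippet and its context, fed to a model) could be sketched as follows. This is a minimal illustration only; the function name, prompt wording, and the `oldlib`/`newlib` library names are invented for the example and appear in neither reference nor the claims.

```python
def build_refactor_prompt(code_snippet: str, context: str, compliant_library: str) -> str:
    """Assemble an instruction prompt asking an LLM to refactor a snippet
    so that the modified code calls the designated compliant library."""
    return (
        "Refactor the following code snippet so that it calls "
        f"`{compliant_library}` instead of any out-of-compliance library, "
        "preserving its behavior.\n\n"
        f"Context:\n{context}\n\n"
        f"Snippet:\n{code_snippet}\n"
    )

# Hypothetical usage; `oldlib`/`newlib` are invented names.
prompt = build_refactor_prompt(
    code_snippet="import oldlib\nresult = oldlib.parse(data)",
    context="oldlib 1.x is out of compliance; newlib exposes parse().",
    compliant_library="newlib",
)
```

The resulting string would then be the "LLM prompt that will be fed to the LLM" in the claim language; nothing in the claims fixes a particular prompt format.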
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add building an LLM prompt that will be fed to the LLM, wherein the LLM prompt instructs the LLM to refactor the code snippet into modified code; and displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes a proposed rewritten version of the code snippet, as conceptually taught by Prasad, into that of Alamir because these modifications allow for known machine learning/artificial intelligence/etc. techniques of large language model/neural network performing language processing/etc. to be used as the machine learning/artificial intelligence/etc. used to modify/update/refactor/etc. the code/software/etc., which is desirable as it provides an effective and efficient method of generating updated/modified code, thereby helping to ensure that the modified/updated/refactored code operates correctly/as intended/etc. without requiring a user/developer to make the modifications/perform the refactoring/etc. manually, thereby saving time and resources that a developer would spend manually updating/refactoring/modifying the code/software/snippets/etc.

As per claim 7, Alamir further teaches: wherein the context includes function information for a function associated with the code snippet or class information for a class associated with the code snippet or file information for a file that is called by the code snippet (pars. [0015], [0074], [0087], release notes (context) are scanned/parsed/etc. using natural language processing and are used to determine code modules impacted, functions that will be changed, areas of code requiring updating, etc. As the release notes/context are scanned/parsed/etc.
using NLP to determine functions that will be changed, it is obvious that the context/release notes include function information for a function associated with the code snippet/functions that will be changed during updating library/etc. which is determined using the NLP.).

As per claim 9, Alamir further teaches: wherein accessing the code snippet, which is identified as potentially comprising the reference to the out-of-compliance library, includes: parsing a codebase comprising the code snippet, resulting in generation of parsed data (pars. [0005], [0070]-[0071], [0092], software code is stored in codebase and artificial intelligence techniques are used to generate a network graph indicating dependencies between code modules/libraries/scripts/code snippet in the codebase (codebase is parsed/analyzed/AI techniques used/etc. to determine code modules/libraries/code snippets/etc. and generate dependency graph of modules/scripts/libraries/snippets in codebase). As the code modules are modules/snippets/etc. of software code stored in the codebase and a dependency graph is made for the modules/snippets/etc., it is obvious that the code modules of the software code stored in the codebase are determined/parsed/etc., and as such a codebase comprising the code snippet is parsed/analyzed/processed by AI/etc. resulting in generation of parsed data/code modules/etc. used in generating the dependency graph.); inserting the parsed data into a dependency graph, which includes version data for a set of libraries (pars. [0071]-[0072], [0087], [0092], network graph indicating dependencies between code modules/scripts/libraries/etc. (parsed data) within codebase is generated (insert parsed data into dependency graph) and determination is made using graph that particular code module/library needs to be updated to newly released version (includes version data/library version data/etc.
for set of libraries/libraries in codebase/etc.).); and determining, based on the dependency graph, that the code snippet does include the reference to the out-of-compliance library (pars. [0014], [0073], [0093], network graph indicating dependencies between code modules in codebase is used/traversed/etc. to determine code modules/scripts/code snippet impacted when library/out of compliance library/etc. is updated/including reference/dependency to the out of compliance library/library to be updated/etc.).

As per claim 11, Alamir teaches: a computer system for intelligently prompting a large language model (LLM) to refactor code, said system configured to: access a code snippet, which is identified as potentially comprising code that is uncompliant with a policy; (pars. [0072]-[0074], [0078], [0090], [0092]-[0093], [0095]-[0096], [0109], code is updated to fix bugs/security holes, improve performance, provide new features, etc. and code updates may be updating/changing/etc. library to new version of library, and when code library requires an update (code is uncompliant with a policy/library no longer complies with desired version/features/has bugs/security holes/etc.) code modules/scripts/etc. (code snippet) impacted by library update/having dependency with library to be updated/code functions deprecated due to library update affecting their input or output arguments/etc. (code snippets/modules/scripts/etc. identified as comprising a reference/dependency/etc. to out of compliance library/library being updated to new version/etc.) are identified (accessed).); generating context for the code snippet (pars. [0071], [0074], [0092], [0094], graph of dependencies between code modules/scripts/snippet and release notes of code modules/scripts/snippet (context for code snippets/modules) are generated/obtained/scanned/etc. (generate context/dependency graph/release notes/etc.
for code snippet/module/script) and used to update code/fix library/modify code/etc.); wherein the prompt instructs the artificial intelligence to refactor the code snippet into modified code, which is designed to be compliant with the policy (pars. [0077]-[0078], [0088], [0112]-[0115], artificial intelligence techniques are used to update code to use updated/new version of/etc. library that fixes bugs/security holes/improve performance/new features/etc. (refactor code snippet into modified/updated code which is compliant with policy), recommended replacement/updated/etc. code module/script/etc. is provided/displayed/etc. to users for approval, and user approves code updates/clicks update code button/etc. (prompt) causing artificial intelligence to update/refactor code modules/scripts/code snippet into modified/updated code which is compliant with policy/has dependency on new version of library compliant with policy/etc. (prompt/user approval/user input/user clicking update button/etc. instructs the artificial intelligence to refactor code snippet into modified code snippet/update code to use new library version/etc.).); and displaying output of the artificial intelligence based on the artificial intelligence operating in response to the prompt, wherein the output includes a proposed rewritten version of the code snippet (pars. [0075], [0077], [0088], [0113], artificial intelligence techniques are used to update code/code module/etc. (output of artificial intelligence includes proposed rewritten version of code snippet/updated code module/script/etc.) and code updates/replacements/etc. are recommended to users for approval/when human intervention is needed message is transmitted to user/code to be updated is displayed to user/etc. (display output of the artificial intelligence including a proposed rewritten version of the code snippet/recommended updated code module/recommended updated code script/etc.
based on the artificial intelligence operating in response to the prompt.). While Alamir teaches that artificial intelligence/machine learning/natural language parsing/etc. may be used to update code (ex: pars. [0050], [0070], [0077], etc.), it does not explicitly state the following, which Prasad teaches: building an LLM prompt that will be fed to the LLM, wherein the LLM prompt instructs the LLM to refactor the code snippet into modified code (pars. [0014]-[0016], [0026], [0046], [0081]-[0084], requirement data identifying modification to software code, software code, etc. are received by/input to/etc. system/machine learning model/etc. (fed to machine learning model/LLM) from user (prompt/requirement data and software code/etc. is built/provided by/received from/etc. user) in natural language/text/etc. and are used by machine learning model to modify software code (refactor/modify code snippet into modified code), and machine learning model may be neural network algorithm that performs natural language processing/NLP on received requirement data/code/prompt/etc. and performs modifications to software code. As the machine learning model is a neural network that processes natural language/performs NLP/etc. to determine requirements and perform modifications of software code and par. [0021] of the specification of this application recites that “…LLM 110 is a type of neural network…”, it is obvious that the machine learning model/neural network that processes natural language may be a large language model/LLM, and as such the prompt/requirement data and software code/etc. input to/received by/fed to/etc. the machine learning model/neural network may be an LLM prompt that will be fed to the LLM that instructs the LLM to refactor the code snippet into modified code.); and displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes a proposed rewritten version of the code snippet (pars.
[0026], [0028]-[0029], machine learning model/developer system/etc. modifies code based on request/requirements/prompt (based on LLM/machine learning model/etc. operating in response to the LLM prompt) and modified software code (output of the LLM/machine learning model/neural network/etc. that includes proposed rewritten version of the code snippet/modified code/etc.) is displayed to user for feedback before implementation (displaying output of the LLM/proposed rewritten version of code snippet/code modified by machine learning model for user feedback/etc.).). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add building an LLM prompt that will be fed to the LLM, wherein the LLM prompt instructs the LLM to refactor the code snippet into modified code; and displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes a proposed rewritten version of the code snippet, as conceptually taught by Prasad, into that of Alamir because these modifications allow for known machine learning/artificial intelligence/etc. techniques of large language model/neural network performing language processing/etc. to be used as the machine learning/artificial intelligence/etc. used to modify/update/refactor/etc. the code/software/etc., which is desirable as it provides an effective and efficient method of generating updated/modified code, thereby helping to ensure that the modified/updated/refactored code operates correctly/as intended/etc. without requiring a user/developer to make the modifications/perform the refactoring/etc. manually, thereby saving time and resources that a developer would spend manually updating/refactoring/modifying the code/software/snippets/etc.
As per claim 12, Alamir further teaches: wherein the output further includes a selectable option to accept the output into a codebase comprising the code snippet or, alternatively, to reject the output such that the proposed rewritten version of the code snippet is prevented from being included in the codebase (fig. 8, pars. [0088], [0113], [0115], updated code/functions to be updated (output) is displayed to user/recommended to user for approval/etc. on user interface which includes an “update code” button, which user clicks (selectable option), to cause artificial intelligence to update the code/perform the code update/etc. (accept the output into a codebase comprising the code snippet/code).).

As per claim 13, Alamir further teaches: wherein the proposed rewritten version of the code snippet is automatically incorporated into a codebase such that the proposed rewritten version of the code snippet replaces the code snippet in the codebase (pars. [0072], [0074]-[0075], [0088], particular code/library/code snippet/etc. is to be updated and artificial intelligence is used to determine code modules/library/etc. to be updated and automatically perform code/module/snippet update in the code/codebase/etc. (proposed rewritten version of the code snippet/updated code module/updated library/etc. is automatically incorporated into a code base/code module in code is automatically updated/etc. such that the proposed rewritten version of the code snippet/updated code module/new version of the library/etc. replaces the code snippet in the codebase/existing code module is updated/library is replaced with new version/etc.).).

As per claim 14, Alamir further teaches: wherein the output is displayed proximately to the code snippet (pars. [0112], [0113], code to be updated/functions to be updated/etc.
are displayed (output is displayed) in user interface with code/scripts/pieces of code/code snippets that are detected to be deprecated/code to be updated (with the code snippet to be updated).).

As per claim 18, Alamir teaches: a method for intelligently prompting a large language model (LLM) to refactor code, said method comprising: accessing a code snippet, which is identified as potentially comprising at least one of (i) a reference to an out-of-compliance library or (ii) code that is uncompliant with a policy; (pars. [0072]-[0074], [0078], [0090], [0092]-[0093], [0095]-[0096], [0109], code is updated to fix bugs/security holes, improve performance, provide new features, etc. and code updates may be updating/changing/etc. library to new version of library, and when code library requires an update (code has reference to out of compliance library/is uncompliant with a policy/library no longer complies with desired version/features/has bugs/security holes/etc.) code modules/scripts/etc. (code snippet) impacted by library update/having dependency with library to be updated/code functions deprecated due to library update affecting their input or output arguments/etc. (code snippets/modules/scripts/etc. identified as comprising a reference/dependency/etc. to out of compliance library/library being updated to new version/etc.) are identified (accessed).); generating context for the code snippet (pars. [0071], [0074], [0092], [0094], graph of dependencies between code modules/scripts/snippet and release notes of code modules/scripts/snippet (context for code snippets/modules) are generated/obtained/scanned/etc. (generate context/dependency graph/release notes/etc. for code snippet/module/script) and used to update code/fix library/modify code/etc.); wherein the prompt instructs the artificial intelligence to refactor the code snippet into modified code, which either (i) calls a compliant library or (ii) is compliant with the policy (pars.
[0077]-[0078], [0088], [0112]-[0115], artificial intelligence techniques are used to update code to use updated/new version of/etc. library that fixes bugs/security holes/improve performance/new features/etc. (refactor code snippet into modified/updated code which calls/uses compliant library/is compliant with policy), recommended replacement/updated/etc. code module/script/etc. is provided/displayed/etc. to users for approval, and user approves code updates/clicks update code button/etc. (prompt) causing artificial intelligence to update/refactor code modules/scripts/code snippet into modified/updated code which calls compliant library/is compliant with policy/has dependency on new version of library compliant with policy/etc. (prompt/user approval/user input/user clicking update button/etc. instructs the artificial intelligence to refactor code snippet into modified code snippet/update code to use new library version/etc.).); and displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes at least one of (i) a proposed rewritten version of the code snippet or (ii) an indication that the LLM is unable to either (a) fix the code snippet so that the code snippet calls the compliant library or (b) fix the code snippet so that the code snippet is compliant with the policy (pars. [0075], [0077], [0088], [0113], artificial intelligence techniques are used to update code/code module/etc. (output of artificial intelligence includes proposed rewritten version of code snippet/updated code module/script/etc.) and code updates/replacements/etc. are recommended to users for approval/when human intervention is needed message is transmitted to user/code to be updated is displayed to user/etc. (display output of the artificial intelligence including a proposed rewritten version of the code snippet/recommended updated code module/recommended updated code script/etc.
based on the artificial intelligence operating in response to the prompt.). While Alamir teaches that artificial intelligence/machine learning/natural language parsing/etc. may be used to update code (ex: pars. [0050], [0070], [0077], etc.), it does not explicitly state the following, which Prasad teaches: building an LLM prompt that will be fed to the LLM, wherein the LLM prompt instructs the LLM to refactor the code snippet into modified code (pars. [0014]-[0016], [0026], [0046], [0081]-[0084], requirement data identifying modification to software code, software code, etc. are received by/input to/etc. system/machine learning model/etc. (fed to machine learning model/LLM) from user (prompt/requirement data and software code/etc. is built/provided by/received from/etc. user) in natural language/text/etc. and are used by machine learning model to modify software code (refactor/modify code snippet into modified code), and machine learning model may be neural network algorithm that performs natural language processing/NLP on received requirement data/code/prompt/etc. and performs modifications to software code. As the machine learning model is a neural network that processes natural language/performs NLP/etc. to determine requirements and perform modifications of software code and par. [0021] of the specification of this application recites that “…LLM 110 is a type of neural network…”, it is obvious that the machine learning model/neural network that processes natural language may be a large language model/LLM, and as such the prompt/requirement data and software code/etc. input to/received by/fed to/etc. the machine learning model/neural network may be an LLM prompt that will be fed to the LLM that instructs the LLM to refactor the code snippet into modified code.); and displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes at least one of (i) a proposed rewritten version of the code snippet or (pars.
[0026], [0028]-[0029], machine learning model/developer system/etc. modifies code based on request/requirements/prompt (based on LLM/machine learning model/etc. operating in response to the LLM prompt) and modified software code (output of the LLM/machine learning model/neural network/etc. that includes proposed rewritten version of the code snippet/modified code/etc.) is displayed to user for feedback before implementation (displaying output of the LLM/proposed rewritten version of code snippet/code modified by machine learning model for user feedback/etc.).). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add building an LLM prompt that will be fed to the LLM, wherein the LLM prompt instructs the LLM to refactor the code snippet into modified code; and displaying output of the LLM based on the LLM operating in response to the LLM prompt, wherein the output includes a proposed rewritten version of the code snippet, as conceptually taught by Prasad, into that of Alamir because these modifications allow for known machine learning/artificial intelligence/etc. techniques of large language model/neural network performing language processing/etc. to be used as the machine learning/artificial intelligence/etc. used to modify/update/refactor/etc. the code/software/etc., which is desirable as it provides an effective and efficient method of generating updated/modified code, thereby helping to ensure that the modified/updated/refactored code operates correctly/as intended/etc. without requiring a user/developer to make the modifications/perform the refactoring/etc. manually, thereby saving time and resources that a developer would spend manually updating/refactoring/modifying the code/software/snippets/etc.
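Claim 18's alternative output forms, namely (i) a proposed rewritten snippet or (ii) an indication that the model is unable to produce a compliant fix, can be sketched as follows. The `UNABLE:` sentinel and all names here are assumed conventions for illustration only; neither the claims nor the references prescribe a reply format.

```python
from typing import NamedTuple, Optional

class RefactorOutput(NamedTuple):
    proposed_rewrite: Optional[str]   # form (i): a rewritten snippet
    unable_reason: Optional[str]      # form (ii): why no compliant fix exists

SENTINEL = "UNABLE:"  # assumed reply convention, not from the references

def interpret_llm_reply(reply: str) -> RefactorOutput:
    """Classify a raw model reply into one of the two claimed output forms."""
    if reply.startswith(SENTINEL):
        return RefactorOutput(None, reply[len(SENTINEL):].strip())
    return RefactorOutput(reply, None)

# Hypothetical replies; `oldlib`/`newlib` are invented names.
ok = interpret_llm_reply("import newlib\nresult = newlib.parse(data)")
unable = interpret_llm_reply("UNABLE: no compliant equivalent of oldlib.parse")
```

Whichever field is populated would then be what the claim calls "displaying output of the LLM", either the proposed rewrite or the inability indication.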
As per claim 19, Alamir further teaches: wherein the LLM prompt operates to constrain the LLM to design the proposed rewritten version of the code snippet based on the compliant library (pars. [0072]-[0075], [0078], [0112]-[0115], update to be made to code may be to update library to new version (compliant library)/user may select library to update to new version/information indicates that code module/library needs to be updated to new compliant version/etc., portions of code impacted by updating library to new version are determined, user prompts/clicks button/initiates/etc. updating of library and code impacted by updating library to new version, and code modules/library/etc. are updated/rewritten/etc. (LLM prompt constrains LLM to design proposed rewritten version of code snippet based on compliant library/prompt causes artificial intelligence to update code modules/libraries/snippets based on new version of library/etc.).).

As per claim 20, Alamir further teaches: wherein the code snippet is included in a codebase, and wherein the output of the LLM is displayed while the codebase is still under development (pars. [0050], [0070]-[0075], [0088], [0113], code/code modules/code snippet/etc. is stored in shared code base where it is maintained and updated (code snippet is included in a codebase) and artificial intelligence/LLM determines and recommends code updates/output to user/displays outputs to user/etc. (output of LLM/recommended code updates by artificial intelligence/etc. is displayed) and code in codebase is updated to updated code/code updates are implemented on code in codebase/etc. As the code/code snippet/code module/etc. in the codebase is updated and maintained in the codebase and code updates are implemented on code in the codebase, it is obvious that the codebase is still under development/code in codebase is still being updated/developed/maintained/etc.).

Claims 2-3 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Alamir et al.
(herein called Alamir) (US PG Pub. 2022/0244938 A1) and Prasad et al. (herein called Prasad) (US PG Pub. 2022/0244937 A1) and further in view of Zhou et al. (herein called Zhou) (US PG Pub. 2019/0324731 A1).

As per claim 2, while Alamir teaches that an existing library is updated to a selected new library as part of maintaining/updating software and that the prompt contains notification of the library to be updated/updated library/etc., Alamir and Prasad do not explicitly state the following, which Zhou teaches: wherein the method further includes generating one or more mappings that map the out-of-compliance library to a library that is determined to be compliant, and wherein the prompt is structured to include the one or more mappings (pars. [0016]-[0017], [0028], [0034], updating/maintaining software includes filtering out functions not relevant to the update of old software and providing relevant functions for update which may include identifying correlations between (mapping) functions in old software libraries and new functions in software libraries which may include parsing legacy/current/etc. software/libraries/etc. into individual elements/trees/abstract syntax trees/subtrees/etc. and update software libraries/new libraries/external libraries/libraries of replacement software/etc. into individual elements/trees/etc. and comparing elements/functions/etc. of the trees to determine corresponding/similar/etc. libraries/functions/etc. (generating one or more mappings that map the out-of-compliance library to a library that is determined to be compliant), and recommending/providing relevant/etc. libraries/functions/etc. for implementation (prompt includes mappings to compliant library/function/etc.).).
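The library-to-library mapping mapped to claim 2 from Zhou above (comparing parsed elements of the old and new libraries to find corresponding functions) can be sketched with a simple name-similarity comparison standing in for Zhou's tree/subtree comparison. All function names are invented for illustration and are not from the references.

```python
import difflib

def map_library_functions(old_funcs, new_funcs, cutoff=0.6):
    """Map each function of the out-of-compliance library to its closest
    counterpart in the compliant library by name similarity, a crude
    stand-in for comparing parsed syntax-tree elements."""
    mapping = {}
    for old in old_funcs:
        match = difflib.get_close_matches(old, new_funcs, n=1, cutoff=cutoff)
        if match:
            mapping[old] = match[0]
    return mapping

# Invented function names for illustration.
mappings = map_library_functions(
    ["parse_csv", "write_csv"],
    ["parse_csv_v2", "write_csv_v2", "render_chart"],
)
print(mappings)
# {'parse_csv': 'parse_csv_v2', 'write_csv': 'write_csv_v2'}
```

Under the claim language, such a mapping table would then be "structured" into the prompt so the model knows which compliant function replaces each out-of-compliance one.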
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the method further includes generating one or more mappings that map the out-of-compliance library to a library that is determined to be compliant, and wherein the prompt is structured to include the one or more mappings, as conceptually taught by Zhou, into that of Alamir and Prasad because these modifications allow for an effective method of determining an updated/compliant/etc. library to replace the current/out-of-compliance library, thereby helping to ensure that the library is updated/changed/etc. to a desired library that functions correctly/as desired, which helps prevent errors and ensure that the software/code/application/program operates correctly/as desired.

As per claim 3, Alamir further teaches: wherein the library that is determined to be compliant is the same as the compliant library that is called by the modified code (pars. [0072], [0075], [0078], [0090], [0112], [0115], input/prompt/etc. causing upgrade to be performed includes specification of desired upgrade version of library (compliant library) wanted for upgrade and upgrade is performed upgrading library to specified new version/replacing library with selected new version/etc. (library determined to be compliant/selected new version of library/etc. is same as compliant library called by the modified code/code is upgraded to use selected new version of library/etc.).).

As per claim 10, while Alamir teaches generating a dependency graph for software modules/snippets/etc. of software code stored in codebase, it does not explicitly state, however Zhou teaches: wherein parsing the codebase includes parsing a manifest associated with the codebase, parsing a lockfile associated with the codebase, or parsing metadata associated with the codebase (pars. [0016]-[0017], [0039]-[0040], individual elements/library/etc.
of software to be updated/legacy software is determined by developing abstract syntax tree/AST (metadata) for software to be updated and parsing AST/metadata to identify subtrees representative of individual functions/libraries/elements/etc. to be updated.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein parsing the codebase includes parsing a manifest associated with the codebase, parsing a lockfile associated with the codebase, or parsing metadata associated with the codebase, as conceptually taught by Zhou, into that of Alamir and Prasad because these modifications allow for an effective method of identifying/determining/etc. modules/snippets/libraries/etc. of code to be updated/replaced/etc., thereby helping to ensure that the desired library/module/snippet/etc. is updated/changed/etc. to a desired library that functions correctly/as desired, which helps prevent errors and ensure that the software/code/application/program operates correctly/as desired.

Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Alamir et al. (herein called Alamir) (US PG Pub. 2022/0244938 A1) and Prasad et al. (herein called Prasad) (US PG Pub. 2022/0244937 A1) in further view of Arumugam Selvaraj et al. (herein called Arumugam) (US PG Pub. 2023/0418565 A1).

As per claim 5, while Alamir and Prasad teach providing context/information/etc. to the LLM in the prompt/request/etc., they do not explicitly state, however Arumugam teaches: wherein the context includes content obtained from a selected number of lines of code preceding the code snippet (pars. [0020], [0031], [0034], [0043]-[0044], machine learning/artificial intelligence/etc. is used to provide code suggestions/recommendations/etc. upon demand/request/prompt/etc. and code recommendation/suggestion/etc. may be made based on obtained/extracted/etc. context/code file information/etc.
which may be limited to desired window such as a context window of tokens/N previous tokens prior to code location from code file, all class level information, all function level information, etc. (selected number of lines of code preceding the code snippet/lines of code in class/function that precede snippet/etc.).).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the context includes content obtained from a selected number of lines of code preceding the code snippet, as conceptually taught by Arumugam, into that of Alamir and Prasad because these modifications allow for a desired amount of context to be provided to the artificial intelligence/machine learning/LLM/etc. and used to update code/recommend code/etc., which is desirable as it provides additional information allowing the AI/machine learning/LLM to update code/recommend code/etc. that is desirable/operates as desired/etc., thereby helping to reduce errors and ensure the code operates correctly/as desired, while increasing user control over the updating of code by specifying what context is to be used/considered by the AI/LLM/machine learning when updating the code, thereby making it more desirable to users.

As per claim 6, while Alamir and Prasad teach providing context/information/etc. to the LLM in the prompt/request/etc., they do not explicitly state, however Arumugam teaches: wherein the context includes content obtained from a selected number of lines of code succeeding the code snippet (pars. [0020], [0031], [0034], [0043]-[0044], machine learning/artificial intelligence/etc. is used to provide code suggestions/recommendations/etc. upon demand/request/prompt/etc. and code recommendation/suggestion/etc. may be made based on obtained/extracted/etc. context/code file information/etc.
which may be limited to desired window such as a context window of tokens, all class level information, all function level information, etc. (selected number of lines of code succeeding the code snippet/context includes lines of code in class/function that succeed snippet/etc.).).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the context includes content obtained from a selected number of lines of code succeeding the code snippet, as conceptually taught by Arumugam, into that of Alamir and Prasad because these modifications allow for a desired amount of context to be provided to the artificial intelligence/machine learning/LLM/etc. and used to update code/recommend code/etc., which is desirable as it provides additional information allowing the AI/machine learning/LLM to update code/recommend code/etc. that is desirable/operates as desired/etc., thereby helping to reduce errors and ensure the code operates correctly/as desired, while increasing user control over the updating of code by specifying what context is to be used/considered by the AI/LLM/machine learning when updating the code, thereby making it more desirable to users.

Claims 8, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Alamir et al. (herein called Alamir) (US PG Pub. 2022/0244938 A1) and Prasad et al. (herein called Prasad) (US PG Pub. 2022/0244937 A1) in further view of Singh et al. (herein called Singh) (US PG Pub. 2023/0350657 A1).

As per claim 8, Alamir and Prasad do not explicitly state, however Singh teaches: wherein the LLM is a generative pre-trained transformer type of LLM (par. [0019], machine learning models (LLM/AI techniques/etc. from Alamir and Prasad) may be GPT/generative pre-trained transformer (is a generative pre-trained transformer type of LLM/machine learning/artificial intelligence/etc.).).
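As an illustrative aside to the claim 5 and claim 6 limitations discussed above, gathering a selected number of lines preceding and succeeding a code snippet can be sketched with a minimal, hypothetical helper. The function name, window sizes, and sample data below are the editor's assumptions for illustration only, not taken from any cited reference.

```python
# Hypothetical sketch of the claim 5 / claim 6 context limitations:
# collect a selected number of lines preceding and succeeding a code
# snippet so they can be supplied as context in an LLM prompt.
# Window sizes are illustrative assumptions.

def gather_context(lines, start, end, before=3, after=3):
    """Return (preceding, succeeding) lines around lines[start:end]."""
    preceding = lines[max(0, start - before):start]
    succeeding = lines[end:end + after]
    return preceding, succeeding

source = [f"line {i}" for i in range(10)]
pre, post = gather_context(source, start=4, end=6, before=2, after=2)
# pre  -> ["line 2", "line 3"]
# post -> ["line 6", "line 7"]
```

Clamping the lower bound with `max(0, ...)` keeps the window valid when the snippet sits near the top of the file; Python slicing already tolerates an over-long upper bound.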
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the LLM is a generative pre-trained transformer type of LLM, as conceptually taught by Singh, into that of Alamir and Prasad because these modifications allow for a known type of machine learning/artificial intelligence/etc. to be used as the LLM/AI technique/machine learning model/etc., thereby allowing an effective method of AI/LLM/machine learning to be used to update software, which is desirable as it helps ensure that the software is correctly updated while saving time and resources that would be spent by a user manually updating the software.

As per claim 15, Alamir and Prasad do not explicitly state, however Singh teaches: wherein the proposed rewritten version of the code snippet is refactored code, which functions the same as the code snippet despite having different syntax and which is now compliant with the policy (pars. [0005], [0022]-[0023], [0045], [0050], code in first programming language is translated (refactored) into candidate translations of the code in a second programming language (proposed/candidate rewritten version of code/code in second programming language is refactored/translated code having different syntax/programming language/etc.); as the candidate translated code in the second programming language is just a translation/refactoring of the code in the first programming language/syntax into the second programming language/different syntax and no edits/modifications/etc. are made to the code, it is obvious that the code in the second programming language/translated code/proposed rewritten version of code/etc. functions the same as the snippet/code in first programming language/etc., as it is just a translation into a different language with no edits/modifications/etc. to the functionality of the code.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the proposed rewritten version of the code snippet is refactored code, which functions the same as the code snippet despite having different syntax and which is now compliant with the policy, as conceptually taught by Singh, into that of Alamir and Prasad because these modifications allow for code in a first programming language to be translated into a second programming language without changing/modifying/etc. the code, which is desirable as it allows the code to be used in multiple environments requiring different programming languages, thereby increasing the usability of the code and making it more desirable to users.

As per claim 17, Alamir and Prasad do not explicitly state, however Singh teaches: wherein the output further includes a rationale associated with the proposed rewritten version of the code snippet, the rationale including a reason as to why the LLM generated the output (pars. [0028], [0045], [0050], machine learning/artificial intelligence/etc. may be used to translate/convert/refactor/etc. code in a first programming language into candidate translations of code/proposed rewritten version of code snippet/etc. in a second programming language (output candidate codes in second programming language), and "best" output is presented to user in ranked order based on errors/warnings raised when compiling translated code/output, score of each candidate translated code based on syntax analysis/lexical analysis/etc., etc., and user selects desired candidate translation of code from presented "best" candidate translations of code/output. As the "best" candidate translations of code/output are presented in ranked order based on errors/syntax analysis/etc., the order/ranking of the best candidate translations/output based on errors/syntax is a rationale as to why the output/candidate translations/etc.
are presented/output for user selection/a reason as to why the LLM/machine learning/etc. generated the output for presentation to user for selection/etc.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the output further includes a rationale associated with the proposed rewritten version of the code snippet, the rationale including a reason as to why the LLM generated the output, as conceptually taught by Singh, into that of Alamir and Prasad because these modifications allow for a user to be informed as to why output/candidate codes/etc. have been made available for selection so a user may make an informed decision when selecting a desired output/candidate code/etc. to be used, thereby helping to ensure that code selected for use by a user operates as desired by a user.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Alamir et al. (herein called Alamir) (US PG Pub. 2022/0244938 A1) and Prasad et al. (herein called Prasad) (US PG Pub. 2022/0244937 A1) in further view of Drewes (US PG Pub. 2009/0192782 A1).

As per claim 16, Alamir and Prasad do not explicitly state, however Drewes teaches: wherein the prompt further includes example code mappings showing how a previous version of code, which is uncompliant with certain policy, is refactored into a new version of code, which is compliant with the certain policy (pars. [0013]-[0014], machine translation systems (machine learning/LLM/artificial intelligence/etc. from Alamir and Prasad) learn/may be trained/etc. to translate (refactor/update/modify/etc.) data in a first language (previous version of code in first language/uncompliant with certain policy/not in desired language/etc.) to a second language (new version of code compliant with certain policy/in desired language/etc.)
by being provided with samples of data in the first language and its previously translated version in the second language/related data in the two/first and second/etc. languages/language pairs/etc. (prompt further includes example code mappings/language pairs/samples of related data in two languages/etc. showing how a previous version of code, which is uncompliant with certain policy (data/version of code in first language), is refactored/previously translated/etc. into a new version of code, which is compliant with the certain policy (version of code in second language).).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add wherein the prompt further includes example code mappings showing how a previous version of code, which is uncompliant with certain policy, is refactored into a new version of code, which is compliant with the certain policy, as conceptually taught by Drewes, into that of Alamir and Prasad because these modifications allow for an effective method of training machine learning/artificial intelligence/etc. to translate/update/etc. code/program/software/etc. into a desired second language/code that complies with certain policy/etc., thereby helping to ensure that the code is successfully updated/translated/etc. correctly, which helps prevent errors and ensures that the updated/translated code operates correctly/as desired while saving time and resources that would be spent by a human manually updating/translating the code.

Allowable Subject Matter Over Prior Art

The following is a statement of reasons for the indication of allowable subject matter: The prior art of record teaches that artificial intelligence/machine learning/neural networks/large language models/etc. are used to upgrade/modify/translate/etc. old/out of date/deprecated/unsupported/buggy/out-of-compliance libraries/code modules/code snippets/etc.
to updated code/new version/compliant libraries/compliant modules/etc. when prompted, that the updating may be based on selected updated versions of a library, context/mappings/correlations/notes, etc., and that prompts/requests/etc. to AI/machine learning/etc. to perform updates/translations may include selection/mapping/determination/etc. of a compliant/updated/etc. version of code/library/module/etc. to replace the out-of-compliance/current/old/etc. library/module/code/etc.

However, the prior art of record fails to render obvious that the library that is determined to be compliant is different than the compliant library that is called by the modified code, as required by dependent claim 4.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Li et al. (US PG Pub. 2023/0244452 A1) teaches that neural networks/machine learning/etc. may be used to generate/synthesize/etc. computer programs based on input descriptions/natural language/etc. describing the computer program to be generated.

Makkar (US Patent 11,074,047 B2) teaches that artificial intelligence/machine learning/etc. may be used to parse source code, identify candidate code snippets, and recommend library functions for code snippets.

Arcadinho et al. (US Patent 11,726,750 B1) teaches that machine learning/artificial intelligence/etc. may use trained models to generate/recommend computer programs/language/etc. for execution based on received input/natural language/etc. specifying tasks to be performed by computer.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOUGLAS M SLACHTA whose telephone number is (571) 270-0653. The examiner can normally be reached Monday-Friday 6:30am-4pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chat Do, can be reached at 571-272-3721. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DOUGLAS M SLACHTA/
Examiner, Art Unit 2193

Prosecution Timeline

Oct 26, 2023
Application Filed
Feb 28, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585449
ARTIFICIAL INTELLIGENCE (AI) MODEL DEPENDENCY HANDLING IN HETEROGENEOUS COMPUTING PLATFORMS
2y 5m to grant Granted Mar 24, 2026
Patent 12585463
SELECTIVE TRIGGERING OF CONTINUOUS INTEGRATION, CONTINUOUS DELIVERY (CI/CD) PIPELINES
2y 5m to grant Granted Mar 24, 2026
Patent 12572664
Using Artificial Intelligence (AI) Analysis For Identifying Potential Vulnerabilities Inserted Into Software
2y 5m to grant Granted Mar 10, 2026
Patent 12554558
SEPARATING APPLICATION PROGRAMMING INTERFACES INTO CONTAINERS TO FACILITATE SAFETY ASSURANCE
2y 5m to grant Granted Feb 17, 2026
Patent 12554487
VERSION CONTROL SYSTEM
2y 5m to grant Granted Feb 17, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+18.3%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 340 resolved cases by this examiner. Grant probability derived from career allow rate.
