Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. This is the initial Office action based on the application filed on March 28, 2024, in which claims 1-18 are presented for examination.
Status of Claims
3. Claims 1-18 are pending, of which claims 1, 7, and 13 are in independent form.
Priority
4. No claim of priority has been made for this application.
Information Disclosure Statement
5. The information disclosure statement filed on 02/10/2025 has been reviewed and considered by the Examiner.
The Office's Note:
6. The Office has cited particular paragraphs/columns and line numbers in the reference(s) applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim(s), other passages and figures may apply as well. In preparing responses, the Applicant is respectfully requested to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the cited passages as taught by the prior art or relied upon by the Examiner.
Claim Objections
7. Claims 2-6 are objected to because of the following informalities: claims 2-6 recite “A system according to claim”, but it should be “The system according to claim”. Appropriate correction is required.
Claims 8-12 are objected to because of the following informalities: claims 8-12 recite “A method according to claim”, but it should be “The method according to claim”. Appropriate correction is required.
Claims 14-18 are objected to because of the following informalities: claims 14-18 recite “A medium according to claim”, but it should be “The non-transitory medium according to claim”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Wu (US 11,916,767 – hereinafter Wu – IDS of record) in view of Obando Chacon (US 20230385042 – hereinafter Obando Chacon).
Regarding claim 1, Wu teaches a system comprising (Wu, abstract and summary):
a memory storing executable program code (Wu, US 11916767, fig. 2, component 204 – memory); and
at least one processing unit to execute the program code to cause the system to (Wu, fig. 2, component 202 – processor):
execute a script in an execution environment including a resource agent, the script implementing a portion of a flow to receive a message from a sender and transmit the message to a receiver (Wu, column 3, lines 25-52: Event information may include additional context information associated with an event, such as event source, event type, or the like. Organizations may deploy various systems that may be configured to monitor various types of events depending on needs of an industry or technology area. For example, information technology services may generate events in response to one or more conditions, such as, computers going offline, memory over-utilization, CPU over-utilization, storage quotas being met or exceeded, applications failing or otherwise becoming unavailable, networking problems (e.g., latency, excess traffic, unexpected lack of traffic, intrusion attempts, or the like), electrical problems (e.g., power outages, voltage fluctuations, or the like), customer service requests, or the like, or combination thereof. Events may be provided using one or more messages, emails, telephone calls, library function calls, application programming interface (API) calls, including, any signals provided to indicate that an event has occurred. One or more third party and/or external systems may be configured to generate event messages. Column 3, line 63: “configuration information” refers to information that may include rule based policies, pattern matching, scripts (e.g., computer readable instructions), or the like, that may be provided from various sources, including, configuration files, databases, user input, built-in defaults, or the like, or combination thereof. In some cases, configuration information may include or reference information stored in other systems or services, such as, configuration management databases, Lightweight Directory Access Protocol (LDAP) servers, name services, public key infrastructure services, or the like.);
determine, from the resource agent, resource consumption data associated with resource consumption by the execution environment during execution of the script in the execution environment (Wu, column 4, lines 48-55: one or more metrics may be determined for each prompt fragment based on its corresponding determined portion of the response. In some embodiments, the performance score for each prompt fragment may be determined based on the one or more metrics such that the one or more metrics include one or more of latency, correctness, resource consumption, cost, event type applicability, redundancy, or the like. Column 3, lines 25-52.);
transmit a prompt to a text generation model, the prompt including the resource consumption data and the script (Wu, column 4, lines 14-19: a prompt may be generated for a large language model (LLM) based on a prompt template and the one or more prompt fragments such that the one or more prompt fragments may be included in the prompt, and such that information associated with the one or more events is included in the prompt. Wu, column 3, lines 13-25; column 3, lines 26-28, prompt.);
Wu does not explicitly teach
receive, from the text generation model and in response to the prompt, a response indicating one or more modifications to the script;
present the one or more modifications.
However, Obando Chacon teaches
receive, from the text generation model and in response to the prompt, a response indicating one or more modifications to the script (Obando Chacon, US 20230385042, para [0085-0086], In some situations, an AI-based functionality 210 runs and suggests an edit of the source code or a project configuration change, or recommends a change to testing. In some embodiments, the response 214 requests 938 performance of the code improvement option, and implementing 816 the response includes invoking 804 the artificial intelligence functionality to perform program code analysis 806 on at least a portion of the particular program code, and the method includes at least one of the following based on a result 306 of the program code analysis: suggesting 940 an edit 404 to a source code 314 of the particular program code; suggesting 1034 a configuration 1036 change 1040; or recommending 918 a change 542 in testing 412 associated with the particular program code. Para [0098]. Para [0216], 410 computational performance of a program, e.g., processor cycles used, memory used, bandwidth used, latency, speed, power consumption, and so on. Para [0240], 526 AI-based functionality to proffer a performance 410 optimization; para [0241], 528 computational performance 410 optimization, e.g., reduction in use of computational resources without sacrificing program features 1108 or failing tests 600 that were passed before the optimization. Para [0056-0057], improvement options 212 are always presented to the developer, as opposed to a system 202 proactively making changes 428 without asking the developer first. In other embodiments or configurations, some changes 428 may be made proactively based on a general consent, e.g., as an effect of a previously selected tool setting or an environment variable or a default tool setting.
For example, automatic error correction and other forms of linting 532, comment analysis functionality 508, and documentation generation functionality 504 could be enabled with any changes approved in advance by default. Para [0108], Thus, suitably enhanced tools 310 may present a user 104 with opportunities for improvement in their code 130 at a time when their mindset is that of performing fixes, instead of distracting the user earlier when they are focused on implementation details. This timing matters. Finding certain issues (e.g., issues with program performance 410, program behavioral accuracy 1116, program security 1124, or programming style 1118), and displaying potential fixes during program writing time, instead of following the present teachings, could be very distracting to the program writer.); and
present the one or more modifications (Obando Chacon, para [0085-0086], an AI-based functionality 210 runs and suggests an edit to the source code, and the source code is edited accordingly, and the tool 210 recommends changing the testing to avoid regressing 948 that edit. In some embodiments, the response 214 requests 938 performance of the code improvement option, and implementing 816 the response includes invoking 804 the artificial intelligence functionality to perform program code analysis 806 on at least a portion of the particular program code, and the method includes: noting 942 an edit 404 to a source code 314 of the particular program code, the edit based at least in part on a result 306 of the program code analysis; and recommending 918 a change 542 in testing 412 associated with the particular program code, the change in testing configured to check for regression 948 of the edit. Para [0098]. Para [0216], 410 computational performance of a program, e.g., processor cycles used, memory used, bandwidth used, latency, speed, power consumption, and so on. Para [0240], 526 AI-based functionality to proffer a performance 410 optimization; para [0241], 528 computational performance 410 optimization, e.g., reduction in use of computational resources without sacrificing program features 1108 or failing tests 600 that were passed before the optimization. Para [0056-0057], improvement options 212 are always presented to the developer, as opposed to a system 202 proactively making changes 428 without asking the developer first. In other embodiments or configurations, some changes 428 may be made proactively based on a general consent, e.g., as an effect of a previously selected tool setting or an environment variable or a default tool setting. For example, automatic error correction and other forms of linting 532, comment analysis functionality 508, and documentation generation functionality 504 could be enabled with any changes approved in advance by default.
Para [0108], Thus, suitably enhanced tools 310 may present a user 104 with opportunities for improvement in their code 130 at a time when their mindset is that of performing fixes, instead of distracting the user earlier when they are focused on implementation details. This timing matters. Finding certain issues (e.g., issues with program performance 410, program behavioral accuracy 1116, program security 1124, or programming style 1118), and displaying potential fixes during program writing time, instead of following the present teachings, could be very distracting to the program writer.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Obando Chacon into Wu to effectively employ artificial intelligence (AI) functionalities, recognize developer intentions, and promote efficient development efforts. The combination promotes efficient software development by automatically recognizing a developer's intention to improve existing code and offering the developer AI-based analyses of the code, as suggested by Obando Chacon (see abstract and conclusion).
The Office notes that Obando Chacon also teaches
a script, the script implementing a portion of a flow to receive a message from a sender and transmit the message to a receiver (Obando Chacon, fig. 1, application 122, program 132, code 130, and para [0023-0024], During a design stage 1106, the main features 1108 and the user experience 1110 of a program are designed. For instance, a design for a camera application 122 might specify that the application can run on a smartphone or a tablet with a graphical user interface that allows a user to remotely activate a camera by wireless communication and also allows the user to download pictures from the camera to the smartphone or tablet. This is merely one example application, from a universe of many possible application programs. During the design stage 1106, a developer's priorities focus on specifying the program's features 1108 and user experience 1110, without regard to any particular programming language, data structure, programming style, source code version control tool, and so on.).
Claim 2 is rejected for the reasons set forth hereinabove for claim 1. Wu and Obando Chacon teach the system according to Claim 1, wherein the prompt comprises a system prompt and a user prompt, the at least one processing unit to execute the program code to cause the system to (Wu, column 3, lines 13-25; column 3, lines 26-28, prompt.):
determine the system prompt from a plurality of system prompts based on the resource consumption data (Wu, column 3, lines 13-25: “large language model,” or “LLM” refer to data structures, programs, or the like, that may be trained or designed to perform a variety of natural language processing tasks. Typically, LLMs may generate text responses in response to text based prompts. Often, LLMs may be considered to be neural networks that have been trained on large collections of natural language source documents. Accordingly, in some cases, LLMs may be trained to generate predictive responses based on provided prompts. LLM prompts may include context information, examples, or the like, that may enable LLMs to generate responses directed to specific queries or particular problems that go beyond conventional NLP.).
Claim 3 is rejected for the reasons set forth hereinabove for claim 2. Wu and Obando Chacon teach the system according to Claim 2, the at least one processing unit to execute the program code to cause the system to (Wu, column 3, lines 13-25):
determine context data based on the resource consumption data (Wu, column 3, lines 13-25: “large language model,” or “LLM” refer to data structures, programs, or the like, that may be trained or designed to perform a variety of natural language processing tasks. Typically, LLMs may generate text responses in response to text based prompts. Often, LLMs may be considered to be neural networks that have been trained on large collections of natural language source documents. Accordingly, in some cases, LLMs may be trained to generate predictive responses based on provided prompts. LLM prompts may include context information, examples, or the like, that may enable LLMs to generate responses directed to specific queries or particular problems that go beyond conventional NLP. Wu, fig. 11 and column 27, lines 17-45.); and
populate the system prompt with the context data (Wu, fig. 11 and column 27, lines 46-60: analysis engines may generate a prompt that includes the event information and provide the initial prompt to a management agent that may submit the prompt to a large language model. Fig. 12 and column 29, line 60 to column 30, line 7.).
Claim 4 is rejected for the reasons set forth hereinabove for claim 1. Wu and Obando Chacon teach the system according to Claim 1, the at least one processing unit to execute the program code to cause the system to (Wu, column 3, lines 13-25):
determine context data based on the resource consumption data (Wu, column 3, lines 13-25: “large language model,” or “LLM” refer to data structures, programs, or the like, that may be trained or designed to perform a variety of natural language processing tasks. Typically, LLMs may generate text responses in response to text based prompts. Often, LLMs may be considered to be neural networks that have been trained on large collections of natural language source documents. Accordingly, in some cases, LLMs may be trained to generate predictive responses based on provided prompts. LLM prompts may include context information, examples, or the like, that may enable LLMs to generate responses directed to specific queries or particular problems that go beyond conventional NLP. Wu, fig. 11 and column 27, lines 17-45.); and
populate the prompt with the context data (Wu, fig. 11 and column 27, lines 46-60: analysis engines may generate a prompt that includes the event information and provide the initial prompt to a management agent that may submit the prompt to a large language model. Fig. 12 and column 29, line 60 to column 30, line 7.).
Claim 5 is rejected for the reasons set forth hereinabove for claim 1. Wu and Obando Chacon teach the system according to Claim 1, wherein presentation of the one or more modifications comprises presentation of a description of a code error (Obando Chacon, para [0085-0086], an AI-based functionality 210 runs and suggests an edit to the source code, and the source code is edited accordingly, and the tool 210 recommends changing the testing to avoid regressing 948 that edit. In some embodiments, the response 214 requests 938 performance of the code improvement option, and implementing 816 the response includes invoking 804 the artificial intelligence functionality to perform program code analysis 806 on at least a portion of the particular program code, and the method includes: noting 942 an edit 404 to a source code 314 of the particular program code, the edit based at least in part on a result 306 of the program code analysis; and recommending 918 a change 542 in testing 412 associated with the particular program code, the change in testing configured to check for regression 948 of the edit. Para [0098]. Para [0216], 410 computational performance of a program, e.g., processor cycles used, memory used, bandwidth used, latency, speed, power consumption, and so on. Para [0240], 526 AI-based functionality to proffer a performance 410 optimization; para [0241], 528 computational performance 410 optimization, e.g., reduction in use of computational resources without sacrificing program features 1108 or failing tests 600 that were passed before the optimization. Para [0056-0057], improvement options 212 are always presented to the developer, as opposed to a system 202 proactively making changes 428 without asking the developer first. In other embodiments or configurations, some changes 428 may be made proactively based on a general consent, e.g., as an effect of a previously selected tool setting or an environment variable or a default tool setting.
For example, automatic error correction and other forms of linting 532, comment analysis functionality 508, and documentation generation functionality 504 could be enabled with any changes approved in advance by default. Para [0108], Thus, suitably enhanced tools 310 may present a user 104 with opportunities for improvement in their code 130 at a time when their mindset is that of performing fixes, instead of distracting the user earlier when they are focused on implementation details. This timing matters. Finding certain issues (e.g., issues with program performance 410, program behavioral accuracy 1116, program security 1124, or programming style 1118), and displaying potential fixes during program writing time, instead of following the present teachings, could be very distracting to the program writer.).
Claim 6 is rejected for the reasons set forth hereinabove for claim 1. Wu and Obando Chacon teach the system according to Claim 1, wherein presentation of the one or more modifications comprises presentation of a modified version of the script (Obando Chacon, para [0085-0086], an AI-based functionality 210 runs and suggests an edit to the source code, and the source code is edited accordingly, and the tool 210 recommends changing the testing to avoid regressing 948 that edit. In some embodiments, the response 214 requests 938 performance of the code improvement option, and implementing 816 the response includes invoking 804 the artificial intelligence functionality to perform program code analysis 806 on at least a portion of the particular program code, and the method includes: noting 942 an edit 404 to a source code 314 of the particular program code, the edit based at least in part on a result 306 of the program code analysis; and recommending 918 a change 542 in testing 412 associated with the particular program code, the change in testing configured to check for regression 948 of the edit. Para [0098]. Para [0216], 410 computational performance of a program, e.g., processor cycles used, memory used, bandwidth used, latency, speed, power consumption, and so on. Para [0240], 526 AI-based functionality to proffer a performance 410 optimization; para [0241], 528 computational performance 410 optimization, e.g., reduction in use of computational resources without sacrificing program features 1108 or failing tests 600 that were passed before the optimization. Para [0056-0057], improvement options 212 are always presented to the developer, as opposed to a system 202 proactively making changes 428 without asking the developer first. In other embodiments or configurations, some changes 428 may be made proactively based on a general consent, e.g., as an effect of a previously selected tool setting or an environment variable or a default tool setting.
For example, automatic error correction and other forms of linting 532, comment analysis functionality 508, and documentation generation functionality 504 could be enabled with any changes approved in advance by default. Para [0108], Thus, suitably enhanced tools 310 may present a user 104 with opportunities for improvement in their code 130 at a time when their mindset is that of performing fixes, instead of distracting the user earlier when they are focused on implementation details. This timing matters. Finding certain issues (e.g., issues with program performance 410, program behavioral accuracy 1116, program security 1124, or programming style 1118), and displaying potential fixes during program writing time, instead of following the present teachings, could be very distracting to the program writer.).
As per claim 7, this is the method claim corresponding to system claim 1. Therefore, it is rejected for the same reasons as above.
As per claim 8, this is the method claim corresponding to system claim 2. Therefore, it is rejected for the same reasons as above.
As per claim 9, this is the method claim corresponding to system claim 3. Therefore, it is rejected for the same reasons as above.
As per claim 10, this is the method claim corresponding to system claim 4. Therefore, it is rejected for the same reasons as above.
As per claim 11, this is the method claim corresponding to system claim 5. Therefore, it is rejected for the same reasons as above.
As per claim 12, this is the method claim corresponding to system claim 6. Therefore, it is rejected for the same reasons as above.
Regarding claim 13, Wu teaches a non-transitory medium storing program code executable by at least one processing unit of a computing system to cause the computing system to (Wu, abstract and summary):
determine resource consumption data indicating resource consumption in an execution environment due to execution of a script in the execution environment (Wu, column 3, lines 25-52: Event information may include additional context information associated with an event, such as event source, event type, or the like. Organizations may deploy various systems that may be configured to monitor various types of events depending on needs of an industry or technology area. For example, information technology services may generate events in response to one or more conditions, such as, computers going offline, memory over-utilization, CPU over-utilization, storage quotas being met or exceeded, applications failing or otherwise becoming unavailable, networking problems (e.g., latency, excess traffic, unexpected lack of traffic, intrusion attempts, or the like), electrical problems (e.g., power outages, voltage fluctuations, or the like), customer service requests, or the like, or combination thereof. Events may be provided using one or more messages, emails, telephone calls, library function calls, application programming interface (API) calls, including, any signals provided to indicate that an event has occurred. One or more third party and/or external systems may be configured to generate event messages. Column 3, line 63: “configuration information” refers to information that may include rule based policies, pattern matching, scripts (e.g., computer readable instructions), or the like, that may be provided from various sources, including, configuration files, databases, user input, built-in defaults, or the like, or combination thereof. In some cases, configuration information may include or reference information stored in other systems or services, such as, configuration management databases, Lightweight Directory Access Protocol (LDAP) servers, name services, public key infrastructure services, or the like.
Wu, column 4, lines 48-55: one or more metrics may be determined for each prompt fragment based on its corresponding determined portion of the response. In some embodiments, the performance score for each prompt fragment may be determined based on the one or more metrics such that the one or more metrics include one or more of latency, correctness, resource consumption, cost, event type applicability, redundancy, or the like. Column 3, lines 25-52.);
transmit a prompt to a text generation model, the prompt including the resource consumption data and the script (Wu, column 4, lines 14-19: a prompt may be generated for a large language model (LLM) based on a prompt template and the one or more prompt fragments such that the one or more prompt fragments may be included in the prompt, and such that information associated with the one or more events is included in the prompt. Wu, column 3, lines 13-25; column 3, lines 26-28, prompt.);
Wu does not explicitly teach
receive, from the text generation model and in response to the prompt, a response indicating one or more modifications to the script;
present the one or more modifications.
However, Obando Chacon teaches
receive, from the text generation model and in response to the prompt, a response indicating one or more modifications to the script (Obando Chacon, US 20230385042, para [0085-0086], In some situations, an AI-based functionality 210 runs and suggests an edit of the source code or a project configuration change, or recommends a change to testing. In some embodiments, the response 214 requests 938 performance of the code improvement option, and implementing 816 the response includes invoking 804 the artificial intelligence functionality to perform program code analysis 806 on at least a portion of the particular program code, and the method includes at least one of the following based on a result 306 of the program code analysis: suggesting 940 an edit 404 to a source code 314 of the particular program code; suggesting 1034 a configuration 1036 change 1040; or recommending 918 a change 542 in testing 412 associated with the particular program code. Para [0098]. Para [0216], 410 computational performance of a program, e.g., processor cycles used, memory used, bandwidth used, latency, speed, power consumption, and so on. Para [0240], 526 AI-based functionality to proffer a performance 410 optimization; para [0241], 528 computational performance 410 optimization, e.g., reduction in use of computational resources without sacrificing program features 1108 or failing tests 600 that were passed before the optimization. Para [0056-0057], improvement options 212 are always presented to the developer, as opposed to a system 202 proactively making changes 428 without asking the developer first. In other embodiments or configurations, some changes 428 may be made proactively based on a general consent, e.g., as an effect of a previously selected tool setting or an environment variable or a default tool setting.
For example, automatic error correction and other forms of linting 532, comment analysis functionality 508, and documentation generation functionality 504 could be enabled with any changes approved in advance by default. Para [0108], Thus, suitably enhanced tools 310 may present a user 104 with opportunities for improvement in their code 130 at a time when their mindset is that of performing fixes, instead of distracting the user earlier when they are focused on implementation details. This timing matters. Finding certain issues (e.g., issues with program performance 410, program behavioral accuracy 1116, program security 1124, or programming style 1118), and displaying potential fixes during program writing time, instead of following the present teachings, could be very distracting to the program writer.); and
present the one or more modifications (Obando Chacon, para [0085-0086], an AI-based functionality 210 runs and suggests an edit to the source code, and the source code is edited accordingly, and the tool 210 recommends changing the testing to avoid regressing 948 that edit. In some embodiments, the response 214 requests 938 performance of the code improvement option, and implementing 816 the response includes invoking 804 the artificial intelligence functionality to perform program code analysis 806 on at least a portion of the particular program code, and the method includes: noting 942 an edit 404 to a source code 314 of the particular program code, the edit based at least in part on a result 306 of the program code analysis; and recommending 918 a change 542 in testing 412 associated with the particular program code, the change in testing configured to check for regression 948 of the edit. Para [0098]. Para [0216], 410 computational performance of a program, e.g., processor cycles used, memory used, bandwidth used, latency, speed, power consumption, and so on. Para [0240], 526 AI-based functionality to proffer a performance 410 optimization; para [0241], 528 computational performance 410 optimization, e.g., reduction in use of computational resources without sacrificing program features 1108 or failing tests 600 that were passed before the optimization. Para [0056-0057], improvement options 212 are always presented to the developer, as opposed to a system 202 proactively making changes 428 without asking the developer first. In other embodiments or configurations, some changes 428 may be made proactively based on a general consent, e.g., as an effect of a previously selected tool setting or an environment variable or a default tool setting. For example, automatic error correction and other forms of linting 532, comment analysis functionality 508, and documentation generation functionality 504 could be enabled with any changes approved in advance by default.
Para [0108], Thus, suitably enhanced tools 310 may present a user 104 with opportunities for improvement in their code 130 at a time when their mindset is that of performing fixes, instead of distracting the user earlier when they are focused on implementation details. This timing matters. Finding certain issues (e.g., issues with program performance 410, program behavioral accuracy 1116, program security 1124, or programming style 1118), and displaying potential fixes during program writing time, instead of following the present teachings, could be very distracting to the program writer.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. Thus, one of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate Obando Chacon into Wu to effectively employ artificial intelligence (AI) functionalities, recognize developer intentions, and promote efficient development efforts. The method promotes efficient software development by automatically recognizing a developer's intention to improve existing code and offering the developer AI-based analyses of the code, as suggested by Obando Chacon (see Abstract and Conclusion).
The Office notes that Obando Chacon also teaches:
a script (Obando Chacon, fig. 1, application 122, program 132, code 130, and para [0023-0024], During a design stage 1106, the main features 1108 and the user experience 1110 of a program are designed. For instance, a design for a camera application 122 might specify that the application can run on a smartphone or a tablet with a graphical user interface that allows a user to remotely activate a camera by wireless communication and also allows the user to download pictures from the camera to the smartphone or tablet. This is merely one example application, from a universe of many possible application programs. During the design stage 1106, a developer's priorities focus on specifying the program's features 1108 and user experience 1110, without regard to any particular programming language, data structure, programming style, source code version control tool, and so on.)
Claim 14 is rejected for the reasons set forth hereinabove for claim 13, Wu and Obando Chacon teach a medium according to Claim 13, wherein the prompt comprises a system prompt and a user prompt, the at least one processing unit to execute the program code to cause the system to (Wu, column 3, line 13-25. Column 3, line 26-28, prompt.):
determine the system prompt from a plurality of system prompts based on the resource consumption data (Wu, column 3, line 13-25, “large language model,” or “LLM” refer to data structures, programs, or the like, that may be trained or designed to perform a variety of natural language processing tasks. Typically, LLMs may generate text responses in response to text based prompts. Often, LLMs may be considered to be neural networks that have been trained on large collections of natural language source documents. Accordingly, in some cases, LLMs may be trained to generate predictive responses based on provided prompts. LLM prompts may include context information, examples, or the like, that may enable LLMs to generate responses directed to specific queries or particular problems that go beyond conventional NLP.).
Claim 15 is rejected for the reasons set forth hereinabove for claim 14, Wu and Obando Chacon teach a medium according to Claim 14, the program code executable by at least one processing unit of a computing system to cause the computing system to (Wu, column 3, line 13-25):
determine context data based on the resource consumption data (Wu, column 3, line 13-25, “large language model,” or “LLM” refer to data structures, programs, or the like, that may be trained or designed to perform a variety of natural language processing tasks. Typically, LLMs may generate text responses in response to text based prompts. Often, LLMs may be considered to be neural networks that have been trained on large collections of natural language source documents. Accordingly, in some cases, LLMs may be trained to generate predictive responses based on provided prompts. LLM prompts may include context information, examples, or the like, that may enable LLMs to generate responses directed to specific queries or particular problems that go beyond conventional NLP. Wu, fig. 11 and column 27, line 17 to 45.); and
populate the system prompt with the context data (Wu, fig. 11 and column 27, line 46 to 60, analysis engines may generate a prompt that includes the event information and provide the initial prompt to a management agent that may submit the prompt to a large language model. Fig. 12 and column 29, line 60 to column 30, line 7.).
Claim 16 is rejected for the reasons set forth hereinabove for claim 13, Wu and Obando Chacon teach a medium according to Claim 13, the program code executable by at least one processing unit of a computing system to cause the computing system to (Wu, column 3, line 13-25):
determine context data based on the resource consumption data (Wu, column 3, line 13-25, “large language model,” or “LLM” refer to data structures, programs, or the like, that may be trained or designed to perform a variety of natural language processing tasks. Typically, LLMs may generate text responses in response to text based prompts. Often, LLMs may be considered to be neural networks that have been trained on large collections of natural language source documents. Accordingly, in some cases, LLMs may be trained to generate predictive responses based on provided prompts. LLM prompts may include context information, examples, or the like, that may enable LLMs to generate responses directed to specific queries or particular problems that go beyond conventional NLP. Wu, fig. 11 and column 27, line 17 to 45.); and
populate the prompt with the context data (Wu, fig. 11 and column 27, line 46 to 60, analysis engines may generate a prompt that includes the event information and provide the initial prompt to a management agent that may submit the prompt to a large language model. Fig. 12 and column 29, line 60 to column 30, line 7.).
Claim 17 is rejected for the reasons set forth hereinabove for claim 13, Wu and Obando Chacon teach a medium according to Claim 13, wherein presentation of the one or more modifications comprises presentation of a description of a code error (Obando Chacon, para [0085-0086], an AI-based functionality 210 runs and suggests an edit to the source code, and the source code is edited accordingly, and the tool 210 recommends changing the testing to avoid regressing 948 that edit. In some embodiments, the response 214 requests 938 performance of the code improvement option, and implementing 816 the response includes invoking 804 the artificial intelligence functionality to perform program code analysis 806 on at least a portion of the particular program code, and the method includes: noting 942 an edit 404 to a source code 314 of the particular program code, the edit based at least in part on a result 306 of the program code analysis; and recommending 918 a change 542 in testing 412 associated with the particular program code, the change in testing configured to check for regression 948 of the edit. Para [0098]. Para [0216], 410 computational performance of a program, e.g., processor cycles used, memory used, bandwidth used, latency, speed, power consumption, and so on. Para [0240] 526 AI-based functionality to proffer a performance 410 optimization; computational [0241] 528 performance 410 optimization, e.g., reduction in use of computational resources without sacrificing program features 1108 or failing tests 600 that were passed before the optimization. Para [0056-0057], improvement options 212 are always presented to the developer, as opposed to a system 202 proactively making changes 428 without asking the developer first. In other embodiments or configurations, some changes 428 may be made proactively based on a general consent, e.g., as an effect of a previously selected tool setting or an environment variable or a default tool setting.
For example, automatic error correction and other forms of linting 532, comment analysis functionality 508, and documentation generation functionality 504 could be enabled with any changes approved in advance by default. Para [0108], Thus, suitably enhanced tools 310 may present a user 104 with opportunities for improvement in their code 130 at a time when their mindset is that of performing fixes, instead of distracting the user earlier when they are focused on implementation details. This timing matters. Finding certain issues (e.g., issues with program performance 410, program behavioral accuracy 1116, program security 1124, or programming style 1118), and displaying potential fixes during program writing time, instead of following the present teachings, could be very distracting to the program writer.).
Claim 18 is rejected for the reasons set forth hereinabove for claim 13, Wu and Obando Chacon teach a medium according to Claim 13, wherein presentation of the one or more modifications comprises presentation of a modified version of the script (Obando Chacon, para [0085-0086], an AI-based functionality 210 runs and suggests an edit to the source code, and the source code is edited accordingly, and the tool 210 recommends changing the testing to avoid regressing 948 that edit. In some embodiments, the response 214 requests 938 performance of the code improvement option, and implementing 816 the response includes invoking 804 the artificial intelligence functionality to perform program code analysis 806 on at least a portion of the particular program code, and the method includes: noting 942 an edit 404 to a source code 314 of the particular program code, the edit based at least in part on a result 306 of the program code analysis; and recommending 918 a change 542 in testing 412 associated with the particular program code, the change in testing configured to check for regression 948 of the edit. Para [0098]. Para [0216], 410 computational performance of a program, e.g., processor cycles used, memory used, bandwidth used, latency, speed, power consumption, and so on. Para [0240] 526 AI-based functionality to proffer a performance 410 optimization; computational [0241] 528 performance 410 optimization, e.g., reduction in use of computational resources without sacrificing program features 1108 or failing tests 600 that were passed before the optimization. Para [0056-0057], improvement options 212 are always presented to the developer, as opposed to a system 202 proactively making changes 428 without asking the developer first. In other embodiments or configurations, some changes 428 may be made proactively based on a general consent, e.g., as an effect of a previously selected tool setting or an environment variable or a default tool setting.
For example, automatic error correction and other forms of linting 532, comment analysis functionality 508, and documentation generation functionality 504 could be enabled with any changes approved in advance by default. Para [0108], Thus, suitably enhanced tools 310 may present a user 104 with opportunities for improvement in their code 130 at a time when their mindset is that of performing fixes, instead of distracting the user earlier when they are focused on implementation details. This timing matters. Finding certain issues (e.g., issues with program performance 410, program behavioral accuracy 1116, program security 1124, or programming style 1118), and displaying potential fixes during program writing time, instead of following the present teachings, could be very distracting to the program writer.).
Inquiry
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUY KHUONG THANH NGUYEN whose telephone number is (571)270-7139. The examiner can normally be reached Monday - Friday 0800-1630.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at (571) 272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUY KHUONG T NGUYEN/ Primary Examiner, Art Unit 2199