Prosecution Insights
Last updated: April 19, 2026
Application No. 18/240,450

DISTRIBUTED ARTIFICIAL INTELLIGENCE SOFTWARE CODE OPTIMIZER

Non-Final OA (§101, §103)
Filed: Aug 31, 2023
Examiner: ANYA, CHARLES E
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: BANK OF AMERICA CORPORATION
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average; +26.6% vs TC avg)
727 granted / 891 resolved
Interview Lift: +33.5% (resolved cases with interview)
Typical Timeline: 3y 2m avg prosecution; 41 currently pending
Career History: 932 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 891 resolved cases
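The headline figures above follow directly from the raw counts. A quick sketch reproduces them; note the Tech Center average is back-derived from the stated "+26.6% vs TC avg" delta (an assumption, since the TC figure itself is not shown):

```python
# Reproduce the headline examiner statistics from the raw counts shown above.
granted = 727
resolved = 891

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~81.6%, shown rounded as 82%

# The "+26.6% vs TC avg" delta implies a Tech Center average of roughly:
tc_avg = allow_rate - 0.266
print(f"Implied TC 2100 average allow rate: {tc_avg:.1%}")
```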

Office Action

§101 §103
DETAILED ACTION

Claims 1-20 are pending in this application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The following guidelines illustrate the preferred layout for the specification of a utility application. These guidelines are suggested for the applicant's use.

Arrangement of the Specification

As provided in 37 CFR 1.77(b), the specification of a utility application should include the following sections in order. Each of the lettered items should appear in upper case, without underlining or bold type, as a section heading. If no text follows the section heading, the phrase "Not Applicable" should follow the section heading:

(a) TITLE OF THE INVENTION.
(b) CROSS-REFERENCE TO RELATED APPLICATIONS.
(c) STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT.
(d) THE NAMES OF THE PARTIES TO A JOINT RESEARCH AGREEMENT.
(e) INCORPORATION-BY-REFERENCE OF MATERIAL SUBMITTED ON A COMPACT DISC OR AS A TEXT FILE VIA THE OFFICE ELECTRONIC FILING SYSTEM (EFS-WEB).
(f) STATEMENT REGARDING PRIOR DISCLOSURES BY THE INVENTOR OR A JOINT INVENTOR.
(g) BACKGROUND OF THE INVENTION. (1) Field of the Invention. (2) Description of Related Art including information disclosed under 37 CFR 1.97 and 1.98.
(h) BRIEF SUMMARY OF THE INVENTION.
(i) BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S).
(j) DETAILED DESCRIPTION OF THE INVENTION.
(k) CLAIM OR CLAIMS (commencing on a separate sheet).
(l) ABSTRACT OF THE DISCLOSURE (commencing on a separate sheet).
(m) SEQUENCE LISTING. (See MPEP § 2422.03 and 37 CFR 1.821-1.825. A "Sequence Listing" is required on paper if the application discloses a nucleotide or amino acid sequence as defined in 37 CFR 1.821(a) and if the required "Sequence Listing" is not submitted as an electronic document either on compact disc or as a text file via the Office electronic filing system (EFS-Web).)
In this application the Abstract filed on 08/31/23 is not on a separate sheet (it includes the title). The separate sheet should contain only the Abstract.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 16-20 are directed to non-statutory subject matter. Claim 16 is directed to a “computer program product comprising: a non-transitory computer-readable medium”. The “computer program product comprising: a non-transitory computer-readable medium” is not disclosed in the specification (paragraph 0032) so as to exclude non-statutory embodiments. For instance, the “computer program product comprising: a non-transitory computer-readable medium” is not disclosed so as to exclude a carrier wave, a transmission medium, and the like, and is therefore directed to non-statutory subject matter. Claims 17-20 are rejected for the same reason as claim 16 above. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, 10, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. No. 11,416,224 B1 issued to Kaitha in view of U.S. Pub. No.
2024/0231928 A1 to Clemons et al. As to claim 1, Kaitha teaches a system for intelligent software code optimization, the system comprising: a computing platform including a memory and one or more computing processor devices in communication with the memory; and an Artificial Intelligence (AI) engine (AI Module 104) stored in the memory, executable by at least one of the one or more computing processing devices and including an ML model, the ML model (i) trained to generate software code for a specific task (One or More Outputs 114) (“…In a variety of embodiments, the system 100 may be implemented with modules and sub-modules. For example, the system 100 may include an AI module 104. The AI module 104 can implement a machine learning model used to generate the source code 102 based on one or more inputs. How the machine learning model operates will be described further below. The inputs may include variables, parameters, conditional statements, software templates, or a combination thereof, that together represent the constraints of the software application being developed and the rules of the company or institution for developing the software application…In a variety of embodiments, the inputs may include application specific inputs 106, external inputs 108, revisions 110, and production inputs 112. Based on the inputs, the AI module 104 can generate one or more outputs 114. The outputs 114 can include at least the source code 102. The source code 102 can embody some or all of the inputs. For example, the source code 102 can embody or implement the rules or constraints of the software application being developed. How the AI module 104 generates the outputs 114 will be discussed further below…” Col. 3 Ln. 
27-49) and (ii) configured to: receive first inputs (Application Specific Inputs 106) that define a plurality of currently available hardware resources for executing software code (memory allocation/hardware components) (“…The application specific inputs 106 refer to inputs representing variables, rules, constraints, or other requirements of the software application to be developed. For example, the application specific inputs 106 can include application requirements 122 and dependencies 124. The application requirements 122 refer to application specific features and can include the features and functions that the software application must implement or perform. For example and without limitation, the application requirements 122 can include the descriptions of the specific interfaces the software application must have; the memory allocation and usage requirements for the software application; the classes, methods, or functions the software application must implement or use; safety and security requirements of the software application; what programming languages the software should be implemented in; etc. The dependencies 124 refer to inputs representing what components, systems, or other software or hardware the software application depends on, or must interface with, to perform its functions. For example and without limitation, the dependencies 124 can include descriptions or parameters indicating the hardware components the software application must interface with; what other software applications or application programming interfaces (APIs) the software application must interface with; what external systems the software application must interface with; etc…” Col. 3 Ln. 50-67, Col. 4 Ln. 
1-8), and in response to receiving the first inputs (Application Specific Inputs 106), generate at least one set of software codes (One or More Outputs 114/Source Code 102) for the specific task associated with the ML model (“…In a variety of embodiments, the inputs may include application specific inputs 106, external inputs 108, revisions 110, and production inputs 112. Based on the inputs, the AI module 104 can generate one or more outputs 114. The outputs 114 can include at least the source code 102. The source code 102 can embody some or all of the inputs. For example, the source code 102 can embody or implement the rules or constraints of the software application being developed. How the AI module 104 generates the outputs 114 will be discussed further below…” Col. 3 Ln. 27-49, Col. 6 Ln. 1-46), wherein each set of software codes are specific to at least one of the plurality of currently available hardware resources (Application Specific Inputs 106/memory allocation/hardware components).

Kaitha does not explicitly teach a plurality of Machine Learning (ML) models, and one of the at least one set of software codes is a most optimal set of software codes for completing the specific task, wherein the most optimal is defined by at least one of (i) a rate at which the specific task is completed, and (ii) an accuracy of completing the specific task.

Clemons teaches a plurality of Machine Learning (ML) models (“…Illustratively, the system memory 104 stores a machine learning (ML) model resource manager 130 (“resource manager 130”), an application 132 that includes dynamic ML models 134i (referred to herein collectively as “dynamic ML models 134” and individually as “a dynamic ML model 134”), and an operating system (OS) 140 on which the resource manager 130, the application 132, and the dynamic ML models 134 run.
The application 132 can be any technically feasible type of application, such as an autonomous vehicle application, a mobile device application, or a virtual digital assistant, that uses the dynamic ML models 134. The OS 140 may be, e.g., Linux®, Microsoft Windows®, or macOS®. The resource manager 130 is a module that allocates computational resources to inferencing tasks performed using the dynamic ML models 134 during execution of the application 132, as discussed in greater detail below in conjunction with FIGS. 2-7...” paragraph 0029), and one of the at least one set of software codes (application 132 that includes dynamic ML models 134i (referred to herein collectively as “dynamic ML models 134”)) is a most optimal set of software codes for completing the specific task (tasks 301, 302, 303, and 304), wherein the most optimal is defined by at least one of (i) a rate at which the specific task is completed, and (ii) an accuracy of completing the specific task (For example, the nominal performances could be certain levels of accuracy when the tasks 301, 302, 303, and 304 are performed using corresponding dynamic ML models executing for the equal execution times. In some embodiments, the nominal performances can also be used as target performance requirements for the tasks 301, 302, 303, and 304) (“...FIG. 3 illustrates an exemplar allocation of computational resources to tasks performed using dynamic ML models, according to various embodiments. As shown, the computational resource is execution time for a time period, or quantum. Returning to the autonomous vehicle example, the time period could be the execution time available for processing a single frame of a captured video.
Illustratively, the execution time for the time period has been allocated equally among a task 301 that is performed once using a dynamic ML model, a task 302 that is performed three times using another dynamic ML model, a task 303 that is performed once using yet another dynamic ML model, and a task 304 that is performed once using yet another dynamic ML model. Given the equal allocation of execution times, corresponding nominal performances are achieved when the tasks 301, 302, 303, and 304 are performed. For example, the nominal performances could be certain levels of accuracy when the tasks 301, 302, 303, and 304 are performed using corresponding dynamic ML models executing for the equal execution times. In some embodiments, the nominal performances can also be used as target performance requirements for the tasks 301, 302, 303, and 304…” paragraph 0039).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kaitha with the teaching of Clemons because the teaching of Clemons would improve the system of Kaitha by providing a technique for achieving nominal or accurate performance in accordance with required performance (Clemons paragraph 0039).
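Neither reference gives code for the claim 1 selection criterion, but the idea maps cleanly to a small sketch: among candidate code sets generated for the available hardware, pick the "most optimal" as defined by (i) the rate at which the task completes and (ii) its accuracy. The CodeVariant fields, the example numbers, and the equal weighting are illustrative assumptions, not taken from Kaitha or Clemons.

```python
# Hypothetical sketch of the claim 1 "most optimal" criterion: score each
# candidate code set by a weighted mix of (i) completion rate and
# (ii) accuracy, then keep the best. All names and weights are assumptions.
from dataclasses import dataclass

@dataclass
class CodeVariant:
    hardware: str            # which currently available resource it targets
    tasks_per_second: float  # (i) rate at which the specific task completes
    accuracy: float          # (ii) accuracy of completing the task, 0..1

def most_optimal(variants, rate_weight=0.5, accuracy_weight=0.5):
    """Rank variants by a weighted combination of normalized rate and accuracy."""
    max_rate = max(v.tasks_per_second for v in variants)
    def score(v):
        return (rate_weight * v.tasks_per_second / max_rate
                + accuracy_weight * v.accuracy)
    return max(variants, key=score)

variants = [
    CodeVariant("cpu", tasks_per_second=40.0, accuracy=0.97),
    CodeVariant("gpu", tasks_per_second=90.0, accuracy=0.95),
]
print(most_optimal(variants).hardware)  # the faster GPU variant wins here
```

Shifting the weights toward accuracy would flip the choice, which is exactly the "at least one of (i) … and (ii)" flexibility the claim recites.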

As to claim 9, Clemons teaches the system of Claim 1, wherein each ML model is further configured to: receive second inputs that define one or more newly available hardware resources for executing software code, and in response to receiving the first and second inputs, generate at least one set of software codes for the specific task associated with the ML model, wherein each set of software codes are specific to at least one of the plurality of currently available hardware resources or at least one of the one or more newly available hardware resources and one of the at least one set of software codes is a most optimal set of software codes for completing the specific task (Step 706) (“…The model resource allocator 210 in the resource manager 130 allocates computational resources to tasks performed using the dynamic ML models 134 during a time period based on the available computational resources for the time period and performance requirements associated with tasks, such as a target performance requirement, a minimum performance requirement, and/or a priority associated with each task. As discussed in greater detail below in conjunction with FIG. 7, in some embodiments, the model resource allocator 210 performs a greedy allocation in which the model resource allocator 210 first queries the look-up table 208 and allocates sufficient computational resources to meet the target performance requirements associated with each task performed using the dynamic ML models 134, as indicated by the look-up table 208. If there are available computational resources after such an allocation, the model resource allocator 210 increases the computational resources allocated to the tasks performed using the dynamic ML models 134 in task priority order.
If there are still available computational resources after such an allocation, the model resource allocator 210 allocates the extra computational resources for a later time period…At step 704, the resource manager 130 determines whether, after the allocation of computational resources to the one or more tasks at step 702, there are available computational resources. The available computational resources are additional computational resources that can be allocated after the allocation of computational resources at step 702…If there are available computational resources, then at step 706, the resource manager 130 increases the computational resources allocated to the one or more tasks in task priority order. In the task priority order, higher priority tasks are allocated increased computational resources first. Increasing the allocation of computational resources can improve the performance of such tasks. The priority associated with a given task can generally depend on the task and the application that performs the task. In some embodiments, if two (or more) tasks have the same priority, then the allocation of computational resources is first increased for task(s) that are furthest from associated maximum accurac(ies)…At step 708, the resource manager 130 determines whether, after the allocation at step 706, there are still available computational resources. If there are still available computational resources, then at step 710, the resource manager 130 allocates the extra available computational resources for a later time period…” paragraphs 0036/0048-0050).

As to claims 10 and 16, see the rejection of claim 1 above, except for a non-transitory computer-readable medium. Kaitha teaches a non-transitory computer-readable medium (Non-Transitory Computer Readable Medium). As to claim 15, see the rejection of claim 9 above.

Claims 2, 11 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. No. 11,416,224 B1 issued to Kaitha in view of U.S. Pub.
No. 2024/0231928 A1 to Clemons et al. as applied to claims 1, 10 and 16 above, and further in view of U.S. Pub. No. 2018/0349114 A1 to Brown et al.

As to claim 2, Kaitha as modified by Clemons teaches the system of Claim 1; however, it is silent with reference to wherein each ML model is further configured to: receive the first inputs that define the plurality of currently available hardware resources for executing software code, wherein the plurality of currently available hardware resources include processing units comprising at least one of Central Processing Units (CPUs) and Graphics Processing Units (GPUs), and in response to receiving the first inputs, generate at least one set of software codes for the specific task associated with the ML model, wherein each set of software codes are specific to at least one of the processing units.

Brown teaches wherein each ML model is further configured to: receive the first inputs that define the plurality of currently available hardware resources for executing software code, wherein the plurality of currently available hardware resources include processing units comprising at least one of Central Processing Units (CPUs) and Graphics Processing Units (GPUs), and in response to receiving the first inputs, generate at least one set of software codes for the specific task associated with the ML model (The set of hardware and/or processing requirements 616 may include information indicating that the ML model 610 requires the use of a GPU and/or CPU and/or a cloud computing service for particular operations), wherein each set of software codes are specific to at least one of the processing units (“…As mentioned above in FIGS. 1-4, an existing ML model may be transformed into a transformed model that conforms to a particular model specification. As illustrated, the ML model 610 represents an existing model in a different format from the particular model specification.
The ML model 610 includes ML primitives 612, ML data format 614 of ML data (e.g., ML datasets for purposes such as training, validation, testing, etc.), and a set of hardware and/or processing requirements 616. The ML primitives 612 may include primitives such as entities, properties, matrices, and matrices processing steps that are utilized by the ML model 610. The ML data format 614 may indicate that the ML data is in a format that is encoded serially in memory, or some other data format, etc. The set of set of hardware and/or processing requirements 616 may include information indicating that the ML model 610 requires the use of a GPU and/or CPU and/or a cloud computing service for particular operations…Optionally, the transformed ML model 630 includes one or more required software libraries 660 depending on the set of hardware and/or processing requirements 616 (e.g., GPU processing, cloud computing, etc.) of the ML model 610…” paragraphs 0062/0063).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kaitha and Clemons with the teaching of Brown because the teaching of Brown would improve the system of Kaitha and Clemons by providing a CPU and/or GPU for processing computing instructions according to the specification/requirements. As to claims 11 and 17, see the rejection of claim 2 above.

Claims 3, 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. No. 11,416,224 B1 issued to Kaitha in view of U.S. Pub. No. 2024/0231928 A1 to Clemons et al. as applied to claims 1, 10 and 16 above, and further in view of U.S. Pub. No. 2015/0106384 A1 to GO et al.

As to claim 3, Kaitha as modified by Clemons teaches the system of Claim 1; however, it is silent with reference to wherein each ML model is further configured to: receive second inputs that define at least one Software Development Kit (SDK) version associated with the specific task, and in response to receiving the first and second inputs, generate at least one set of software codes for the specific task associated with the ML model, wherein each set of software codes are specific to the at least one of the plurality of currently available hardware resources and one of the at least one SDK versions.

GO teaches wherein each ML model is further configured to: receive second inputs that define at least one Software Development Kit (SDK) version associated with the specific task, and in response to receiving the first (one or more hardware requirements of an application) and second inputs (a version of a target software development kit of an application), generate at least one set of software codes for the specific task associated with the ML model (application), wherein each set of software codes are specific to the at least one of the plurality of currently available hardware resources and one of the at least one SDK versions (“…As shown at 101, evaluating an application for tablet compatibility may include evaluating, for example, objective requirements associated with the tablet compatibility of the application based on at least one tablet compatibility criterion. Tablet compatibility criteria, as mentioned above, may include any characteristics, qualities, factors and/or features that may help to determine whether an application is compatible with a tablet.
For example, tablet compatibility criterion may be a measurement of whitespace in a screenshot taken of an application, a version of a target software development kit of an application, a resolution of a screenshot taken of an application, a maximum software development kit version of an application, one or more hardware requirements of an application, a supported screen size of an application, a high resolution resource folder in an application, and the like. Tablet compatibility criteria may include any combination of tablet compatibility criterion…As an example of an implementation of the disclosed subject matter, a tablet application classifier may evaluate applications (such as a weather application, a traffic application, and a photo editing application) for tablet compatibility and quality. The tablet application classifier may analyze each application based on tablet compatibility criteria such as the presence of tablet-specific screenshots, the SDK version to which the application is targeted, the maximum SDK version that the application can support, telephony requirements by the application, the target screen sizes of the application, the pixel density of the application's graphics, and the percent of whitespace in the tablet-specific screenshots. Based on these criteria, the tablet application classifier may assign a tablet compatibility score to each application…” paragraphs 0015/0032). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kaitha and Clemons with the teaching of GO because the teaching of GO would improve the system of Kaitha and Clemons by providing a comprehensive package of tools, libraries, documentation, code samples, and APIs that helps developers build applications for a specific platform (like Android, iOS, Windows) or integrate complex features (like payments, ads, maps) quickly, essentially providing all necessary “building blocks” to create software without starting from scratch, thereby accelerating development and ensuring functionality. As to claims 12 and 18, see the rejection of claim 3 above.

Claims 4, 5, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. No. 11,416,224 B1 issued to Kaitha in view of U.S. Pub. No. 2024/0231928 A1 to Clemons et al. as applied to claims 1, 10 and 16 above, and further in view of U.S. Pub. No. 2023/0359458 A1 to Evans et al.

As to claim 4, Kaitha as modified by Clemons teaches the system of Claim 1; however, it is silent with reference to wherein each ML model is further configured to: receive second inputs that define user-specified constraints for determining the most optimal set of software codes, wherein the user-specified constraints define at least one of (i) required time constraints for completing the specific task, (ii) required accuracy constraints for completing the specific task, and (iii) resource acquisition constraints for acquiring at least one of the currently available hardware resources.
Evans teaches wherein each ML model is further configured to: receive second inputs that define user-specified constraints for determining the most optimal set of software codes (user), wherein the user-specified constraints define at least one of (i) required time constraints for completing the specific task, (ii) required accuracy constraints for completing the specific task, and (iii) resource acquisition constraints for acquiring at least one of the currently available hardware resources (“…A set of attributes may be determined for an identified ML model, including, but not limited to, one or more model parameters, an indication as to a training data source, a type inference, and/or a set of hyperparameters. As another example, a layer hyperparameter, an activation function, and/or other architectural features for the ML model may be programmatically defined (e.g., as may be specified via one or more lambda functions and/or code attributes in a source file). In some instances, it may be determined whether the ML model should be trained locally or using a remote training service and/or whether the ML model should be trained using a central processing unit (CPU) and/or graphics processing unit (GPU). Such attributes may be automatically determined based on the content of software code, based on a set of preferences (e.g., as may be pre-defined defaults by an IDE and/or specified by a user), or received as user input (e.g., in response to one or more prompts or via a user interface of an IDE), among other examples…The ML model may be trained according to a set of constraints, which may be determined from software code, according to a set of defaults, and/or according to one or more user preferences. 
Example constraints include, but are not limited to, a model latency, memory performance, task accuracy, an amount of CPU, TPU, and/or GPU hours, a set of hardware with which to perform training (e.g., a generation, model, or type of GPU), and/or a maximum training cost. In examples where it is determined that training is unable to satisfy the set of constraints, one or more candidate models (e.g., that satisfy a subset of constraints and/or are closest to satisfying a constraint) may be presented for selection by a user or, as another example, a candidate model may be automatically selected…” paragraphs 0019/0022/0024).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kaitha and Clemons with the teaching of Evans because the teaching of Evans would improve the system of Kaitha and Clemons by providing an Integrated Development Environment (IDE) that allows a user or client to manage and control computing resource allocation.

As to claim 5, Kaitha as modified by Clemons teaches the system of Claim 4; however, it is silent with reference to wherein each ML model is further configured to: in response to receiving the first and second inputs, generate at least one set of software codes for the specific task associated with the ML model, wherein the at least one set of software codes is one specific set of software codes that is the most optimal set of software codes for completing the task, wherein the most optimal is further defined by the user-specified constraints.
Evans teaches wherein each ML model is further configured to: in response to receiving the first and second inputs, generate at least one set of software codes for the specific task associated with the ML model (user), wherein the at least one set of software codes is one specific set of software codes that is the most optimal set of software codes for completing the task, wherein the most optimal is further defined by the user-specified constraints (“…A set of attributes may be determined for an identified ML model, including, but not limited to, one or more model parameters, an indication as to a training data source, a type inference, and/or a set of hyperparameters. As another example, a layer hyperparameter, an activation function, and/or other architectural features for the ML model may be programmatically defined (e.g., as may be specified via one or more lambda functions and/or code attributes in a source file). In some instances, it may be determined whether the ML model should be trained locally or using a remote training service and/or whether the ML model should be trained using a central processing unit (CPU) and/or graphics processing unit (GPU). Such attributes may be automatically determined based on the content of software code, based on a set of preferences (e.g., as may be pre-defined defaults by an IDE and/or specified by a user), or received as user input (e.g., in response to one or more prompts or via a user interface of an IDE), among other examples…The ML model may be trained according to a set of constraints, which may be determined from software code, according to a set of defaults, and/or according to one or more user preferences. Example constraints include, but are not limited to, a model latency, memory performance, task accuracy, an amount of CPU, TPU, and/or GPU hours, a set of hardware with which to perform training (e.g., a generation, model, or type of GPU), and/or a maximum training cost. 
In examples where it is determined that training is unable to satisfy the set of constraints, one or more candidate models (e.g., that satisfy a subset of constraints and/or are closest to satisfying a constraint) may be presented for selection by a user or, as another example, a candidate model may be automatically selected…” paragraphs 0019/0022/0024).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kaitha and Clemons with the teaching of Evans because the teaching of Evans would improve the system of Kaitha and Clemons by providing an Integrated Development Environment (IDE) that allows a user or client to manage and control computing resource allocation. As to claims 13 and 19, see the rejection of claims 4 and 5 above.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. No. 11,416,224 B1 issued to Kaitha in view of U.S. Pub. No. 2024/0231928 A1 to Clemons et al. as applied to claim 1 above, and further in view of U.S. Pub. No. 2020/0313987 A1 to Hintermeister.

As to claim 6, Kaitha teaches the system of Claim 1, wherein each ML model is further configured to: receive first inputs that define the plurality of currently available hardware resources for executing software code (memory allocation/hardware components) (“…The application specific inputs 106 refer to inputs representing variables, rules, constraints, or other requirements of the software application to be developed. For example, the application specific inputs 106 can include application requirements 122 and dependencies 124. The application requirements 122 refer to application specific features and can include the features and functions that the software application must implement or perform.
For example and without limitation, the application requirements 122 can include the descriptions of the specific interfaces the software application must have; the memory allocation and usage requirements for the software application; the classes, methods, or functions the software application must implement or use; safety and security requirements of the software application; what programming languages the software should be implemented in; etc. The dependencies 124 refer to inputs representing what components, systems, or other software or hardware the software application depends on, or must interface with, to perform its functions. For example and without limitation, the dependencies 124 can include descriptions or parameters indicating the hardware components the software application must interface with; what other software applications or application programming interfaces (APIs) the software application must interface with; what external systems the software application must interface with; etc…” Col. 3 Ln. 50-67, Col. 4 Ln. 1-8). Hintermeister teaches wherein the currently available hardware resources are further defined as chosen from the group consisting of (i) commercially available hardware resources (“…Metering and Pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses…” paragraph 0084), and (ii) inventoried hardware resources (“…Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. 
There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter)…” paragraph 0067). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kaitha and Clemons with the teaching of Hintermeister because the teaching of Hintermeister would improve the system of Kaitha and Clemons by providing metering and pricing services to track resources as they are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources (Hintermeister paragraph 0084).

Claims 7, 8, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. No. 11,416,224 B1 issued to Kaitha in view of U.S. Pub. No. 2024/0231928 A1 to Clemons et al. as applied to claims 1, 10 and 16 above, and further in view of U.S. Pat. No. 9,201,632 B2 issued to Somani et al.

As to claim 7, Kaitha teaches the system of Claim 1, wherein each ML model is configured to: generate at least one set of software codes for the specific task associated with the ML model, wherein the at least one set of software codes is a plurality of sets of software codes (“…In a variety of embodiments, the inputs may include application specific inputs 106, external inputs 108, revisions 110, and production inputs 112. Based on the inputs, the AI module 104 can generate one or more outputs 114. The outputs 114 can include at least the source code 102. The source code 102 can embody some or all of the inputs. For example, the source code 102 can embody or implement the rules or constraints of the software application being developed. How the AI module 104 generates the outputs 114 will be discussed further below…” Col. 3 Ln. 27-49, Col. 6 Ln. 1-46).
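The candidate-selection behavior Evans's quoted passages describe (return a model that satisfies the full set of constraints; if none does, fall back to the candidate closest to satisfying them) can be sketched as follows. This is a minimal illustration only; all names (`select_candidate`, the constraint keys, the example values) are hypothetical and do not come from the cited references.

```python
# Hypothetical sketch of Evans-style constraint-based candidate selection:
# prefer a candidate that satisfies every constraint; otherwise fall back
# to the candidate violating the fewest constraints. Names are illustrative.

def select_candidate(candidates, constraints):
    """candidates: list of dicts of measured attributes;
    constraints: dict mapping attribute name -> maximum allowed value."""
    def violations(c):
        # count how many constraints this candidate exceeds
        return sum(1 for key, limit in constraints.items()
                   if c.get(key, float("inf")) > limit)

    satisfying = [c for c in candidates if violations(c) == 0]
    if satisfying:
        return satisfying[0], True   # fully satisfies the constraint set
    # no candidate satisfies all constraints: return the closest one
    return min(candidates, key=violations), False

candidates = [
    {"name": "model_a", "latency_ms": 40, "memory_mb": 900, "training_cost_usd": 120},
    {"name": "model_b", "latency_ms": 15, "memory_mb": 300, "training_cost_usd": 60},
]
constraints = {"latency_ms": 20, "memory_mb": 512, "training_cost_usd": 100}
best, satisfied = select_candidate(candidates, constraints)
```

Here `model_b` meets every constraint, so it is returned with `satisfied` set to True; if no candidate qualified, the least-violating one would be returned instead, mirroring Evans's automatic fallback selection.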
Somani teaches a regression test engine (regression tests) stored in the memory, executable by at least one of the one or more computing processor devices and configured to: perform regression testing on each of the plurality of sets of software codes to determine which one of the plurality of sets of software codes is the most optimal (“…Instead, presuming that each project build in the validation environment 410 was successful, the validation environment 410 may then conduct one or more tests on the resulting projects (operation 510), possibly including, but not limited to, unit tests, compatibility tests, performance tests, and regression tests, as discussed above. If any of the tests (or at least some of the more important tests) are unsuccessful (operation 512), the validation environment 410 may inform the user via the development environment 402 of the errors or faults encountered during its testing of the revised projects (operation 514). The user may then modify the source files A1.src, A2.src further in an attempt to eliminate the errors…If, instead, the tests executed in the validation environment 410, or some minimum set of the tests, were successful, the validation environment 410 may generate one or more new libraries (and possibly associated dependency metadata 418) (operation 516), and associate the new libraries with a new checkpoint. In the example of FIG. 5A, the validation environment 410 may combine the compiled code derived from the modified source files A1.src (MOD), A2.src (MOD) with the remaining binary code in the associated library A to generate a new version of the library A (labeled library A′ in FIG. 5A) associated with a new checkpoint CP2. The validation environment 410 may then publish the new library A′ (designated in FIG. 5A by way of vertical line segments associated with the new checkpoint CP2) to the binary repository 404 (operation 518). 
Also, the validation environment 410 may also publish new dependency metadata 418 and the new checkpoint CP2 to the dependency metadata system 408. In one example, the validation environment 410 may also inform the developer via the development environment 402 that the tests were successful, and that the new checkpoint CP2 was generated…” Col. 8 Ln. 38-58). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kaitha and Clemons with the teaching of Somani because the teaching of Somani would improve the system of Kaitha and Clemons by providing a type of software testing that re-runs existing tests to ensure that recent code changes, like bug fixes or new features, have not negatively impacted or broken previously working functionality, acting as a crucial quality check to maintain software stability and integrity.

As to claim 8, Kaitha as modified by Clemons teaches the system of Claim 7; however, it is silent with reference to wherein each ML model is further configured to: learn, over time, by receiving results of regression testing including which one of the plurality of sets of software codes is the most optimal. Somani teaches wherein each ML model is further configured to: learn, over time, by receiving results of regression testing including which one of the plurality of sets of software codes is the most optimal (“…Instead, presuming that each project build in the validation environment 410 was successful, the validation environment 410 may then conduct one or more tests on the resulting projects (operation 510), possibly including, but not limited to, unit tests, compatibility tests, performance tests, and regression tests, as discussed above.
If any of the tests (or at least some of the more important tests) are unsuccessful (operation 512), the validation environment 410 may inform the user via the development environment 402 of the errors or faults encountered during its testing of the revised projects (operation 514). The user may then modify the source files A1.src, A2.src further in an attempt to eliminate the errors…If, instead, the tests executed in the validation environment 410, or some minimum set of the tests, were successful, the validation environment 410 may generate one or more new libraries (and possibly associated dependency metadata 418) (operation 516), and associate the new libraries with a new checkpoint. In the example of FIG. 5A, the validation environment 410 may combine the compiled code derived from the modified source files A1.src (MOD), A2.src (MOD) with the remaining binary code in the associated library A to generate a new version of the library A (labeled library A′ in FIG. 5A) associated with a new checkpoint CP2. The validation environment 410 may then publish the new library A′ (designated in FIG. 5A by way of vertical line segments associated with the new checkpoint CP2) to the binary repository 404 (operation 518). Also, the validation environment 410 may also publish new dependency metadata 418 and the new checkpoint CP2 to the dependency metadata system 408. In one example, the validation environment 410 may also inform the developer via the development environment 402 that the tests were successful, and that the new checkpoint CP2 was generated…” Col. 8 Ln. 38-58). 
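The validation flow in the Somani passages quoted above (run a test suite against a rebuilt project; on success publish a new checkpoint, on failure report the errors back to the developer) can be sketched as follows. All names here (`validate_build` and its parameters) are hypothetical illustrations, not drawn from the cited patent.

```python
# Hypothetical sketch of the Somani-style validation flow: run tests on a
# rebuilt project; publish a new checkpoint on success, report failures
# otherwise. Names are illustrative, not from the cited reference.

def validate_build(build_id, tests, publish, report):
    """tests: list of (name, callable) pairs; each callable returns True on pass."""
    failures = [name for name, test in tests if not test()]
    if failures:
        report(build_id, failures)   # inform the developer of failing tests
        return None                  # no checkpoint is generated
    return publish(build_id)         # publish new library and checkpoint

# Success path: all tests pass, so a checkpoint is published.
published = []
checkpoint = validate_build(
    "library_A_prime",
    tests=[("unit", lambda: True), ("regression", lambda: True)],
    publish=lambda b: (published.append(b), "CP2")[1],
    report=lambda b, fails: None,
)

# Failure path: a regression test fails, so the error is reported instead.
errors = []
failed = validate_build(
    "library_B",
    tests=[("regression", lambda: False)],
    publish=lambda b: "CP3",
    report=lambda b, fails: errors.append((b, fails)),
)
```

The two calls exercise both branches Somani describes: the first publishes checkpoint "CP2" for the rebuilt library, while the second produces no checkpoint and routes the failing test names back to the developer.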
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kaitha and Clemons with the teaching of Somani because the teaching of Somani would improve the system of Kaitha and Clemons by providing a type of software testing that re-runs existing tests to ensure that recent code changes, like bug fixes or new features, have not negatively impacted or broken previously working functionality, acting as a crucial quality check to maintain software stability and integrity.

As to claims 14 and 20, see the rejection of claim 7 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Pat. No. 10,567,213 B1 issued to Braverman et al. and directed to systems and methods for receiving requests for executing specific tasks, analyzing current computational resources available for executing the tasks, and selecting code segments for executing the tasks. U.S. Pub. No. 2020/0387807 A1 to Stassen and directed to a method that involves evaluating computer-readable instructions that utilize a trained machine learning model during execution of the computer-readable instructions on a resource-constrained device. U.S. Pub. No. 2019/0050751 A1 to Assem et al. and directed to a method for generating a potential configuration for hardware resources of the system and determining whether the potential configuration satisfies accuracy and time constraints for a selected machine learning model. U.S. Pub. No. 2011/0099532 A1 to Coldicott et al. and directed to a system for automatically creating a desired software application design. U.S. Pat. No. 12,197,895 B2 issued to Buesser et al. and directed to a method for facilitating a process to assist in code writing, such as by employing machine learning (ML). C.N. No. 12,106,081 A1 to Chai et al. and directed to a development platform and associated software development kits. U.S. Pub. No.
2019/0114672 A1 to Chien et al. and directed to software development kits (SDKs) including machine learning configured to train a model using machine learning.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES E ANYA whose telephone number is (571)272-3757. The examiner can normally be reached Mon-Fri, 9-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KEVIN YOUNG, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES E ANYA/
Primary Examiner, Art Unit 2194

Prosecution Timeline

Aug 31, 2023: Application Filed
Jan 10, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591471: KNOWLEDGE GRAPH REPRESENTATION OF CHANGES BETWEEN DIFFERENT VERSIONS OF APPLICATION PROGRAMMING INTERFACES
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12591455: PARAMETER-BASED ADAPTIVE SCHEDULING OF JOBS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12585510: METHOD AND SYSTEM FOR AUTOMATED EVENT MANAGEMENT
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579014: METHOD AND A SYSTEM FOR PROCESSING USER EVENTS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12572393: CONTAINER CROSS-CLUSTER CAPACITY SCALING
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview (+33.5%): 99%
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 891 resolved cases by this examiner. Grant probability derived from career allow rate.
