Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is the initial Office action based on the application filed on April 10, 2024.
Claims 1-20 are presently pending in the application and have been examined below, of which claims 1, 15, and 20 are presented in independent form.
Allowable Subject Matter
Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim 19 is substantially similar to claim 9 and is objected to under the same rationale.
Claims 10-12 would be considered allowable by virtue of their dependence on claim 9, if claim 9 is rewritten in independent form as indicated above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-7, 13-16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2025/0123946 (hereinafter “Eberlein”) in view of US 2019/0317754 (hereinafter “Mosquera”).
In the following claim analysis, Applicant’s claim limitations are presented in bold text; the Examiner’s explanations, notes, and remarks are enclosed in square brackets; and emphasized portions are underlined.
As to claim 1, Eberlein discloses A data processing system (Eberlein, Fig. 7, ¶ 133, computer-implemented System 700 used to provide computational functionalities) comprising:
a processor (Eberlein, Fig. 7, ¶ 140, The Computer 702 includes a Processor 705), and
a memory storing executable instructions which, when executed by the processor, causes the processor, alone or in combination with other processors (Eberlein, Fig. 7, ¶ 142, The Computer 702 also includes a Memory 707 … Memory 707 can store any data consistent with the present disclosure), to implement:
a united data platform for extracting data from a software release pipeline for specific software (Eberlein, Fig. 6, ¶¶ 125-126, At 602, to create extracted data records, an extract filter is instructed to extract relevant data records from log messages of two runs of a software pipeline … the relevant data records include deployed software application versions and components, error messages, and status information of extracted tasks … the software pipeline includes deploy, test, and production);
a software change insights module to generate insights into changes to the specific software on a per build basis using the extracted data (Eberlein, Fig. 6, ¶ 128, a recommendation engine is instructed to execute a machine-learning model training with the diff records; ¶ 130, based on a later run of the software pipeline, determining that a failure causing the failure-indicator has been corrected; ¶ 131, a change in configuration or version of a software application associated with a correction is identified; ¶ 132, a failure-indicator-solution combination is generated … a recommended solution to the same failure-indicator is recommended; ¶ 108, From a build step [The Examiner notes that Eberlein teaches the extracted diff records and changed parameters are associated with an individual build execution of the software pipeline, thereby, generating software change insights can be on a per-build basis], changed parameters can be extracted).
Eberlein does not appear to explicitly disclose a deployment insights module to generate deployment insights using the extracted data; and a dashboard to organize the generated insights and intelligently route deployment of a build to upgrade the specific software based on the generated insights.
However, in an analogous art to the claimed invention in the field of software development, Mosquera teaches a deployment insights module to generate deployment insights (Mosquera, Fig. 5A, ¶ 61, a continuous deployment pipeline is created for deploying code from the repository to a production release) using the extracted data (Mosquera, ¶ 56, An analytics platform may be used to search server logs to return the number of lines where an adverse event is found … A threshold number of acceptable adverse events may be provided … If the threshold number of adverse events is exceeded, then this may be considered a failure condition; Fig. 4, ¶ 58, The user interface screen shows fields for configuring the data sources for evaluating the canary and also the relevant performance metrics; ¶ 59, an improved and automated method of application creation may be provided); and a dashboard to organize the generated insights and intelligently route deployment of a build to upgrade the specific software based on the generated insights (Mosquera, ¶ 73, a recommendation system that uses the results of the machine learning to recommend one or more actions for a developer to perform at each stage of the software development cycle to most likely lead to a successful deployment and to reduce time to deployment in a production release; Fig. 9, ¶ 74, dashboard 900 with statistics and metrics collected by the continuous deployment platform and its integrations to provide actionable insights regarding the software development lifecycle. … a deployment may be marked as failed if the deployment fails prior to completion. … The seventh chart 907 shows the top manual judgement stages per application. Chart 907 may display a list of applications and associated manual judgment steps in the deployment process for the application. … The ninth chart 905 shows the amount of time spent in each stage, such as development, QA, staging, and canary, per application).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Eberlein with the teaching of Mosquera. The modification would have been obvious because one of ordinary skill in the art would be motivated to improve deployment decision-making based on observed pipeline and runtime performance by combining the teachings of Eberlein and Mosquera.
As to claim 2, the rejection of claim 1 is incorporated. Eberlein as modified further discloses The data processing system of claim 1, wherein the united data platform extracts data from a code source data repository (Mosquera, ¶ 67, a computer code repository may be provided on the computer-readable medium that stores the computer code comprising the continuous deployment pipeline template. The continuous deployment pipeline template may comprise a sequence of stages that define a continuous deployment pipeline), a build data store (Eberlein, ¶ 116, The Extract filter 504 can be configured to extract certain parameters and log messages from various records 506—at a high-level—product configuration, build versions [from a build data store], log messages and monitoring data (for example, alerts) of test runs and of production systems) and a deployment data store in the software release pipeline (Eberlein, ¶ 119, The Pipeline runtime 516 can compute a difference between extracted records when compared with an earlier run of a pipeline (for example, using a Pipeline Run History 518 [from a deployment data store])). The motivation to combine the references is the same as set forth in the rejection of claim 1.
As to claim 3, the rejection of claim 1 is incorporated. Eberlein as modified further discloses The data processing system of claim 1, wherein the software change insights module identifies commits and pull requests (Mosquera, ¶ 53, the new code committed by the developer may be incorporated and launched into a production release; ¶ 74, The fourth chart 906 shows the number of open pull requests over time. … chart 906 may display the duration time of open pull requests) with each build and extracts code changes (Eberlein, ¶ 130, based on a later run of the software pipeline, determining that a failure causing the failure-indicator has been corrected; ¶ 131, a change in configuration or version of a software application associated with a correction is identified; ¶ 132, a failure-indicator-solution combination is generated … a recommended solution to the same failure-indicator is recommended; ¶ 108, From a build step [The Examiner notes that Eberlein teaches the extracted diff records and changed parameters are associated with individual build execution of the software pipeline, thereby, generating software change insights on a per-build basis], changed parameters can be extracted) for each pull request (Mosquera, ¶ 74, The fourth chart 906 shows the number of open pull requests over time). The motivation to combine the references is the same as set forth in the rejection of claim 1.
As to claim 6, the rejection of claim 1 is incorporated. Eberlein as modified further discloses The data processing system of claim 1, wherein the software change insights module comprises a pull request metrics engine to categorize a code change for each pull request in the software release pipeline (Mosquera, ¶ 74, An increase in the number of open pull requests may indicate code changes that are not being deployed fast enough and adding value to the organization. … chart 906 may display the duration time of open pull requests instead of or in addition to the number of open pull requests, which may be indicative of continuous deployment [another code change category] velocity). The motivation to combine the references is the same as set forth in the rejection of claim 1.
As to claim 7, the rejection of claim 6 is incorporated. Eberlein as modified further discloses The data processing system of claim 6, wherein the pull request metrics engine further categorizes a build approval for each build in the software release pipeline (Mosquera, ¶ 61, The continuous integration process typically occurs after a pull request has been submitted, reviewed and approved by a code reviewer, and then merged. After the merge, continuous integration performs automated build and testing of the code). The motivation to combine the references is the same as set forth in the rejection of claim 1.
As to claim 13, the rejection of claim 1 is incorporated. Eberlein as modified further discloses The data processing system of claim 1, further comprising an insights module to support the dashboard and to process administrator queries for insights on a per build basis generated by the software change insights module and deployment insights module (Eberlein, ¶ 118, The Recommender 510 can query a ML model 514 with a set of parameters. The Recommender 510 can be queried by the Developer 502 for insights regarding the ML model 514 or data passed through the Extract filter 504 and Diff filter 508).
As to claim 14, the rejection of claim 13 is incorporated. Eberlein as modified further discloses The data processing system of claim 13, wherein the insights module accepts administrator queries in natural language (Eberlein, ¶ 50, the ML model can be queried to assess test logs for indicators, that production will likely fail if a particular configuration is deployed to production).
As to claim 15, the claim is essentially the same as claim 1 except it sets forth the claimed invention as a method, and it is rejected with the same reasoning as applied hereinabove to claim 1.
As to claim 16, the rejection of claim 15 is incorporated and the claim corresponds to system claim 2. Therefore, it is rejected under the same rationale set forth in the rejection of claim 2.
As to claim 20, the claim is essentially the same as claim 1 except it sets forth the claimed invention as a system, and it is rejected with the same reasoning as applied hereinabove to claim 1. Furthermore, Eberlein as modified teaches the additional claim limitation a database to provide the generated insights (Mosquera, ¶ 33, hierarchical databases, relational databases, post-relational databases, object databases, graph databases, flat files, spreadsheets, tables, trees, and any other kind of database, collection of data, or storage for a collection of data [to provide the generated insights]). The motivation to combine the references is the same as set forth in the rejection of claim 1.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over US 2025/0123946 (hereinafter “Eberlein”) in view of US 2019/0317754 (hereinafter “Mosquera”) and further in view of US 2025/0156161 (hereinafter “Cowan”).
As to claim 4, the rejection of claim 3 is incorporated. Eberlein as modified further discloses a prompt to summarize changes to the specific software based on the extracted code changes (Eberlein, ¶ 29, multiple data sets can be used to prepare a machine learning (ML) model. … The ML model can be trained with the parameters and failures as labeled data to provide a model, which can be queried [prompts] with a set of parameters to assess … the parameters can include changes to build configuration, changes to used software versions (both direct and indirect consumption), log messages from build and run, and/or values extracted from such log messages), but does not appear to explicitly disclose The data processing system of claim 3, wherein the software change insights module comprises a code summarization module to call a number of Large Language Models (LLMs) trained on programming code. However, in an analogous art to the claimed invention in the field of utilizing Large Language Models, Cowan teaches The data processing system of claim 3, wherein the software change insights module comprises a code summarization module to call a number of Large Language Models (LLMs) trained on programming code (Cowan, ¶ 42, the program source code may be used as model training input for large language models that are to be fine-tuned for generating program code).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Eberlein as modified with the teaching of Cowan. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize a computing device to train a machine learning model, utilizing the subdivided portions of the set of code, to obtain a trained model. The computing device may recommend, using the trained machine learning model, optimization code to improve the processing time as the set of code is modified (Cowan, Abstract).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over US 2025/0123946 (hereinafter “Eberlein”) in view of US 2019/0317754 (hereinafter “Mosquera”), in view of US 2025/0156161 (hereinafter “Cowan”), and further in view of US 2020/0097845 (hereinafter “Shaikh”).
As to claim 5, the rejection of claim 4 is incorporated. Eberlein as modified does not appear to explicitly disclose The data processing system of claim 4, wherein the number of LLMs comprises multiple LLMs, each LLM being trained on a different programming language. However, in an analogous art to the claimed invention in the field of utilizing machine learning models, Shaikh teaches The data processing system of claim 4, wherein the number of LLMs comprises multiple LLMs, each LLM being trained on a different programming language (Shaikh, ¶ 28, a plurality of pre-trained machine learning models; a plurality of different source codes; semantics corresponding to each of the plurality of datasets, machine learning models, and source codes).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Eberlein as modified with the teaching of Shaikh. The modification would have been obvious because one of ordinary skill in the art would be motivated to utilize the given input dataset for a particular task, and to find the best matching machine learning models and source codes for performing various tasks on the given input dataset, in order to decrease user cost in terms of time and effort by providing all compatible and potentially useful source codes and pre-trained machine learning models (Shaikh, ¶ 5).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over US 2025/0123946 (hereinafter “Eberlein”) in view of US 2019/0317754 (hereinafter “Mosquera”) and further in view of US 2023/0161882 (hereinafter “Kumar”).
As to claim 8, the rejection of claim 1 is incorporated. Eberlein as modified does not appear to explicitly disclose The data processing system of claim 1, wherein the software change insights module comprises a build insights dashboard to present a summarization of changes being made to the specific software by the software release pipeline. However, in an analogous art to the claimed invention in the field of program analysis, Kumar teaches The data processing system of claim 1, wherein the software change insights module comprises a build insights dashboard to present a summarization of changes being made to the specific software by the software release pipeline (Kumar, ¶ 137, The dashboard can also include an element indicating a number of failed code builds that have occurred for the application).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Eberlein as modified with the teaching of Kumar. The modification would have been obvious because one of ordinary skill in the art would be motivated to dynamically control whether a code build of an application finishes to completion or is terminated prior to completion, and to use a scanning tool to identify a threshold number of vulnerabilities for the application (Kumar, ¶¶ 6 and 9).
Therefore, Eberlein as modified teaches The data processing system of claim 1, wherein the software change insights module comprises a build insights dashboard to present a summarization of changes being made to the specific software by the software release pipeline (Kumar, ¶ 137, The dashboard can also include an element indicating a number of failed code builds that have occurred for the application), as determined using a number of Large Language Models (LLMs) trained on programming code (Shaikh, ¶ 28, a plurality of pre-trained machine learning models; a plurality of different source codes; semantics corresponding to each of the plurality of datasets, machine learning models, and source codes).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over US 2025/0123946 (hereinafter “Eberlein”) in view of US 2019/0317754 (hereinafter “Mosquera”), in view of US 2025/0156161 (hereinafter “Cowan”), and further in view of US 2023/0161882 (hereinafter “Kumar”).
As to claim 17, the rejection of claim 15 is incorporated and the claim corresponds to system claims 3, 4, 6, and 8. Therefore, it is rejected under the same rationale set forth in the rejections of those claims.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over US 2025/0123946 (hereinafter “Eberlein”) in view of US 2019/0317754 (hereinafter “Mosquera”), in view of US 2025/0156161 (hereinafter “Cowan”), in view of US 2023/0161882 (hereinafter “Kumar”), and further in view of US 2020/0097845 (hereinafter “Shaikh”).
As to claim 18, the rejection of claim 17 is incorporated and the claim corresponds to system claim 5. Therefore, it is rejected under the same rationale set forth in the rejection of that claim.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2025/0117212 teaches a system for automated computer software release validation and management.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAXIN WU whose telephone number is (571) 270-7721. The examiner can normally be reached M-F (7 am - 11:30 am; 1:30 - 5 pm).
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wei Mui, can be reached at (571) 272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center to authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/DAXIN WU/
Primary Examiner, Art Unit 2191