Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This office action is in response to the amendment filed on 08/14/2025.
Claims 3 and 18-19 have been amended. Claims 1-20 have been examined.
Response to Arguments
Applicant’s arguments, see Remarks, filed 08/14/2025, with respect to the rejections under 35 U.S.C. 102 and 112(b) have been fully considered and are persuasive. Therefore, those rejections have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Mishra et al. (US 20210216643).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 4, 7-9, 11-17, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Polleri et al. (Pub. No. US 20230267374), hereinafter Polleri in view of Mishra et al. (US 20210216643), hereinafter Mishra.
Regarding Claim 1;
Polleri discloses: a computer implemented method for generating a machine learning system from software packages,
wherein the software packages comprise package specific security vulnerability metadata, the method comprising:
receiving a specification of the machine learning system, (In [0066], At 206, the functionality includes receiving a third input of one or more performance requirements for the machine learning application.) wherein the specification comprises security vulnerability constraints; (In [0104], The model composition engine 132 can receive several other user inputs including a second input identifying a data source for the machine learning architecture and a third input of one or more constraints (e.g., resources, location, security, or privacy) for the machine learning architecture. The model composition engine 132 can generate a plurality of code for the machine learning architecture based at least in part on the selected model, the second input identifying the data source, and the third input identifying the one or more constraints.)
and by comparing the package specific security vulnerability metadata to the security vulnerability constraints; generating the machine learning system [[using the selected software packages]]; (In [0039], The machine learning platform can analyze the identified data and the user provided desired prediction and performance characteristics to select one or more library components and associated API to generate a machine learning application; see also Fig. 3, [0087]: select recommended algorithm, [0090]: check data characteristics. In [0370], a model can be initially selected from available models and the machine learning application can be constructed. A model can be made available after the machine learning application is generated.)
and training the machine learning system using the specification. (In [0094], Evaluate one or more QoS or KPI metrics to determine if the model meets the performance specifications. In [0095], At 318, the functionality includes training the machine learning model with predictions judged against QoS/KPIs, [00143]-[0154], [0283]-[0284], models trained using corresponding ML algorithms based on training data received, an automated composition/ML pipeline platform that receives a user specification and generates/trains ML systems from components).
Polleri does not explicitly disclose, however, Mishra in analogous art discloses: selecting the software packages from a collection of software packages using the specification. (Mishra, [0005], [0029]-[0036], [0063]-[0066]: identifying and evaluating exploitability of software vulnerabilities; identifies a vulnerability and evaluates a level of exploitability of the vulnerability corresponding to a software package prior to installation of the software package on a data processing system based on data collected from a plurality of software vulnerability data sources; determines a confidence level for each respective related alternative software package for resolving the level of exploitability; and, using natural language generation, generates insights based on determined confidence levels and rankings corresponding to calculated exploitability scores of the related alternative software packages. Mishra thus directly describes selecting/ranking alternative software packages using vulnerability metadata (CVE/NVD) and exploitability scoring, and the platform trains the assembled ML application based on data and the user’s specification (performance/functional constraints).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Polleri with Mishra to include selecting the software packages, as taught by Mishra, because it would allow the risk metrics from the apparatus disclosed by Polleri to be used in the apparatus disclosed by Mishra to predict software vulnerabilities. The motivation to combine is to identify and evaluate a vulnerability and a level of exploitability of the vulnerability corresponding to a software package prior to installation of the software package on a data processing system, based on data collected from a plurality of software vulnerability data sources. (Mishra, Abstract).
Regarding Claim 2; Polleri in view of Mishra discloses: The computer implemented method of claim 1, wherein the method further comprises calculating a current security compliance metric of the machine learning system from the package specific security vulnerability metadata of the selected software packages. (In [0071] testing metrics, [0072] training metrics.)
Regarding Claim 4; Polleri in view of Mishra discloses: The computer implemented method of claim 2,
Polleri discloses:
wherein the method further comprises:
receiving a security patch for one of the software packages; (In [0261], Due to the potential risks, issues, and implications of integrating external libraries and code bases into software projects, an organization may include a software architecture authorization system to analyze code integration requests, and to approve or deny such code integration requests based on one or more potential code integration issues, including license compliance or compatibility, security vulnerabilities, costs, further software dependencies, the recency and priority of the software project, the availability of security patches, and the existence of safer alternative libraries.)
and providing a signal responsive to a difference between the projected security compliance metric and the current security compliance metric being below a predetermined threshold. (In [0094], At 316, the functionality includes monitoring values on ongoing basis for QoS/KPI to validate model. In various embodiments, the monitoring engine can evaluate one or more QoS or KPI metrics to determine if the model meets the performance specifications. In various embodiments, the machine learning platform can inform the user of the monitored values and alert the user if the QoS/KPI metrics fall outside prescribed thresholds.)
Regarding Claim 7; Polleri in view of Mishra discloses: The computer implemented method of claim 2,
Polleri discloses:
wherein the method further comprises: receiving an updated software package, wherein the software packages comprise the updated software package; (In [0284], At 1110, the models trained using machine learning or artificial intelligence algorithms in step 1108 may be (optionally) revised (or tuned) based on additional relevant data received from one or more external data sources 1080.)
integrating the updated software package into the machine learning system; (In [0284], updated open source software license terms, one or more software license compatibility matrices, updated security issue data (e.g., known security vulnerabilities, available security patches, etc.))
retraining the machine learning system after integration of the updated software package; (In [0284], the machine learning may be tuned after updating an open source library.)
and recalculating the current security compliance metric of the machine learning system. (In [0371], The performance detector 1812 can evaluate various performance characteristics that can include classification accuracy, various model metrics, generic QoS metrics, or other non-machine learning model related KPIs. The adaptive pipelining composition service 1800 optimizes operation both offline and at run-time.)
Regarding Claim 8; Polleri in view of Mishra discloses: The computer implemented method of claim 2,
wherein the software packages further comprise at least one machine learning metric, wherein the method further comprises constructing an objective function from the at least one machine learning metric and the current security compliance metric, (In [0078], The library components 168 can include metadata that identifies features and functions of each of the library components 168. The technique can determine the one or more library components 168 to select based at least in part on the identified problem received via the second input to achieve the performance metrics of the third input. One or more variables of each of the library components can be adjusted to customize the machine learning model to achieve a solution to the identified problem.)
wherein training the machine learning system comprises optimizing the objective function. (in [0080], the production functions can include at least one of load balancing, fail-over caching, security, test capability, audit function, scalability, predicted performance, training models, predicted power, maintenance, debug function, and reusability. Load balancing refers to the process of distributing a set of tasks over a set of resources (computing units), with the aim of making their overall processing more efficient. Load balancing techniques can optimize the response time for each task, avoiding unevenly overloading compute nodes while other compute nodes are left idle.)
Regarding Claim 9; Polleri in view of Mishra discloses: The computer implemented method of claim 8,
wherein the objective function is a difference between the at least one machine learning metric and the current security compliance metric. (in [0080], Audit function can address the ability the machine learning application can be evaluated against internal controls.)
Regarding Claim 11; Polleri in view of Mishra discloses: The computer implemented method of claim 1, wherein the specification of the machine learning system further comprises a selection from the group consisting of: training data, input specification of the machine learning system, and output specification of the machine learning system. (In [0081], After the machine learning model has been generated it can use the training data to train the machine learning model to the desired performance parameters.)
Regarding Claim 12; Polleri in view of Mishra discloses: The computer implemented method of claim 1,
wherein selecting software packages using the specification (In [0045], Machine learning configuration and interaction with the model composition engine 132 allows for selection of various library components 168 (e.g., pipelines 136 or workflows, micro services routines 140, software modules 144, and infrastructure modules 148) to define implementation of the logic of training and inference to build machine learning applications 112. Different parameters, variables, scaling, settings, etc. for the library components 168 can be specified.)
and by comparing the package specific security vulnerability metadata to the security vulnerability constraints is a filtering process. (In [0284], external data sources 1080 may include updated open source software license terms, one or more software license compatibility matrices, updated security issue data (e.g., known security vulnerabilities, available security patches, etc.), software or computing infrastructure cost data, and the like. In [0371], Fig. 18, The performance detector 1812 can evaluate various performance characteristics that can include classification accuracy, various model metrics, generic QoS metrics, or other non-machine learning model related KPIs.)
Regarding Claim 13; Polleri in view of Mishra discloses: The computer implemented method of claim 1,
wherein the security vulnerability constraints comprise a selection from the group consisting of: a severity of allowed security vulnerabilities and a number of vulnerabilities. (In [0104], The model composition engine 132 can receive several other user inputs including a second input identifying a data source for the machine learning architecture and a third input of one or more constraints (e.g., resources, location, security, or privacy) for the machine learning architecture.)
Regarding Claim 14; Polleri in view of Mishra discloses: The computer implemented method of claim 1,
Polleri discloses:
wherein the software packages comprise a selection from the group consisting of:
an artificial intelligence model, a predictor model, an estimator model, an executable file, an executable script, and a binary file. (in [0054]: tasks, threads …)
Regarding Claim 15; Claim 15 is substantially similar to Claim 1. Therefore, Claim 15 is rejected on the same grounds as Claim 1.
Regarding Claim 16; Claim 16 is substantially similar to Claim 1. Therefore, Claim 16 is rejected on the same grounds as Claim 1.
Regarding Claim 17; Claim 17 is substantially similar to Claim 2. Therefore, Claim 17 is rejected on the same grounds as Claim 2.
Regarding Claim 19; Claim 19 is substantially similar to Claim 4. Therefore, Claim 19 is rejected on the same grounds as Claim 4.
Claims 3 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Polleri, in view of Mishra et al. (US 20210216643), hereinafter Mishra and in view of Chiarelli (Pub. No. US 2021/0312058), hereinafter, Chiarelli.
Regarding Claim 3; Polleri in view of Mishra discloses: The computer implemented method of claim 2,
Polleri in view of Mishra does not explicitly disclose:
wherein the package specific security vulnerability metadata of the selected software packages comprises package specific security vulnerability scores and package specific penalty scores respectively corresponding to the selected software packages, and wherein the calculating the current security compliance metric of the machine learning system comprises: calculating a sum of the package specific security vulnerability scores; calculating a sum of the package specific penalty scores; and multiplying the sum of the package specific security vulnerability scores by the sum of the package specific penalty scores.
However, Chiarelli discloses:
wherein the package specific security vulnerability metadata of the selected software packages comprises package specific security vulnerability scores and package specific penalty scores respectively corresponding to the selected software packages, and wherein the calculating the current security compliance metric of the machine learning system comprises: calculating a sum of the package specific security vulnerability scores; calculating a sum of the package specific penalty scores; and multiplying the sum of the package specific security vulnerability scores by the sum of the package specific penalty scores. (In [0067], risk score is an aggregate of risk scores for vulnerabilities in a collection of vulnerabilities. In some embodiments, risk score generating engine 118 may determine the enterprise risk score by a weighted aggregate of risk scores for vulnerabilities in the collection of vulnerabilities. For example, weights may be associated with each vulnerability based on a business unit, and risk score generating engine 118 may determine the enterprise risk score by applying these weights to the risk scores for the vulnerabilities. The Examiner interprets the weighted aggregate as the sum of the penalty scores.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Polleri and Mishra with Chiarelli to include wherein the package specific security vulnerability metadata comprises a package specific security vulnerability score and a package specific penalty score for each selected software package, and wherein the current security compliance metric of the machine learning system is the sum of the package specific security vulnerability scores for the selected packages multiplied by the sum of the package specific penalty scores for the selected software packages, as taught by Chiarelli, because it would allow the risk metrics from the apparatus disclosed by Polleri and Mishra to be used in the apparatus disclosed by Chiarelli to determine aggregate vulnerability scoring. The motivation to combine is to reduce the baseline calculated risk score. (Chiarelli, [0007]).
Regarding Claim 18; Claim 18 is substantially similar to Claim 3. Therefore, Claim 18 is rejected on the same grounds as Claim 3.
Claims 5 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Polleri, in view of Mishra et al. (US 20210216643), hereinafter Mishra and in view of Jeffery (Pub. No. US 2021/0329033), hereinafter, Jeffery.
Regarding Claim 5; Polleri in view of Mishra discloses: The computer implemented method of claim 4,
Polleri in view of Mishra does not disclose:
wherein the signal causes a selection from the group consisting of: providing a warning signal, providing a notification, and blocking installation of the security patch.
However, Jeffery discloses:
wherein the signal causes a selection from the group consisting of: providing a warning signal, providing a notification, and blocking installation of the security patch. (in [0052], see FIG. 4C. In [0049], a maturity value that deviates a predetermined amount from a certain threshold may trigger stored recommendation notification.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Polleri and Mishra with Jeffery to include wherein the signal causes a selection from the group consisting of: providing a warning signal, providing a notification, and blocking installation of the security patch, as taught by Jeffery, because it would allow the threshold alert from the apparatus disclosed by Polleri and Mishra to be used in the apparatus disclosed by Jeffery to provide a notification recommendation. The motivation to combine is to allow the cognitive machine learning system to determine recommendations based on the maturity values for the entity. (Jeffery, [0048]).
Regarding Claim 20; Claim 20 is substantially similar to Claim 5. Therefore, Claim 20 is rejected on the same grounds as Claim 5.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Polleri, in view of Mishra et al. (US 20210216643), hereinafter Mishra, and in view of Bulut et al. (Pub. No. US 2019/0129705), hereinafter, Bulut.
Regarding Claim 6; Polleri in view of Mishra discloses: The computer implemented method of claim 4,
Polleri in view of Mishra does not explicitly disclose:
wherein the method further comprises installing the security patch responsive to the difference between the projected security compliance metric and the current security compliance metric being above the predetermined threshold.
However, Bulut discloses:
wherein the method further comprises installing the security patch responsive to the difference between the projected security compliance metric and the current security compliance metric being above the predetermined threshold. (In [0054], At 916, it is determined whether the set of group risks is above a defined threshold. If no, method 900 returns to 902. If yes, method 900 proceeds to 918. At 918, one or more critical patches are determined by the system (e.g., by patch management component 102).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Polleri and Mishra with Bulut to include wherein the method further comprises installing the security patch responsive to the difference between the projected security compliance metric and the current security compliance metric being above the predetermined threshold, as taught by Bulut, because it would allow proper preventive measures to be provided against vulnerabilities. (Bulut, [0018]).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Polleri, in view of Mishra et al. (US 20210216643), hereinafter Mishra, and in view of Millar et al. (Pub. No. US 2024/0430273), hereinafter, Millar.
Regarding Claim 10; Polleri in view of Mishra discloses: The computer implemented method of claim 1,
Polleri in view of Mishra does not explicitly disclose:
wherein the method further comprises determining the package specific security vulnerability metadata using a security vulnerability scan module.
However, Millar discloses:
wherein the method further comprises determining the package specific security vulnerability metadata using a security vulnerability scan module. (In [0066], In some embodiments, the vulnerability detection configuration module 112 may configure vulnerability detection of the computer network security system using vulnerability detection configuration data 126 provided by the vulnerability data processing system 120. In [0068], vulnerability detection module 114 may be configured to scan a software application by accessing data from one or more computing devices of the computing resources executing the software application. In [0086], In some embodiments, the datastore 126 of the vulnerability data processing system 120 may comprise storage hardware (e.g., one or more hard drives). The storage hardware may store vulnerability data obtained by the vulnerability data processing system 120. In some embodiments, the datastore 126 may store information indicating anomalous vulnerability data. For example, the datastore 126 may store metadata for files including a label of whether the files were determined to include anomalous vulnerability data.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Polleri and Mishra with Millar to include wherein the method further comprises determining the package specific security vulnerability metadata using a security vulnerability scan module, as taught by Millar, because it would provide a configurable vulnerability detection module and allow storing metadata that connects vulnerability detection data with solution data. (Millar, [0066] and [0077]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure (see PTO-form 892).
Weinstein et al. (US 11422799) – directed to organizing software packages based on identification of unique attributes.
Retna et al. (US 20200202268) – directed to training a machine learning model with the historical risk data and the historical compliance data to generate a structured semantic model, and receiving entity risk data identifying new and existing risks associated with an entity.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Shewaye Gelagay whose telephone number is (571)272-4219. The examiner can normally be reached Monday to Friday 8 A.M. - 4 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy C. Johnson can be reached at (571) 272-2238. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHEWAYE GELAGAY/ Supervisory Patent Examiner, Art Unit 2436