Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 04/30/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Examiner’s Note (EN)
The prior art rejections below cite particular paragraphs, columns, and/or line numbers in the references for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bendert et al. (US20240338184A1) in view of Hulick et al. (US20250265346A1).
Regarding Claim 1, Bendert teaches A computer-implemented method for proactive dependency management in software development projects, comprising: initiating a dependency scan within a version-controlled repository to identify and list project dependencies at predetermined intervals ([0012-0016] "Disclosed in some examples are methods, systems, devices, and machine-readable mediums for a dependency tracking service that automatically identifies and tracks information about dependencies of a software component and provides one or more visualizations displaying that information. The system may identify the dependencies through automated metadata analysis of the software component, behavior analysis of the software component, or source code analysis of the software component. The system may track status of the software component by reference to one or more code management systems, vulnerability reporting systems, or the like. Dependency status may be determined based upon one or more of whether a new version is released, whether a vulnerability exists, whether the dependency is end-of-life, or the like. In some examples, the system may additionally provide recommendations regarding the dependencies. For example, a recommendation to switch to a different dependency based upon a trend in other software components to switch to the dependency, a better performance, or the like. The status may be tracked periodically, via an event-driven architecture, for example. The system may then provide one or more GUIs that provide a visualization of the dependencies and their statuses, among other information. … Dependency identification components may scan the software component periodically or based upon a specified event. Specified events include a request from a user (e.g., on-demand), a change in the software component, a notification of a change in a dependency, or the like. 
… In some examples, an event driven architecture may be used to automatically track status changes in dependencies. For example, a software code repository, a project website, a developer website, or a vulnerability reporting service or the like may send a notification to the dependency tracking service when a dependency status changes. Example status changes include new versions, newly identified vulnerabilities, and the like. In other examples, other methods of identifying changes in dependency status may be utilized, such as using a request/response model, scraping or otherwise interfacing with a software code repository, project website, developer website, vulnerability reporting service, analysis of release notes of a most recent version, and the like. For vulnerabilities, a vulnerability reporting service may notify the dependency tracking system of vulnerabilities. In these examples the dependency tracking system may subscribe to the dependency tracking system for a dependency and may be notified when a vulnerability is identified in that dependency. In other examples, the dependency tracking system may periodically poll the vulnerability system." and [0020] "FIG. 1 illustrates a software dependency tracking environment 100 of some examples of the present disclosure. Developer computing device 125 and one or more computing services may communicate over a network 135. Network 135 may be a local network, such as a Local Area Network (LAN), a network that spans a wider area such as a Wide Area Network (WAN), the Internet, an Intranet, or the like. Code repository service 130 may be a service that implements software source code control, such as storage and backup of software code; version management; access management; software defect management; software component code building services; and the like" [0030] "Dependency management service may also include a dependency tracker component 212. 
Dependency tracker component 212 may register for updates to each of the dependencies identified by the determiner identifier component with a code repository service (such as code repository service 130) or other service").
conducting a health analysis for the project dependencies listed by accessing and utilizing data from a plurality of vulnerabilities databases, the analysis uncovering current security vulnerabilities and assessing a frequency and recency of maintenance updates ([0016] "The dependency tracking system may create one or more health indicators for each dependency based upon the dependency status information. For example, a health indicator may be binary—that is the dependency is healthy or not, tri-nary (e.g., not healthy, healthy, or of intermediate health), or the like. In these examples, the health indicator may be determined based upon one or more specified rules. For example, a healthy indicator may be assigned to a dependency when the dependency version used by the software component is up-to-date with no known vulnerabilities. An unhealthy or intermediate indicator may be assigned when one or both of the dependency of the software component is not up-to-date, or has vulnerabilities. In some examples, multiple health indicators may be combined to produce an overall dependency health indicator for the software component. By rolling up the health of dependencies, one-by-one, the health of the software component's dependencies may be assessed and displayed." [0030] "In addition, the dependency tracker component 212 may determine whether any vulnerabilities exist to one or more of the versions of the dependency. For example, by registering for vulnerability or defect notices from a defect management system (such as defect management system 110). 
In some examples, rather than receive push notifications from the defect management service and/or the code repository service, the dependency tracker component 212 may periodically poll these services" [0023] "In addition, dependency management service 115 may interface with a vulnerability reporting component to determine one or more vulnerabilities that are reported for one or more of the dependencies tracked by the dependency management service 115. Defect management service 110 may be a service where developers, such as developer computing device 125 report and manage defects in their software components or may be a database where vulnerabilities are reported and/or stored, such as a Common Vulnerabilities and Exposures system" and [0012] "The system may track status of the software component by reference to one or more code management systems, vulnerability reporting systems, or the like. Dependency status may be determined based upon one or more of whether a new version is released, whether a vulnerability exists, whether the dependency is end-of-life, or the like")
applying a multifaceted criteria matrix to analyzed dependencies to isolate those that exhibit indicators of potential risk, including known security vulnerabilities and evidence of neglect of updates and maintenance ([0032] "Dependency status and recommendation component 216 may calculate one or more status indicators for one or more of the dependencies identified by the dependency determiner component 210 based upon the status information retrieved by the dependency tracker component 212. For example, based upon whether a new version of the dependency exists, whether a vulnerability has been reported for the version used by the software component, or the like. Status may be a binary status where one binary value means that the dependency is good and another value means the dependency is bad. Good may be indicated, for example, when the dependency is one or more of: up-to-date, has no known vulnerabilities, or the like. Bad may be indicated, for example, when the dependency is not one or more of: up-to-date, has no known vulnerabilities, or the like. Status indicators may be a score, based upon a specified formula that considers the above factors. In some examples, the status may include a testing status of a current version of the dependency. For example, if the software component was tested with the current version. In examples in which the status indicators are a score, points may be assigned based upon whether the dependency is up-to-date, has no known vulnerabilities, or the like. In some examples, for the version points, different points may be added (or subtracted) from the score based upon how close the utilized and/or tested version is to the current version. For example, if the software component has incorporated and/or tested version 1.6 of a dependency, but a version 2.1 is the latest version, fewer points may be given to the software component than if it had incorporated and/or tested version 1.9 of the dependency. In some examples, the status indicator may be converted to a percentage of the total points possible."
[0039-0040] "A trend graph 310 shows a number of dependencies of the software component over time. A dependency status table 315 shows each dependency and a status score in the form of a percentage. Each dependency may be selectable and when selected the information about the dependency (current version; newest available version; vulnerabilities; and the like) may be displayed. In some examples, each dependency may have its box colored based upon the score. For example, a red box means a low score, a vulnerability, an updated version is available or the like; a green box may mean that the software component is using the current version, the dependency does not have a current vulnerability, or the like. As disclosed, vulnerabilities of a dependency may factor into the status score, color of the dependency as displayed in a dashboard, and the like. In some examples, the severity of the vulnerability may also factor into the score, color of the dependency, and the like. A high severity vulnerability may color the dependency red in the dashboard, whereas a medium severity may cause a yellow color, and a low severity may allow the box to stay green.")
aggregating a set of alternative dependencies for identified at-risk dependencies by leveraging public repository analysis to discern commonly adopted replacements, further evaluating the alternative dependencies for compatibility with a technology stack of the software development projects and adherence to security and maintenance standards ([0012] " In some examples, the system may additionally provide recommendations regarding the dependencies. For example, a recommendation to switch to a different dependency based upon a trend in other software components to switch to the dependency, a better performance, or the like." [0035-0036] "In some examples, the dependency status and recommendation component 216 may provide one or more recommendations for managing the dependencies. For example, by suggesting better dependencies. Trends in dependency usages across a plurality of software components managed by the dependency management service may be analyzed to find patterns where a number of software components utilizing a first dependency declines, and a number of software components utilizing a second dependency rises. In these examples, the system may recommend moving from the first to the second dependency. In some examples, in addition to simply matching the increase in usage of the first dependency to the decrease in usage of the second dependency, the function of the dependency may be determined (e.g., using manual input, or via machine-learning) and a second dependency may be recommended only if it is a similar function to the first dependency. In other examples, the second dependency may be recommended if it performs a same function and it has fewer known vulnerabilities, as determined by a defect management service 110. In some examples, the system may automatically test replacement dependencies. 
For example, by scanning the software component code and replacing calls to a first dependency with calls to a second dependency, e.g., by using an AI such as a large language model. The system may automatically run one or more tests of the modified software component to determine whether it works properly (e.g., whether it has additional defects over known defects with the first dependency), and whether the performance is better, worse, or unchanged. The system may report the results to the user" [0054] "In some examples, the machine-learning model may be used to scan the software component code to determine new frameworks or software components as dependencies (e.g., as a replacement for other dependencies). For example, the AI may replace a first dependency with a second dependency in the software component and may automatically test the software component to determine whether it works properly (e.g., whether it has additional defects over known defects with the first dependency), and whether the performance is better, worse, or unchanged. The system may report the results to the user. In these examples, the model may be a large language model (LLM) that may search for specific API calls of the first dependency and replace them with corresponding API calls of a second dependency. In these examples the input prompts may include the API definitions for one or more of the dependencies").
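For illustrative purposes only (forming no part of the cited references or the record), the status-scoring scheme described in the quoted passages (points awarded for version currency and for the absence of known vulnerabilities, converted to a percentage of the total points possible) might be sketched as follows; all function names, point values, and the version-distance formula are hypothetical:

```python
# Hypothetical sketch of the per-dependency status score described in
# Bendert [0032]: points for version currency plus points for having no
# known vulnerabilities, expressed as a percentage of total points.

def version_points(used: tuple, latest: tuple, max_points: int = 50) -> int:
    """Award more points the closer the used version is to the latest
    (illustrative distance formula only)."""
    scalar = lambda v: v[0] * 10 + v[1]          # (major, minor) -> number
    gap = max(scalar(latest) - scalar(used), 0)
    return max(max_points - gap * 5, 0)

def status_score(used: tuple, latest: tuple, known_vulns: int) -> float:
    total_possible = 100
    points = version_points(used, latest)
    points += 50 if known_vulns == 0 else 0      # vulnerability component
    return 100.0 * points / total_possible       # percentage of possible
```

Under this sketch, a component tested against version 1.6 of a dependency whose latest release is 2.1 scores lower than one tested against 1.9, consistent with the worked example in the quoted paragraph.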
However, Bendert does not appear to explicitly teach:
evidence of neglect of updates and maintenance.
Hulick teaches evidence of neglect of updates and maintenance ([0023] "the risk history module 110 analyzes data related to an amount of time that a software developer takes to remediate one or more vulnerabilities in an application. The time between identifying and remediating vulnerabilities can be indicative of the sophistication, team size, or responsiveness of a developer, as well as other organizational practices. The time to remediate vulnerabilities corresponds to risk, as a longer duration of time to remediate a vulnerability may suggest that an enterprise using that application is exposed to risk for a longer period of time, or the developer team lacks sufficient skills to timely remediate an issue. In some embodiments, the time between identifying and remediating vulnerabilities can be measured from a point at which a vulnerability became publicly-known, whereas in other embodiments, the time may be measured from a point of the earliest-known zero-day exploit (which would necessarily not be publicly-available at the time, but may later become known by a software developer or other researchers)" [0012-0013] also [0029] and [0021] "Furthermore, the risk history module 110 may obtain and analyze data that can include various data that is indicated with reference to versions of software applications. For a given software application, data can be obtained that describes a number of issues identified in each version of the software application, as well as other relevant qualitative and/or quantitative information about those issues. The data may be made available by developers or publishers of software, or can be published by researchers, such as software security investigators, governmental agencies, or other such persons or organizations. In some embodiments, the data is obtained from a system or systems that track Common Vulnerabilities and Exposures (CVE), which can include publicly-known information security vulnerabilities and exposures. 
One or more repositories (e.g., database 144 of repository servers 138A-138N, which is depicted and described with reference to FIG. 1 ) can store historical data regarding the identified vulnerabilities in each version of an application. Risk history module 110 may receive instructions to obtain data for a particular application and in response, search one or more network-accessible data stores for any relevant data for the application, including data for each version of the application").
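As a purely illustrative sketch (drawing only on the concept in the quoted passage, with all names hypothetical), the remediation-time measurement Hulick describes in [0023], taken from public disclosure by default or from the earliest-known zero-day exploit when that date is available, could be expressed as:

```python
from datetime import date
from typing import Optional

def days_to_remediate(disclosed: date, remediated: date,
                      earliest_exploit: Optional[date] = None) -> int:
    """Days a developer took to remediate a vulnerability, measured from
    public disclosure by default, or from the earliest-known zero-day
    exploit when that date is known (per the quoted [0023])."""
    start = earliest_exploit if earliest_exploit is not None else disclosed
    return (remediated - start).days
```

A longer remediation window suggests a longer period of exposure, which the reference treats as indicative of greater risk.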
Bendert and Hulick are analogous art because they are from the same field of endeavor, namely vulnerability assessment in software code and dependencies. Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to combine Bendert and Hulick to incorporate Hulick’s extended criteria for evaluating dependencies, including machine learning to predict future vulnerabilities. “According to one embodiment, techniques are provided for analyzing software applications for risk. Data is obtained indicating one or more components of an application and historical data relating to one or more previous versions of the application. The historical data is analyzed to identify one or more vulnerabilities present in the one or more previous versions of the application. A software bill of materials is generated for the application based on the one or more components of the application, wherein the software bill of materials includes risk metadata descriptive of the one or more vulnerabilities of the application. The risk metadata associated with the software bill of materials is analyzed to determine a risk score for the application.” (Hulick, [0010]).
Regarding claim 2, Bendert in view of Hulick teaches the method of claim 1. Hulick further teaches wherein the predetermined intervals are aligned with the initiation of each new build process within a continuous integration/continuous deployment (CI/CD) pipeline ([0027] "SBOM generation module 112 may automatically generate SBOMS by implementing mechanisms such as dependency scanners, build tools, continuous integration/continuous deployment systems, software composition analysis platforms, and the like. Dependency scanners may scan software dependencies and generate SBOMs automatically by cataloging the dependencies included in an application. Likewise, build tools can include plugins or commands that can produce SBOMs as part of the build process. Continuous integration/continuous deployment systems can be configured to automatically generate SBOMs during the build or deployment process, and software composition analysis platforms may include features for generating and managing SBOMs. In some embodiments, the SBOMs are developed and thus automatically generated by another computing device (e.g., a developer device rather than risk analysis server 102); in such cases, SBOM generation module 112 may obtain the SBOMs and modify the SBOMs to include the risk metadata in order to generate SBOMs that indicate risk in accordance with the embodiments presented herein").
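For illustration only, aligning the scan interval with the initiation of each new build, as recited in claim 2 and as the quoted passage describes for CI/CD systems, might look like the following; `StubScanner` and all other names are hypothetical stand-ins for a real software composition analysis tool:

```python
# Hypothetical CI/CD hook: the claimed "predetermined intervals" are
# simply the start of every build, at which point a dependency scan runs.

class StubScanner:
    """Stand-in for a real dependency scanner / SCA platform."""
    def scan(self, repository: str) -> list:
        # A real scanner would enumerate the repository's dependencies.
        return ["libfoo==1.6", "libbar==2.1"]

def on_build_started(build_id: str, repository: str, scanner) -> dict:
    """Run a dependency scan as the first step of every new build."""
    return {"build": build_id, "dependencies": scanner.scan(repository)}
```

The returned catalog of dependencies corresponds to the build-time SBOM generation described in the quoted [0027].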
Regarding claim 3, Bendert in view of Hulick teaches the method of claim 1. Bendert further teaches wherein a dependency is deemed problematic if it meets criteria including a presence of known vulnerabilities or lack of maintenance during a threshold time period ([0012] " Dependency status may be determined based upon one or more of whether a new version is released, whether a vulnerability exists, whether the dependency is end-of-life, or the like. In some examples, the system may additionally provide recommendations regarding the dependencies. For example, a recommendation to switch to a different dependency based upon a trend in other software components to switch to the dependency, a better performance, or the like. The status may be tracked periodically, via an event-driven architecture, for example. The system may then provide one or more GUIs that provide a visualization of the dependencies and their statuses, among other information" [0032] "For example, based upon whether a new version of the dependency exists, whether a vulnerability has been reported for the version used by the software component, or the like. Status may be a binary status where one binary value means that the dependency is good and another value means the dependency is bad. Good may be indicated, for example, when the dependency is one or more of: up-to-date, has no known vulnerabilities, or the like. Bad may be indicated, for example, when the dependency is not one or more of: up-to-date, has no known vulnerabilities, or the like. Status indicators may be a score, based upon a specified formula that considers the above factors. In some examples, the status may include a testing status of a current version of the dependency. For example, if the software component was tested with the current version. 
In examples in which the status indicators are a score, points may be assigned based upon whether the dependency is up-to-date, has no known vulnerabilities, or the like" and [0011] " In an example, a dependency may be labelled end-of-life because it is no longer supported. The support status of various dependencies, especially second level or greater dependencies (e.g., dependencies of dependencies) may be very difficult to find and track. Nevertheless, the status of direct and indirect dependencies may create defects or impact performance of a software component").
Hulick also teaches the threshold time period ([0023] "the risk history module 110 analyzes data related to an amount of time that a software developer takes to remediate one or more vulnerabilities in an application. The time between identifying and remediating vulnerabilities can be indicative of the sophistication, team size, or responsiveness of a developer, as well as other organizational practices. The time to remediate vulnerabilities corresponds to risk, as a longer duration of time to remediate a vulnerability may suggest that an enterprise using that application is exposed to risk for a longer period of time, or the developer team lacks sufficient skills to timely remediate an issue. In some embodiments, the time between identifying and remediating vulnerabilities can be measured from a point at which a vulnerability became publicly-known, whereas in other embodiments, the time may be measured from a point of the earliest-known zero-day exploit (which would necessarily not be publicly-available at the time, but may later become known by a software developer or other researchers)").
Regarding claim 4, Bendert in view of Hulick teaches the method of claim 1. Hulick further teaches further comprising utilizing an artificial intelligence model to predict emerging vulnerabilities based on patterns found in external vulnerabilities databases ([0049] "Operation 508 involves analyzing risk metadata associated with the SBOM to determine a risk score. A rules-based or machine learning approach may analyze the historical data to compute a risk score of the application that is reflective of the application's risk in terms of the history of the application across previous versions of the applications. The risk score can then be compared to a threshold value at operation 510 to determine whether the application satisfies the risk score. If the application indeed satisfies the risk score, than one or more operations may automatically be performed at operation 512, including notifying a user, causing the application to be installed to one or more computing devices, and the like. Otherwise, these operations are not performed at operation 514, and optionally, other operations can instead be performed, such as notifying a user that the application represents an unacceptable level of risk or automatically uninstalling the application from one or more computing devices"; see also [0025-0026] and [0039-0040]).
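The threshold comparison and automatic operations recited in the quoted [0049] might be sketched as follows; this assumes lower scores indicate lower risk, a direction the quoted passage leaves open, and all names are hypothetical:

```python
def act_on_risk(app: str, risk_score: float, threshold: float) -> str:
    """Compare a computed risk score to a threshold and choose an action,
    mirroring the quoted operations 508-514: proceed (e.g., install) when
    the score is acceptable, otherwise notify of unacceptable risk.
    Assumes lower scores mean lower risk."""
    if risk_score <= threshold:
        return f"install {app}"
    return f"alert: {app} presents unacceptable risk"
```

In the reference, the unacceptable-risk branch may also trigger automatic uninstallation from one or more computing devices.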
Regarding claim 5, Bendert in view of Hulick teaches the method of claim 1. Hulick further teaches further comprising generating a ranked list of alternative dependencies by prioritizing alternative dependencies based on a comparison of a frequency of maintenance updates and community endorsements ("… 114 may generate a risk score for each application by processing the risk metadata associated with the SBOM for each version of the application. The risk factors that are analyzed when computing a score can include a count of vulnerabilities, a severity of vulnerabilities, a time between identifying and remediating vulnerabilities, a number of developers assigned to remediating past or present vulnerabilities in the application, and an open-source or closed-source status of the application. A set of predefined rules can be implemented that is used to assign a value for each risk factor, and an overall risk score can be computed by combining the values of each risk factor, which can be independently weighted to increase or decrease the influence of each risk factor on the overall risk score. A risk score can be computed overall for an application by analyzing the risk data for all versions of the application, or each version can be separately scored for risk and can be statistically combined by e.g., averaging the scores for each version. In some embodiments, older versions may be weighted such that those versions' risk has a lesser amount of influence over the overall risk score, as a developer's more recent practices may be deemed to be more relevant.").
Bendert teaches community endorsements ([0024] "In some examples, in addition to showing dependency versions and status, the interfaces may provide information about dependency usage information such as how much a particular dependency is utilized in a component or across all (or a subset of all) components it tracks; trends showing which dependencies (e.g., globally across all software components managed by the dependency management service 115) are used most and whether that usage is increasing or decreasing, and the like" and [0032-0033] "Dependency status and recommendation component 216 may calculate one or more status indicators for one or more of the dependencies identified by the dependency determiner component 210 based upon the status information retrieved by the dependency tracker component 212. For example, based upon whether a new version of the dependency exists, whether a vulnerability has been reported for the version used by the software component, or the like. Status may be a binary status where one binary value means that the dependency is good and another value means the dependency is bad. Good may be indicated, for example, when the dependency is one or more of: up-to-date, has no known vulnerabilities, or the like. Bad may be indicated, for example, when the dependency is not one or more of: up-to-date, has no known vulnerabilities, or the like. Status indicators may be a score, based upon a specified formula that considers the above factors. In some examples, the status may include a testing status of a current version of the dependency. For example, if the software component was tested with the current version. In examples in which the status indicators are a score, points may be assigned based upon whether the dependency is up-to-date, has no known vulnerabilities, or the like. 
In some examples, for the version points, different points may be added (or subtracted) from the score based upon how close the utilized and/or tested version is to the current version. For example, if the software component has incorporated and/or tested version 1.6 of a dependency, but a version 2.1 is the latest version, fewer points may be given to the software component than if it had incorporated and/or tested version 1.9 of the dependency. In some examples, the status indicator may be converted to a percentage of the total points possible. Status indicators of all the dependencies of a given software component may be aggregated to form a score for the entire software component. Each individual status of each component may be weighted and combined to form a total score. In some examples, dependencies may be weighted differently based upon an importance of the dependency. The importance of the dependency may be based upon a degree of the dependency (e.g., a dependency of a dependency (2nd degree dependency) may be weighted lesser than a direct, 1st degree dependency), an importance level input by a user, a usage level of the dependency (e.g., how often the dependency is utilized by the software component as identified automatically by the code or network scanning determiner identifier component), or the like.").
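For illustration only, the degree-weighted roll-up of per-dependency scores into a single component score described in the quoted passage might be sketched as follows; the 1/degree weighting and all names are hypothetical:

```python
def component_score(deps: list) -> float:
    """Weighted roll-up of per-dependency status scores into one
    component score. Each entry: {"score": 0-100, "degree": 1 for a
    direct dependency, 2+ for a dependency-of-dependency}. Transitive
    dependencies are weighted less, per the quoted passage; the
    1/degree formula is illustrative only."""
    def weight(degree: int) -> float:
        return 1.0 / degree
    total_weight = sum(weight(d["degree"]) for d in deps)
    return sum(d["score"] * weight(d["degree"]) for d in deps) / total_weight
```

Here a vulnerable second-degree dependency drags the component score down less than an equally vulnerable direct dependency would.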
Regarding claim 6, Bendert in view of Hulick teaches the method of claim 1. Bendert further teaches wherein the health analysis further includes assessing an impact of potential vulnerabilities on specific functionalities utilized by the software development project ([0010-0012] "For example, a first software component may depend on services provided by an independent second software component through an Application Programming Interface (API). The code of the second software component is not linked into the code of the first software component (either statically or dynamically), yet the second software component is a dependency of the first software component by virtue of the first software component's use of the API because changes in the second software component, and in particular, changes to the API, may impact functionality of the software component. Even if the dependencies are all known, managing those dependencies may be even more difficult. For example, unbeknownst to a particular user, a dependency may have a new version. New versions often include defect fixes, vulnerability fixes, and other desirable features. However, these new versions also pose a risk. For example, the new version may introduce additional defects, may change the API, or the like. These incompatibilities may be difficult for application developers to sort through. For example, the defect may be between two incompatible dependencies. In an example, a dependency may be labelled end-of-life because it is no longer supported. The support status of various dependencies, especially second level or greater dependencies (e.g., dependencies of dependencies) may be very difficult to find and track. Nevertheless, the status of direct and indirect dependencies may create defects or impact performance of a software component. 
Disclosed in some examples are methods, systems, devices, and machine-readable mediums for a dependency tracking service that automatically identifies and tracks information about dependencies of a software component and provides one or more visualizations displaying that information. The system may identify the dependencies through automated metadata analysis of the software component, behavior analysis of the software component, or source code analysis of the software component. The system may track status of the software component by reference to one or more code management systems, vulnerability reporting systems, or the like. Dependency status may be determined based upon one or more of whether a new version is released, whether a vulnerability exists, whether the dependency is end-of-life, or the like. In some examples, the system may additionally provide recommendations regarding the dependencies." [0018] "Other example functionality of the dependency tracking and visualization system includes alerting to dependency mismatches. For example, software components may be assigned various tiers based upon importance. For example, tier 1 may be most important, and tier 5 may be least important. In some examples, the system may determine that a dependency assigned an importance of tier 5 may be a dependency of a tier 1 component. In these examples, a user may be alerted to this issue. In other examples, the dependency may be automatically updated from a tier 5 to a tier 1 to reflect the importance of this dependency to a more important software component. In some examples, the system may provide a dependency score that determines how dependent one software component is to another. For example, based upon how much the dependency is called by the software component, and the like." [0027] "Still yet another determiner identifier component may analyze the software component during execution. For example, by creating an execution environment and executing it.
In still other examples, this determiner identifier component may be linked (e.g., temporarily) into the code of the software component and may monitor the interprocess and network traffic to determine other software components (which may be on other machines) that the software component is contacting. The determiner identifier component may determine the software component dependency by comparing the destination process information or network address with a specified list of dependencies that includes their process information and/or network addresses. The API version may be determined using a latest version, a version currently executing on the computing system (e.g., as determined by the name of the executable, a readme file in a directory of the application on a storage device, comparing a hash of the executable with a specified list of hash values for various versions of the dependency, or the like) or may be determined using one or more fields within the message sent to, or received by, the application over the network." [0035-0036] " In some examples, in addition to simply matching the increase in usage of the first dependency to the decrease in usage of the second dependency, the function of the dependency may be determined (e.g., using manual input, or via machine-learning) and a second dependency may be recommended only if it is a similar function to the first dependency. In other examples, the second dependency may be recommended if it performs a same function and it has fewer known vulnerabilities, as determined by a defect management service 110. In some examples, the system may automatically test replacement dependencies. For example, by scanning the software component code and replacing calls to a first dependency with calls to a second dependency, e.g., by using an AI such as a large language model. 
The system may automatically run one or more tests of the modified software component to determine whether it works properly (e.g., whether it has additional defects over known defects with the first dependency), and whether the performance is better, worse, or unchanged. The system may report the results to the user.").
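For illustration only, and not as part of any reference of record, the impact-assessment concept quoted above (determining whether a known vulnerability affects functionality the project actually uses) could be sketched as follows. All names, fields, and advisory data below are hypothetical:

```python
# Illustrative sketch (hypothetical, not drawn from Bendert or Hulick):
# flag only those advisories whose affected APIs overlap the APIs the
# project actually calls, i.e. vulnerabilities with functional impact.
from dataclasses import dataclass, field

@dataclass
class Dependency:
    name: str
    version: str
    used_apis: set = field(default_factory=set)  # APIs the project calls

def impacted(dep, advisories):
    """Return IDs of advisories whose affected APIs intersect the
    APIs this project uses in the given dependency."""
    hits = []
    for adv in advisories:
        if adv["package"] == dep.name and dep.used_apis & set(adv["affected_apis"]):
            hits.append(adv["id"])
    return hits

dep = Dependency("libfoo", "1.2.0", used_apis={"parse", "render"})
advisories = [
    {"id": "ADV-1", "package": "libfoo", "affected_apis": ["render"]},
    {"id": "ADV-2", "package": "libfoo", "affected_apis": ["compress"]},
]
print(impacted(dep, advisories))  # only ADV-1 touches an API the project uses
```

This sketch shows only the filtering step; a production system would combine it with the metadata, behavior, and source-code analyses described in the quoted passages.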
Regarding claim 7, Bendert in view of Hulick teaches the method of claim 1. Bendert further teaches further comprising automatically implementing selected alternative dependencies in a production environment after validating compatibility, security, and operational integrity of implementing each of the alternative dependencies within a codebase for the software development projects ([0035-0036] " In some examples, in addition to simply matching the increase in usage of the first dependency to the decrease in usage of the second dependency, the function of the dependency may be determined (e.g., using manual input, or via machine-learning) and a second dependency may be recommended only if it is a similar function to the first dependency. In other examples, the second dependency may be recommended if it performs a same function and it has fewer known vulnerabilities, as determined by a defect management service 110. In some examples, the system may automatically test replacement dependencies. For example, by scanning the software component code and replacing calls to a first dependency with calls to a second dependency, e.g., by using an AI such as a large language model. The system may automatically run one or more tests of the modified software component to determine whether it works properly (e.g., whether it has additional defects over known defects with the first dependency), and whether the performance is better, worse, or unchanged. The system may report the results to the user").
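For illustration only, and not as part of any reference of record, the replacement-validation concept quoted above (rewriting calls to a first dependency to a second and adopting the rewrite only if tests pass) could be sketched as follows. The call-rewriting and test-runner functions below are hypothetical simplifications:

```python
# Illustrative sketch (hypothetical, not drawn from Bendert or Hulick):
# validate a candidate replacement dependency by running a supplied test
# check against the rewritten code before adopting it.

def swap_calls(source, old, new):
    """Naively rewrite qualified calls from the old dependency to the new one."""
    return source.replace(old + ".", new + ".")

def validate_replacement(source, old, new, run_tests):
    """Return the rewritten source only if the supplied test runner
    passes on it; otherwise keep the original source unchanged."""
    candidate = swap_calls(source, old, new)
    return candidate if run_tests(candidate) else source

code = "result = libfoo.parse(data)"
# Hypothetical test runner: accept the rewrite only if libfoo is fully removed.
ok = lambda src: "libfoo." not in src
print(validate_replacement(code, "libfoo", "libbar", ok))
```

A real system would invoke an actual test suite and compare performance, as the quoted passage describes, rather than a string check.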
Regarding claim 8, Bendert teaches a system for proactive dependency management in software development projects, comprising: a processor device; and a memory storing instructions that, when executed by the processor device, cause the system to: ([0058] “Machine (e.g., computer system) 600 may include one or more hardware processors, such as processor 602. Processor 602 may be a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof. Machine 600 may include a main memory 604 and a static memory 606, some or all of which may communicate with each other via an interlink (e.g., bus) 608”).
The remaining limitations are similar to claim 1 and are rejected under the same rationale.
Claims 9-14 are system claims reciting limitations similar to claims 2-7 respectively and are rejected under the same rationale.
Regarding claim 15, Bendert teaches a computer program product for dynamic dependency management in software development projects, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a hardware processor to cause the hardware processor to: ([0058] “Machine (e.g., computer system) 600 may include one or more hardware processors, such as processor 602. Processor 602 may be a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof. Machine 600 may include a main memory 604 and a static memory 606, some or all of which may communicate with each other via an interlink (e.g., bus) 608”).
The remaining limitations are similar to claim 1 and are rejected under the same rationale.
Claims 16-20 are medium claims reciting limitations similar to claims 2, 3, 5, 2, and 7, respectively, and are rejected under the same rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
He et al. (Automating Dependency Updates in Practice: An Exploratory Study on GitHub Dependabot): discloses Dependabot’s compatibility score in addition to other functions of Dependabot, such as automatically incorporating updated dependencies.
Rombaut et al. (Leveraging the Crowd for Dependency Management: An Empirical Study on the Dependabot Compatibility Score): discloses the role of crowd sentiment in the calculation of the compatibility score.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR DARWISH whose telephone number is (571)272-4779. The examiner can normally be reached 7:30-5:30 M-Thurs.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emerson Puente can be reached on 571-272-3652. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.E.D./Examiner, Art Unit 2187
/LEWIS A BULLOCK JR/Supervisory Patent Examiner, Art Unit 2199