Prosecution Insights
Last updated: April 19, 2026
Application No. 18/621,927

System and method for mitigating vulnerabilities associated with open-source software components in source code

Status: Final Rejection §103
Filed: Mar 29, 2024
Examiner: MUNGUIA, DUILIO
Art Unit: 2497
Tech Center: 2400 (Computer Networks)
Assignee: BANK OF AMERICA CORPORATION
OA Round: 2 (Final)
Grant Probability: 100% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (above average; 5 granted / 5 resolved; +42.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 30 total applications across all art units; 25 currently pending

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 69.3% (+29.3% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Based on career data from 5 resolved cases; deltas are relative to Tech Center average estimates.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Final Office Action is in response to the amendments filed on 11/17/2025, in which claims 1, 8, and 15 have been amended, no claims have been cancelled, and claims 1-20 remain pending in the application. The amendment filed 11/17/2025 has been entered.

Response to Arguments

Applicant's amendments to independent claims 1, 8, and 15 and the accompanying arguments regarding the rejections under 35 U.S.C. § 103, filed 11/17/2025, have been carefully considered and are persuasive. However, upon further consideration, the arguments are moot in view of newly found prior art. With respect to Applicant's arguments for the remaining dependent claims 2-7, 9-14, and 16-20 on pages 15-17 of the remarks, Applicant relies on the newly added amendments to independent claims 1, 8, and 15. Please see the examiner's response above and the details of the rejection below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 5-8, 12-15, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Boulton et al. (US-20200167476-A1, hereafter Boulton) in view of Brobov et al. (US-20230021226-A1, hereafter Brobov), in view of Sun et al. (US-20230021226-A1, hereafter Sun), in further view of Chiarelli et al. (US-20210312058-A1, hereafter Chiarelli).

Regarding claim 1

Boulton discloses a system for mitigating vulnerabilities associated with open-source software components in source code (see Boulton par.003: "FIG. 1 is a schematic diagram showing an example communication system that analyzes OSS components of a software code, according to an implementation."), comprising:

a memory configured to store a list of open-source software components (see Boulton par.0013: "The OSS depository 124 represents an application, a set of applications, software, software modules, hardware, or any combination thereof that can be configured to store OSS components.", further in par.[0063]); and

a processor, operably coupled to the memory (see Boulton par.0047: "The computer 302 includes a processor 305. Although illustrated as a single processor 305 in FIG. 3,", par.0048: "The computer 302 also includes a memory 306 that holds data for the computer 302. Although illustrated as a single memory 306 in FIG. 3,"), and configured to:

receive a request to determine whether source code comprises any open-source software components (see Boulton par.0007: "a software developer can submit software code to a software service platform that is operated by a software provider.
The software code can be executed on the software service platform to provide software services to user devices.", par.0021: "The example method 200 begins at 202, where a software code is scanned to determine whether the software code includes one or more OSS components."). The examiner interprets that the software service platform receives a request from a software developer;

wherein the request comprises information related to a repository where the source code is available (see Boulton par.: "The software developer device 160 represents an application, a set of applications, software, software modules, hardware, or any combination thereof that can be configured to submit software code to the software service platform 120. The software code can also be executed on the software service platform 120 to provide software service to the client device 102."). The examiner interprets that the repository where the source code is available is represented as the software developer device 160;

scan the first portion of the source code, wherein scanning the first portion of the source code comprises identifying one or more code patterns associated with an open-source software component from among the list of open-source software components (see Boulton par.0024: "the software service platform can maintain a list of keywords that correspond to different OSS projects. The software service platform can identify an OSS component in the software code by matching the text strings the software code with these keywords. Example of the keywords can include words or a string of characters indicating one or more following characters of an OSS projects: network addresses, files paths, file names, package names, constants, logging statements, output notification… The software service platform can scan the software code from the beginning of the software code (forward scanning), from the end of the software code (backward scanning), or from both the beginning and the end of the software code (parallel scanning) to search for text strings that match the keywords."). The examiner interprets the list of keywords that correspond to different open-source software as the patterns associated with an open-source component;

determine, based at least in part upon an identity of the open-source software component, a software version of the open-source software component (see Boulton par.0027: "factor can be an update duration. OSS is more likely to be vulnerable to attacks if it has been released in public for a long time without being updated. The update duration can be the duration from the date that the most recent version of the OSS component is committed to the present time, the date that the most recent patch for the OSS component is released to the present time, or a combination of both.", par.0029: "the duration information, e.g., the release date, the patch date, and the first published date, of the OSS component are published by the OSS project that develops the OSS component. The duration information can be included in metadata, manifest, or other supplemental files that are included or associated with the OSS component. The software service platform can obtain the duration information."). The examiner notes that the software service platform can obtain the software version of the open-source software component by retrieving the duration information;

determine, based at least in part upon the determined software version of the open-source software component, a temporal gap factor associated with the open-source software component (see Boulton par.0027: "Another example factor can be an update duration.", par.0028: "In one implementation, the update duration factor is calculated by dividing the update duration (e.g., in unit of days) by the existence duration (e.g., in units of days) of the OSS component."), wherein the temporal gap factor indicates how far behind the software version of the open-source software component is with respect to a latest software version of the open-source software component (see Boulton par.0027: "OSS is more likely to be vulnerable to attacks if it has been released in public for a long time without being updated. The update duration can be the duration from the date that the most recent version of the OSS component is committed to the present time, the date that the most recent patch for the OSS component is released to the present time, or a combination of both.");

determine that the assigned temporal gap score is more than the first threshold score (see Boulton par.0033: "At 206, the software service platform can compare the security score of the OSS component with a threshold to determine whether the OSS component meets the security policy. In one example, a higher security score indicates a higher security risk."). The examiner interprets that the temporal gap score is more than the first threshold score when a higher security score is assigned; and

in response to determining that the temporal gap score is more than the first threshold score (see Boulton par.0034: "if the software service platform determines that at least one OSS component included in the software code does not meet the security policy, the software service platform can prevent the software code from being compiled.", par.0034: "In response, the software service platform can send a notification to the software development device. The notification can indicate the OSS components that fail to meet the security policy."):

identify a most recent version of the open-source software component that is associated with less than a threshold number of security vulnerabilities (see Boulton par.0036: "For some OSS components, the OSS security database can also include information of replacement OSS components that have similar functionality. The replacement OSS components can have better security scores than the corresponding OSS component. If the software service platform determines that an OSS component fails to meet a security policy, the software service platform can query the OSS security database to find one or more replacement OSS components, and include the information of replacement OSS components in the notification to the software development device."). The examiner interprets that the software platform is able to identify replacement OSS components that have similar functionality; the replacement OSS components (the most recent version) can have better security scores than the corresponding OSS component, that is, fewer security vulnerabilities compared with the examined open-source component's threshold; and

implement the identified most recent version of the open-source software component in the source code (see Boulton par.0037: "The software service platform can also select the replacement OSS components based on the assessed security score, e.g., by picking the OSS components have the best score as a replacement OSS component.", par.0034: "If the software service platform…find that the OSS components in the software code meet the security policy. The software service platform can proceed to compile the software code.").

Boulton appears to be silent on the following limitations; however, Brobov teaches: determine that a first portion of the source code has not been scanned for a past threshold duration (see Brobov par.0025: "The scanning tool manager 112 may also store data identifying the portion of the software under test 108 that the selected scanning tool scanned.", par.0037: "The scanning tool manager 112 may coordinate scanning of different portions that the scanning tools 114 previously scanned.", [0031]: "…the same scanning tool analyzing the same portion of the software under test 108 at different points in time. For example, a scanning tool may analyze a portion of the software under test 108 on Monday. The scanning tool may identify three instances of the same vulnerability in the portion of the software under test 108. The same scanning tool may analyze the same portion of the software under test 108 on Wednesday.", further in [0051]). The examiner interprets that the scanning tool is able to scan a portion of the software under test and determine, based on the previously scanned timestamp (threshold duration, e.g., Monday to Wednesday), portions that have not been scanned before;

in response to determining that the first portion of the source code has not been scanned for the past threshold duration, obtain the first portion of the source code from the repository (see Brobov par.0025: "The scanning tool manager 112 may store a timestamp indicating the date and time of the scanning. The scanning tool manager 112 may also store data identifying the portion of the software under test 108 that the selected scanning tool scanned.", par.0037: "In the case of the scanning tools 114 scanning an unscanned portion of the software under test 108, the scanning tool output analyzer 122 may analyze the outputs of the scanning tools 114", [0031]: "…the same scanning tool analyzing the same portion of the software under test 108 at different points in time. For example, a scanning tool may analyze a portion of the software under test 108 on Monday. The scanning tool may identify three instances of the same vulnerability in the portion of the software under test 108. The same scanning tool may analyze the same portion of the software under test 108 on Wednesday.", further in [0051]). The examiner interprets that the scanning tools 114 can scan the software under test and obtain a portion that has not been scanned, based on the timestamp and the time between scans.

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have combined Boulton's teaching "The software security analyzer 122 represents an application, a set of applications, software, software modules, hardware, or any combination thereof that can be configured to analyze software code for security risks. In some implementations, the software security analyzer 122 can identify one or more OSS components in the software code. For each OSS component, the software security analyzer 122 can determine a secure score and determine whether the OSS component meets a security policy by comparing the secure score with a configured threshold. If at least one OSS component does not meet the security policy, the software service platform can prevent the software code from being compiled." (see Boulton par.0012) with Brobov's teaching "The system may track the scanning tools' detections of issues and the remediations performed by users.
The system may create and/or update stateful synthetic issues that can include correlating refined issue results across different scanning tools." (see Brobov par.0010).

Boulton in view of Brobov does not explicitly teach the following limitations; however, Sun teaches: wherein determining the temporal gap factor comprises:

determine a numerical value that indicates a version gap between the determined software version of the open-source software component and the latest software version of the open-source software component (see Sun Col.5 lines 14-24: "The age score is based on the age of the version. The rating module 230 requests, using the library identifier and the version identifier, a release date for the version of the library. In some example embodiments, the request includes the library identifier, and the response includes release dates for multiple (e.g., all) versions of the library. Based on the release date of the version, the rating module 230 generates the age score and stores the age score in the version library rating table 340. In some example embodiments, the equation below is used, and the age score is limited to the range [0, 100],");

assign a first confidence score to the determined numerical value, wherein the first confidence score indicates a number of version changes between the determined software version of the open-source software component and the latest software version of the open-source software component (see Sun Col.5 lines 28-42: "A library release version's vulnerability score reflects the security situation of that release in terms of security issues affecting that version. The higher a library release version's vulnerability score is, the safer that release is to use. As in generating the history score, the rating module 230 requests a list of all CVEs affecting a library. For each CVE, the CVSS score and a list of versions affected by the CVE are accessed. Based on the CVEs affecting each version and the CVSS scores for those CVEs, a vulnerability score is generated for each release version. In some example embodiments, the equations below are used, with the vulnerability score being limited to the range [0, 100], and the sum is taken over the CVEs that affect the library release version.");

determine a time duration between a release date of the determined software version of the open-source software component and the latest software version of the open-source software component (see Sun Col.4 lines 19-39: "The history score is a score generated based on known vulnerabilities for the library and how long those vulnerabilities remain unsolved. In some example embodiments, the rating module 230 requests from the library server 130 a list of common vulnerabilities and exposures (CVE) identifiers for the library (e.g., by providing a unique identifier of the library to the library server 130). The library server 130 responds with the requested list, which is received by the authorization server 110. For each CVE identifier, the rating module 230 accesses a publication time of the CVE, a common vulnerability scoring system (CVSS) score of the CVE, and a release time of the first version of the library affected by the CVE. Based on some or all of the accessed data, the rating module 230 generates a CVE score for each CVE and totals the CVE scores for all of the CVEs identified in the list. The history score is generated based on the totaled CVE scores. In some example embodiments, the equations below are used, with the history score being limited to the range [0, 100] and the sum being taken over the CVEs that affect the library.");

assign a second confidence score to the determined time span, wherein the second confidence score indicates a span of the determined time duration (see Sun Col.4 lines 29-39: "a common vulnerability scoring system (CVSS) score of the CVE, and a release time of the first version of the library affected by the CVE. Based on some or all of the accessed data, the rating module 230 generates a CVE score for each CVE and totals the CVE scores for all of the CVEs identified in the list. The history score is generated based on the totaled CVE scores. In some example embodiments, the equations below are used, with the history score being limited to the range [0, 100] and the sum being taken over the CVEs that affect the library."); and

assign, based at least in part upon the first confidence score and the second confidence score, a temporal gap score to the temporal gap factor, wherein the temporal gap score indicates whether a threat of implementing the open-source software component into the source code is more than a first threshold score (see Sun Col.7 lines 13-31: "the rating module 230 generates a rating for the requested software library based on the first score and the second score. For example, the equations discussed with regards to FIGS. 3-4 may be used to determine a rating for the requested software library based on the history score and popularity scores of the row 330A and the age score, vulnerability score, and dependency score of the row 360C. Based on the rating, the authorization module 240 approves the request and sends, via the communication module 210, the approval of the request. As an example, the authorization module 240 retrieves the rating for the requested version of the first open source library from the version library rating table 340 via the storage module 250. The rating is compared to a predetermined threshold (e.g., a predetermined rating of 75) and, based on a result of the comparison (e.g., the rating for the requested version of the first open source library being equal to or greater than the predetermined threshold).").

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have combined Boulton in view of Brobov's teaching described above with Sun's teaching "systems are directed to generating and using open source library ratings. An open source library rating is generated for an open source library based on dependencies of the library, vulnerabilities of the library, an age of the library, a popularity of the library, a history of the library, or any suitable combination thereof. The rating of a specific version of a library may be generated based on a base score for all versions of the library and a version score for the specific version of the library." (see Sun Col.1 lines 54-62).

Boulton in view of Brobov and Sun do not explicitly teach the following limitation; however, Chiarelli teaches: determine, as the temporal gap factor, one of (i) the determined numerical value or (ii) the determined time duration, that is assigned with a higher confidence score (see Chiarelli par.0035: "The CVSS scores are generally available in two versions. Version 2 (CVSS V2), referred to herein as the pre-revision version, includes a base metric group (comprising an access vector, an access complexity, an authentication, a confidentiality impact, an integrity impact, and an availability impact), a temporal metric group (comprising an exploitability, a remediation level, and a report confidence), and an environmental metric group (comprising a collateral damage potential, a target distribution, a confidentiality requirement, an integrity requirement, and an availability requirement).", par.0037: "The severity of the cyberattack may determine a multiplying factor for the base score to determine the temporal score. In another embodiment, if the vulnerability has been utilized in a cyberattack, then the base score may be decreased. For example, if the vulnerability is deployed and has not been exploited, then a lower risk may be associated with the vulnerability. Accordingly, the base score may be decreased to determine the temporal score. For example, the temporal score may be determined as: TemporalScore = roundTo1Decimal(BaseScore * Exploitability * RemediationLevel * ReportConfidence)", par.0043: "Machine learning engine 114 may learn to convert the temporal score based on the pre-revision version of the scoring system to the updated temporal score based on a post-revision version of the scoring system. For example, for many vulnerabilities, both the temporal score based on the pre-revision version of the scoring system and the updated temporal score based on a post-revision version of the scoring system are available. Accordingly, such data may be utilized as labeled training data for the machine learning model.").

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have combined Boulton in view of Brobov and Sun's teaching described above with Chiarelli's teaching "mitigation factor determination engine 116 may determine if developers and/or users may undo the mitigation. Also, for example, mitigation factor determination engine 116 may determine if there is a workaround to the mitigation being rolled out, if the mitigation is in place, and/or a level of confidence that the mitigation will remain in place. Accordingly, if the applying of the mitigation is reversible, then this may indicate a low level of mitigation, and the enforcement measure may be associated with a lower numeric score. Also, for example, if the applying of the mitigation is not reversible, then this may indicate a higher level of mitigation. Accordingly, the enforcement measure may be associated with a higher numeric score." (see Chiarelli par.0059).

Regarding claim 8

Claim 8 is a method claim that recites similar limitations as claim 1 and is rejected based on the same rationale as claim 1.

Regarding claim 15

Claim 15 is a computer-readable-medium claim that recites similar limitations as claim 1 and is rejected based on the same rationale as claim 1.

Regarding claim 5

Boulton in view of Brobov, Sun, and Chiarelli discloses the system of claim 1. Boulton further discloses wherein implementing the identified most recent version of the open-source software component in the source code is in response to receiving an indication that the identified most recent version of the open-source software component is approved (see Boulton par.0036: "The replacement OSS components can have better security scores than the corresponding OSS component.
In some implementations, if the software service platform determines that an OSS component fails to meet a security policy, the software service platform can query the OSS security database to find one or more replacement OSS components, and include the information of replacement OSS components in the notification to the software development device.", par.0037: "The software service platform can also select the replacement OSS components based on the assessed security score, e.g., by picking the OSS components have the best score as a replacement OSS component."). The examiner construes that the software service platform selects (approves) the replacement OSS component that has the best score (the most recent version).

Regarding claim 12

Claim 12 is a method claim that recites similar limitations as claim 5 and is rejected based on the same rationale as claim 5.

Regarding claim 19

Claim 19 is a computer-readable-medium claim that recites similar limitations as claim 5 and is rejected based on the same rationale as claim 5.

Regarding claim 6

Boulton in view of Brobov, Sun, and Chiarelli discloses the system of claim 1. Boulton further discloses wherein scanning the first portion of the source code further comprises detecting at least one clue associated with the open-source software component (see Boulton par.0024: "the software service platform can maintain a list of keywords (clue) that correspond to different OSS projects. The software service platform can identify an OSS component in the software code by matching the text strings the software code with these keywords."), wherein the at least one clue comprises a file name, a folder name, or a comment within the first portion of the source code (see Boulton par.0024: "Example of the keywords can include words or a string of characters indicating one or more following characters of an OSS projects: network addresses, files paths, file names").
Regarding claim 13

Claim 13 is a method claim that recites similar limitations as claim 6 and is rejected based on the same rationale as claim 6.

Regarding claim 20

Claim 20 is a computer-readable-medium claim that recites similar limitations as claim 6 and is rejected based on the same rationale as claim 6.

Regarding claim 7

Boulton in view of Brobov, Sun, and Chiarelli discloses the system of claim 1. Brobov further discloses wherein determining that the first portion of the source code has not been scanned for the past threshold duration is in response to (see Brobov par.0025: "The scanning tool manager 112 may store a timestamp indicating the date and time of the scanning. The scanning tool manager 112 may also store data identifying the portion of the software under test 108 that the selected scanning tool scanned.", par.0037: "The scanning tool manager 112 may coordinate scanning of different portions that the scanning tools 114 previously scanned."; the examiner interprets that the scanning tool is able to scan a portion of the software under test and determine, based on the previously scanned timestamp (threshold duration), portions that have not been scanned before):

accessing a set of historical scans associated with the open-source software components (see Brobov par.0017: "The scanning tools 114 may identify open source components in the software and generate an inventory of the open source components…The scanning tools 114 may also identify whether the open source components have vulnerabilities and whether the software calls those portions of the open source components that have those vulnerabilities.", par.0033: "The scanning tool output analyzer 122 may store data identifying the synthetic issues and any corresponding remediation status in the synthetic issue table 120. The synthetic issue table 120 may store data.", par.0037: "scanning tools 114 scanning a previously scanned portion of the software under test, the scanning tool output analyzer 122 may update previously identified synthetic issues of the synthetic issue table 120."). The examiner interprets that the scanning tools 114 are able to access the synthetic issue table (historical scans) related to open-source issues;

wherein each of the set of historical scans is associated with a timestamp when a respective open-source software component was scanned (see Brobov par.0033: "The scanning tool output analyzer 122 may store data identifying the synthetic issues and any corresponding remediation status in the synthetic issue table 120. The synthetic issue table 120 may store data related to each synthetic issue. Some of this data may include an identification of the corresponding portion of the software under test 108, a timestamp of when a scanning tool identified the issue, a remediation timestamp for the issue output by a scanning tool,"); and

determining, based at least on the set of historical scans, that the first portion of the source code corresponds to a first open-source software component that has not been scanned for the past threshold duration (see Brobov par.0031: "scanning tool analyzing the same portion of the software under test 108 at different points in time. For example, a scanning tool may analyze a portion of the software under test 108 on Monday. The scanning tool may identify three instances of the same vulnerability in the portion of the software under test 108. The same scanning tool may analyze the same portion of the software under test 108 on Wednesday.", par.0037: "The scanning tool manager 112 may coordinate scanning of the same portions that the scanning tools 114 previously scanned. The scanning tool manager 112 may coordinate scanning of different portions that the scanning tools 114 previously scanned. In the case of the scanning tools 114 scanning an unscanned portion of the software under test 108, the scanning tool output analyzer 122 may analyze the outputs of the scanning tools 114 in a manner similar to that described above."). The examiner interprets that, based on the historical scans (e.g., Monday and Wednesday), the scanning tool is able to scan an unscanned portion that has not been scanned before.

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have combined Boulton in view of Brobov, Sun, and Chiarelli's teaching of claim 1 with Brobov's teaching "The system may track the scanning tools' detections of issues and the remediations performed by users. The system may create and/or update stateful synthetic issues that can include correlating refined issue results across different scanning tools." (see Brobov par.0010).

Regarding claim 14

Claim 14 is a method claim that recites similar limitations as claim 7 and is rejected based on the same rationale as claim 7.

Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Boulton et al. (US-20200167476-A1, hereafter Boulton) in view of Brobov et al. (US-20230021226-A1, hereafter Brobov), in view of Sun et al. (US-20230021226-A1, hereafter Sun), in view of Chiarelli et al. (US-20210312058-A1, hereafter Chiarelli), in further view of Cuka et al. (US-20210192065-A1, hereafter Cuka).
Regarding claim 2: Boulton in view of Brobov, Sun, and Chiarelli discloses the system of claim 1. Boulton in view of Brobov, Sun, and Chiarelli does not explicitly teach, however Cuka teaches, wherein the processor is further configured to: determine, based at least in part upon the identity of the open-source software component, a permission factor associated with the open-source software component (see Cuka par. 0057: “the process flow includes receiving, via the distributed network, information associated with one or more applications stored on each of the one or more hardware devices. In some embodiments, the system may be configured to receive information associated with the one or more applications using an open source code discovery engine on the one or more hardware devices. The open source code discovery engine may be configured to initiate a source code scan on each application stored on the hardware devices to identify the underlying sets of instructions, declarations, functions, loops, and other statements, which act as instructions for the application on how to operate. By scanning the source code, the system may be configured to receive information such as bill of materials, origins of the source code, licenses in effect, indications of any licensing conflicts, file inventory, identified files, dependencies, code matches, files pending identification, source code matches pending identification, and/or the like.”). The Examiner construes “licenses in effect” and “licensing conflicts” as the permission factors associated with the open-source software component; determine, based at least in part upon the determined permission factor, a restriction for distributing the open-source software component to a third-party software application (see Cuka par. 0069: “when determining whether at least the portion of the one or more applications stored on the first database is eligible for being accessed by the one or more hardware devices, the system may be configured to identify whether a subset of FOSS code that meets or satisfies the one or more open source code rules. For example, if the information associated with the FOSS code indicates an existence of an incoming or outgoing license that matches the license information specified in the open source code rules,”). The Examiner interprets the restriction as the open source code rules regarding the license (permission factor) for the hardware device (third-party software application); and in response to determining the restriction, assign a permission score to the permission factor (see Cuka par. 0068: “a level of interaction between FOSS and a proprietary software on a static and dynamic link level, approval information associated with the applications, approval information associated with applications that are similar or related to the applications stored on the hardware devices, analyzing the application to determine whether any exposure issues with the underlying source code has been documented and understood, determining whether the FOSS code is designed for a specific use case, and if so, whether the incoming and/or outgoing licensing terms reflect this purpose, one or more source code metrics such as maintainability index indicating a maintainability of the code, cyclomatic complexity indicating a structural complexity of the code, depth of inheritance indicating a number of different classes that inherit from one another, class coupling measuring the coupling to unique classes through parameters,”). The Examiner interprets the maintainability index code metric as the score assigned to the permission factor, wherein the permission score indicates that a threat of implementing the open-source software component in the source code is more than a second threshold score (see Cuka par. 0069: “if the maintainability index of the FOSS code is equal to or below the predetermined threshold maintainability index of the open source code rules, the conditions specified by the open source code rules are considered to be met or satisfied.”). The Examiner interprets that when the maintainability index (permission score) is more than the predetermined threshold, the conditions of the open source code rules are considered not satisfied (i.e., more than a threshold).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have combined the Boulton in view of Brobov, Sun, and Chiarelli teaching of claim 1 with Cuka's teaching that “The goal with the FOSS governance process is to ensure that any software (proprietary, third party, or FOSS) that is being used within the technology environment has been audited, reviewed and approved and that the entity has a plan to fulfill the license obligations resulting from using the various software components integrated in the environment. This type of governance and compliance due diligence is often tracked and executed by scanning the source code of each application individually within the technology environment, determining whether each scanned application complies with a set of open source code rules, and approving/disapproving the use of the application based on determining whether the scanned application complies with the set of open source code rules.” (see Cuka par. 0055).

Claim 9 is a method claim that recites similar limitations as claim 2 and is rejected on the same rationale as claim 2.
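For illustration, the license-based scoring that the rejection maps onto claim 2 can be sketched roughly as follows. This is a hypothetical Python sketch only: the approved-license rule set, the score scaling, and the second threshold value are assumptions for illustration, not anything taken from Cuka, Boulton, or the claims.

```python
# Hypothetical sketch of the claim 2 permission-factor logic: derive a
# "permission factor" from license facts (cf. Cuka par. 0057), map it to a
# distribution restriction (cf. par. 0069), and assign a permission score
# tested against a threshold (cf. par. 0068-0069). All names and numbers
# are illustrative assumptions.

APPROVED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}  # assumed rule set

def permission_factor(component):
    """Collect license facts for an open-source component."""
    return {
        "license": component["license"],
        "conflict": component["license"] not in APPROVED_LICENSES,
    }

def distribution_restriction(factor):
    """Map the permission factor to a third-party distribution restriction."""
    return "restricted" if factor["conflict"] else "permitted"

def permission_score(factor, maintainability_index):
    """Assign a score; a license conflict and a low maintainability index
    both raise the threat score (assumed scaling)."""
    score = 0.5 if factor["conflict"] else 0.0
    score += max(0.0, (50 - maintainability_index) / 100)
    return score

component = {"name": "libexample", "license": "GPL-3.0"}  # hypothetical input
factor = permission_factor(component)
restriction = distribution_restriction(factor)
score = permission_score(factor, maintainability_index=30)
SECOND_THRESHOLD = 0.4  # hypothetical "second threshold score"
print(restriction, score > SECOND_THRESHOLD)  # → restricted True
```

The design point the rejection turns on is only the shape of this logic: a categorical license fact becomes a numeric score that is compared against a configured threshold, which is what the Examiner reads onto Cuka's maintainability-index test.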
Claim 16 is a computer-readable medium claim that recites similar limitations as claim 2 and is rejected on the same rationale as claim 2.

Claims 3, 4, 10, 11, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Boulton et al. (US-20200167476-A1, hereafter Boulton) in view of Brobov et al. (US-20230021226-A1, hereafter Brobov), in view of Sun et al. (US-20230021226-A1, hereafter Sun), in view of Chiarelli et al. (US-20210312058-A1, hereafter Chiarelli), in view of Cuka et al. (US-20210192065-A1, hereafter Cuka), in further view of Nagaraja et al. (US-20220222353-A1, hereafter Nagaraja).

Regarding claim 3: Boulton in view of Brobov, Sun, Chiarelli, and Cuka discloses the system of claim 2. Boulton further discloses wherein the processor is further configured to: access a vulnerability database that comprises the list of open-source software components (see Boulton par. 0036: “the software service platform can maintain an OSS security databased. The OSS security database can include a list of OSS projects, components of each OSS projects, or any combinations thereof. The OSS security database can also include information, e.g., duration information, CVE scores, environmental information, or other information that can be used to calculate the security score.”), wherein each of the open-source software components is associated with a respective set of security vulnerabilities (see Boulton par. 0026: “For some OSS components, the OSS security database can also include information of replacement OSS … The CVE system provides a reference-method for publicly known information-security vulnerabilities and exposures for different products. Accordingly, software developed by the OSS project can have a CVE score that indicates the level of vulnerability of the software.”); compare the open-source software component with each of the open-source software components in the vulnerability database (see Boulton par. 0026: “the security assessment can be determined based on several factors. One example factor can be a Common Vulnerabilities and Exposures (CVE) score. The CVE system provides a reference-method for publicly known information-security vulnerabilities and exposures for different products. Accordingly, software developed by the OSS project can have a CVE score that indicates the level of vulnerability of the software. In some cases, the CVE score for the particular OSS component can be used.”). The Examiner construes that the open-source software component is compared with the Common Vulnerabilities and Exposures; determine that the open-source software component is associated with a first set of security vulnerabilities (see Boulton par. 0026: “the security assessment can be determined based on several factors. One example factor can be a Common Vulnerabilities and Exposures (CVE) score. The CVE system provides a reference-method for publicly known information-security vulnerabilities and exposures for different products.
Accordingly, software developed by the OSS project can have a CVE score that indicates the level of vulnerability of the software.”; par. 0034: “if the software service platform determines that at least one OSS component included in the software code does not meet the security policy, the software service platform can prevent the software code from being complied.”). The Examiner construes that the open-source component is associated with a first set of vulnerabilities (CVE), whose information indicates the level of vulnerability in the software; determine, based at least in part upon the identity of the open-source software component and the first set of security vulnerabilities, a vulnerability factor associated with the open-source software component (see Boulton par. 0026: “One example factor can be a Common Vulnerabilities and Exposures (CVE) score.”), wherein the vulnerability factor indicates the first set of security vulnerabilities (see Boulton par. 0026: “the CVE score for an OSS project can be calculated by obtaining an average of the CVE scores of software developed by the OSS project. The CVE score is published by the CVE system. In some implementations, the software service platform can query a server to obtain the CVE score of an OSS project, an OSS component, or a combination thereof.”); assign, based at least in part upon the determined vulnerability factor, a vulnerability score to the vulnerability factor (see Boulton par. 0026: “the CVE score for an OSS project can be calculated by obtaining an average of the CVE scores of software developed by the OSS project.”); determine a combined score based at least in part upon the temporal gap score, the permission score, and the vulnerability score (see Boulton par. 0032: “the software service platform can determine the security score of the OSS component by combining these or other factors. For example, the security score can be determined by taking an average, a weight average, a minimum, a maximum, or any other statistical measures of mathematical operations of these factors.”; par. 0026-0027: “One example factor can be a Common Vulnerabilities and Exposures (CVE) score… factor can be an update duration”; par. 0031: “another example factor can be the complexity factor.”). The Examiner interprets the Common Vulnerabilities and Exposures score as the vulnerability score, the update duration factor as the temporal gap score, and the complexity factor as the permission score; and determine that the combined score is more than a fourth threshold score (see Boulton par. 0033: “the software service platform can compare the security score of the OSS component with a threshold to determine whether the OSS component meets the security policy. In one example, a higher security score indicates a higher security risk. In this example, if the security score of the OSS component exceeds a configured threshold, the OSS component fails to meet the security policy.”). The Examiner construes the configured threshold as the fourth threshold; if the combined score exceeds it, the component fails to meet the criteria.

Boulton in view of Brobov, Sun, Chiarelli, and Cuka appears to be silent on, however Nagaraja teaches, wherein the vulnerability score indicates that a threat of implementing the open-source software component in the source code is more than a third threshold score (see Nagaraja par. 0050: “The remediation computer can receive information about the versions of the vulnerable library that are available. Scores can be assigned to the various risk factors. the scores may be industry standard scores such as those in the Common Vulnerability Scoring System (CVSS) 3.0.”; par. 0070: “Library version risk can depend on a risk score and a change score. The risk score can quantify the potential risk of a library version. A library version with a high risk score can have many security and/or license risks that may make the candidate application more vulnerable.”; par. 0075: “an intermediate risk score can be assigned for each risk identified. For example, if a high risk is determined, a high number can be added to the risk score… Risk levels may be provided by the license database server and the security risk database server. If risk levels are provided, the remediation computer may use additional criteria to determine additional scores. For example, the remediation computer might determine that some high-level risks only merit adding 7 to the risk score,”). The Examiner construes that the vulnerability score can be assigned to a library version as a risk score. Implementing open-source code with a high risk score makes the candidate application vulnerable. A high risk score is construed as being more than the third threshold score that can be assigned to a library version on the basis of CVSS 3.0.

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have combined the Boulton in view of Brobov, Sun, Chiarelli, and Cuka teaching of claim 2 with Nagaraja's teaching that “The remediation score is then compared to a threshold. If the remediation score for the library version is above the threshold, it may indicate that the library versions all require significant changes that cannot be made automatically by the remediation computer.” (see Nagaraja par. 0102).

Claim 10 is a method claim that recites similar limitations as claim 3 and is rejected on the same rationale as claim 3. Claim 17 is a computer-readable medium claim that recites similar limitations as claim 3 and is rejected on the same rationale as claim 3.

Regarding claim 4: Boulton in view of Brobov, Sun, Chiarelli, Cuka, and Nagaraja discloses the system of claim 3.
Nagaraja further discloses wherein identifying the most recent version of the open-source software component that is associated with less than the threshold number of security vulnerabilities is further in response to determining that the combined score is more than the fourth threshold score (see Nagaraja par. 0070: “Library version risk can depend on a risk score and a change score. The risk score can quantify the potential risk of a library version. A library version with a high risk score can have many security and/or license risks that may make the candidate application more vulnerable. A library version with a low risk score can have few security and/or license risks that may make the candidate application more secure. The change score can quantify operational risks. A library version with a low change score may have fewer operational risks to negatively affect functionality of the application.”; par. 0084: “after evaluating each library version, the list of library versions can be sorted by the generated scores. the list may be sorted by risk score to determine the library version or versions with the lowest risk. The list can also be sorted by change score to determine the library version or versions that present the lowest operational risk.”; par. 0102-0103: “Once all of the intermediate remediation scores have been calculated, they can be combined to create a remediation score. The remediation score is then compared to a threshold…. If the remediation score is below the threshold, the remediation computer may automatically make the code change to the new library version. The remediation computer may test each library version until it is able to automatically update the vulnerable library.”).

It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have combined the Boulton in view of Brobov, Sun, Chiarelli, Cuka, and Nagaraja teaching of claim 3 with Nagaraja's teaching that “The remediation score returned for a particular test may be an intermediate remediation score that can be combined with intermediate remediation scores from other application tests to determine a remediation score. The remediation score may then be used to determine if the proposed library version can be automatically incorporated into the candidate application,” (see Nagaraja par. 0094).

Claim 11 is a method claim that recites similar limitations as claim 4 and is rejected on the same rationale as claim 4. Claim 18 is a computer-readable medium claim that recites similar limitations as claim 4 and is rejected on the same rationale as claim 4.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Younberg et al. (US-20200134195-A1): The disclosed system may derive two category risk scores from OSA tools, including an open source vulnerability score and an open source license score. To assess the total risk of the application code, the disclosed system may determine an overall risk score by computing a weighted average of category risk scores. The reference also describes methods for numerically assessing software risks, such as security risks, of application code of a software project based on software security analysis findings generated by multiple software security analysis tools that perform scans on the application code. These tools may span across different categories, such as SAST, DAST, IAST and Open Source Analysis (OSA) tools. Sass et al.
(US-9436463-B2) receiving a characteristic of a source code entity to be checked; comparing the characteristic of the source code entity to be checked to characteristics stored in a repository; and subject to determining with at least a first probability that the characteristic of the source code entity to be checked is found in the repository, providing an indication of an open source library associated with the characteristic. The method may further comprise determining a characteristic of an open source code entity associated with an open source project; and storing the characteristic and an identifier of the open source code entity in the repository. Within the method, the identifier optionally comprises a name of the open source project. Within the method, the identifier optionally comprises an item selected from the group consisting of: a license associated with the open source project, a vulnerability, a bug, a quality issue, a trend report, a replacement. Olson et al. (US-20230177170-A1) the Software Composition Analysis (SCA) optionally employs a set of tools that provides a user visibility into the source code. The SCA optionally identifies third-party and open source components that have been integrated into the source code. For each of these components, the SCA optionally identifies any open security common vulnerabilities and exposures (CVEs), licenses, and out-of-date library versions. The software component analysis tool optionally links an open source library and libraries with known security vulnerabilities into a common vulnerabilities and exposures (CVEs) database. The software component analysis tool scans the source code to find libraries used by the customers. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUILIO MUNGUIA whose telephone number is (571) 270-5277. The examiner can normally be reached M-F 9:30 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A Shiferaw, can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DUILIO MUNGUIA/Examiner, Art Unit 2497 /ELENI A SHIFERAW/Supervisory Patent Examiner, Art Unit 2497
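For context on the rejections of claims 3, 10, and 17 above, the score-combination scheme the Examiner attributes to Boulton (par. 0026-0033) can be sketched as follows: individual factor scores (the temporal gap, permission, and vulnerability scores) are combined, for example by a weighted average, and the result is compared to a threshold. All weights, input values, and the threshold in this Python sketch are hypothetical illustrations, not values from the record.

```python
# Hypothetical sketch of the combined-score test mapped onto claim 3:
# a weighted average is one of the statistical combinations Boulton
# par. 0032 mentions; the weights and threshold here are assumptions.

def combined_score(temporal_gap, permission, vulnerability,
                   weights=(0.2, 0.3, 0.5)):
    """Weighted average of the three factor scores."""
    factors = (temporal_gap, permission, vulnerability)
    return sum(w * f for w, f in zip(weights, factors)) / sum(weights)

FOURTH_THRESHOLD = 5.0  # hypothetical "fourth threshold score"

score = combined_score(temporal_gap=4.0, permission=6.0, vulnerability=8.0)
fails_policy = score > FOURTH_THRESHOLD  # per par. 0033, higher = riskier
```

With the assumed inputs the score works out to 6.6, which exceeds the threshold, so the component would fail the security policy; sorting candidate library versions by such a score is the selection step the rejection reads onto the Nagaraja passages.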

Prosecution Timeline

Mar 29, 2024
Application Filed
Aug 22, 2025
Non-Final Rejection — §103
Sep 03, 2025
Interview Requested
Sep 10, 2025
Applicant Interview (Telephonic)
Sep 10, 2025
Examiner Interview Summary
Nov 17, 2025
Response Filed
Feb 04, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12470541
IMAGE FORMING APPARATUS, DISPLAY METHOD, AND RECORDING MEDIUM FOR DISPLAYING AUTHENTICATION METHOD USING EXTERNAL SERVER OR UNIQUE TO IMAGE FORMING APPARATUS
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

3-4
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
