Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
Claim Objections
Claim 29 is objected to because of the following informalities: the claim recites “training given by imonitoring”; it should read “training given by monitoring”. Appropriate correction is required.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are: “application security testing module”, “vulnerability management module”, “ticketing management module”, and “training module” in claims 17-21 and 28-32.
Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that perform the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.
Based on applicant’s specification, the “application security testing module”, “vulnerability management module”, “ticketing management module”, and “training module” are being interpreted as general-purpose computing hardware and/or software implementing the claimed functions. Applicant is welcome to rebut this analysis with a citation to the specification outlining the structure of the above-recited modules.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-32 are rejected under 35 U.S.C. 101 because the claimed invention, under the broadest reasonable interpretation, is directed to an abstract idea without significantly more.
Step 1: Statutory category
Independent claim 1 is drawn to a method. Accordingly, the claim falls within one of the four categories of statutory subject matter (processes, machines, manufactures, and compositions of matter).
Step 2A: Prong 1: Judicial Exception
Under broadest reasonable interpretation (BRI), Claim 1 is directed to an abstract idea.
Claim 1 recites:
applying an owner identification logic through a plurality of application security testing methodologies; and wherein the plurality of application security testing methodologies comprises Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA) tool (Mental process, human reviewing outcomes of multiple analyses or reports to determine who may be responsible for a software issue)
making an Application Programming Interface (API) call through a vulnerability management platform, to a code repository, where a codebase is stored, to identify the changes made by a developer to the code recently (Mental process/Organizing human activity, human requesting available records or log history from a storage location to review who made recent changes)
assigning a ticket to the identified developer and saving a developer information in the vulnerability management platform (Organizing human activity/Mental process, human assigning a task to a responsible team member and recording their information in a tracking log)
viewing the status of the ticket through a plurality of views; and wherein the plurality of views comprises an insight view, a CTO view, an engineering manager view, and a developer view (Organizing human activity, providing reporting dashboards tailored to different stakeholders for monitoring progress and decision making)
providing training requirements and assessing the effectiveness of the training provided (Organizing human activity/Mental process, human determining whether a performance or knowledge gap exists, assigning training, and evaluating whether improvement occurs)
The above limitations collectively recite managing people, reviewing reports, making decisions, assigning responsibility, monitoring progress, and training personnel, all of which fall within the identified categories of abstract ideas, namely mental processes and certain methods of organizing human activity. See MPEP 2106.04(a).
Step 2A: Prong 2, the additional elements fail to integrate the abstract idea into a practical application. The claim is not directed to, or limited to, a technical solution solving a technical problem, and the additional elements fail to provide an improvement to a technology or the functioning of a computer. See MPEP 2106.04(d)(1). Instead, the additional elements merely recite generic concepts such as “API call,” “platform,” “repository,” and “views” as field-of-use limitations and instructions to implement the abstract idea. These recitations simply amount to using standard tools to carry out organizational and analytical decision-making. Thus, the examiner finds the claim does not impose meaningful limits and does not integrate the judicial exception into a practical application. See MPEP 2106.05(a), (e), and (f). As such, the examiner must conclude the invention is not integrated into a practical application.
Step 2B: Significantly more
With respect to Step 2B, the claim does not recite significantly more than the abstract idea itself. The claim does not provide an improvement to technology or a specific advancement in computer functionality. The recited elements merely describe conventional and routine information analysis, responsibility assignment, monitoring, and training evaluation concepts long performed in organizational environments.
Accordingly, the examiner finds that the claim elements, individually and as an ordered combination, amount to no more than instructions to apply the abstract idea. See MPEP 2106.05(g). The claim is not patent eligible.
Regarding Claims 2-16, these claims depend from claim 1 and therefore recite the same abstract idea and additional elements set forth above for claim 1. These claims introduce new limitations; however, those limitations also cover steps that can be practically performed in the human mind. Claims 2-16 thus recite the abstract idea but fail to integrate it into a practical application. These claims are not patent eligible.
Regarding Claim 17, the claim is directed to “A computer-implemented system” and hence falls within one of the four statutory categories of patent-eligible subject matter at Step 1. The claim recites subject matter similar to that of claim 1 and hence recites a similar abstract idea and similar additional-element limitations. Accordingly, for the same rationale set forth for claim 1, claim 17 recites the abstract idea but fails to provide a practical application. Claim 17 is not patent eligible under 35 U.S.C. 101.
Regarding Claims 18-32, these claims depend from claim 17 and therefore recite the same abstract idea and additional elements set forth above for claim 17. These claims introduce new limitations; however, those limitations also cover steps that can be practically performed in the human mind. Claims 18-32 thus recite the abstract idea but fail to integrate it into a practical application. These claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 7-13, 17-19, 23-29 are rejected under 35 U.S.C. 103 as being unpatentable over Bhalla (US 11379219 B2) in view of Dixon (US 20090070734 A1).
Regarding Claim 1, Bhalla teaches:
applying an owner identification logic through a plurality of application security testing methodologies; and wherein the plurality of application security testing methodologies comprises Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Software Composition Analysis (SCA) tool (Bhalla, Col 16, line 52 - Col 17, line 7, discloses extracting developers’ identities from an SCM repository that tracks changes including "User- the user who made the change."; Bhalla, Col 17, lines 37-43, discloses applying multiple application security testing methodologies including SAST and DAST via a code scanner; and at Col 15, lines 39-65, Bhalla further teaches software composition/dependency analysis by extracting dependency identities, enumerating transitive ("implied") dependencies, and identifying vulnerabilities and security risks in linked libraries via a dependency management system (e.g., PyPI, Maven, Sonatype Nexus), which corresponds to an SCA tool under the broadest reasonable interpretation.);
making an Application Programming Interface (API) call through a vulnerability management platform, to a code repository, where a codebase is stored, to identify the changes made by a developer to the code recently (Bhalla, Col 16, line 52 - Col 17, line 7, discloses extracting developers’ identities from an SCM repository that tracks changes through a changelog including "User- the user who made the change."; Bhalla, Col 26, lines 18-25, discloses providing programmatic access via an application programming interface (API). Under the broadest reasonable interpretation, extracting SCM changelog data is inherently a programmatic operation, and accessing SCM repositories such as GitHub or Bitbucket (Col 16, lines 60-65) is performed via an API.);
wherein the task report comprises a ticket (Bhalla, Col 10, lines 15-23, discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.);
Bhalla does not explicitly teach the following limitations; however, Dixon teaches:
assigning a task report to the identified developer and saving a developer information in the vulnerability management platform (Dixon, para 56, discloses system to identify which developer is responsible for each modification to the source code and generate compliance violation data; Fig. 4, step 307, discloses the compliance violation data is stored in a database);
viewing the status of the task report through a plurality of views; and wherein the plurality of views comprises an insight view, a CTO view, an engineering manager view, and a developer view (Dixon, para 25, 35, discloses the system provides integrated reporting that allows management to view various quality metrics, including, for example, quality of the project as a whole, quality of each team and groups of developers, and quality of individual developer's work via a Dashboard display.);
providing training requirements and assessing the effectiveness of the training provided (Dixon, para 25, 33, 74, discloses the system provides integrated reporting that allows management to view quality of individual developer's work via a Dashboard display and provide a turnkey solution to quality control issues, including discovery, recommendation, installation, implementation, and training and measure improvement over time.);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s per-developer attribution, storage, dashboard, and training/performance monitoring capabilities because both references address the same recognized problem of managing large-scale software quality and accountability using SCM-based code history and automated analysis tools. One would be motivated to make such a modification to integrate Dixon’s developer-responsibility reporting and management-oversight features into Bhalla’s system to enhance accountability, remediation tracking, organizational visibility, and training effectiveness.
Regarding Claim 2, Bhalla/Dixon teaches the method of claim 1:
wherein the owner identification logic involves utilizing data from the plurality of application security testing methodologies in conjunction with the information obtained from the code repository to identify the specific code, where a security issue is found or vulnerability dependency is defined (Bhalla, Col 17, lines 37-58 discloses that the code scanner 222 can be, for example, a dynamic application security testing scanner (DAST), a static application security testing scanner (SAST), Application Vulnerability Correlation tool, aggregating scanners, Interactive Application Security Testing (IAST) scanners, Runtime Application Security Protection (RASP) scanners, or combination thereof can be used to scan code, extract software context from the source code 206 and SCM repository 216, capture code vulnerabilities, or detect code deficiencies which can confer risk in a software asset or software operating environment.);
and wherein the SAST application security testing methodology is configured to identify vulnerability dependency on the specific code (Bhalla, Col 17, lines 37-58 discloses that the code scanner 222, a static application security testing scanner (SAST) can be used to scan code, extract software context from the source code 206 and SCM repository 216, capture code vulnerabilities, or detect code deficiencies);
and wherein the DAST application security testing methodology is configured to identify vulnerability dependency on the codebase and file or specific code, and using the specific code and the API call to identify the developer who committed the code using the line of code (Bhalla, Col 17, lines 37-58, discloses that the code scanner 222, a dynamic application security testing scanner (DAST), can be used to scan code, extract software context from the source code 206 and SCM repository 216, capture code vulnerabilities, or detect code deficiencies which can confer risk in a software asset (source code, database, software application, OS, server, and so on (Col 5, lines 8-13)); Bhalla, Col 16, line 52 - Col 17, line 7, discloses extracting developers’ identities from an SCM repository that tracks changes through a changelog including "User- the user who made the change."; Bhalla, Col 26, lines 18-25, discloses providing programmatic access via an application programming interface (API). Under the broadest reasonable interpretation, extracting SCM changelog data is inherently a programmatic operation, and accessing SCM repositories such as GitHub or Bitbucket (Col 16, lines 60-65) is performed via an API.);
and wherein the SCA tool is configured to identify the vulnerability dependency and license issues in an open source and software libraries, or components included in the codebase (Bhalla, Col 15, lines 39-65, teaches software composition/dependency analysis by extracting dependency identities, enumerating transitive ("implied") dependencies, and identifying vulnerabilities and security risks in linked libraries via a dependency management system (e.g., PyPI, Maven, Sonatype Nexus), which corresponds to an SCA tool under the broadest reasonable interpretation; Col 5, lines 14-41, Bhalla's tasks for legal and licensing issues are associated with software assets and their dependencies.);
Bhalla does not explicitly teach the following limitation; however, Dixon teaches:
wherein the specific code is an exact line number of a code (Dixon, para 45, discloses that static code analyzer products typically generate detailed data for each compliance violation, including the date and time of the violation, the type of violation, and the location of the source code containing the violation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s per-developer attribution and precise code-location identification capabilities because both references address the same recognized problem of managing large-scale software quality and accountability using SCM-based code history and automated analysis tools. One would be motivated to make such a modification to integrate Dixon’s precise code-location identification with Bhalla’s SAST/DAST/SCA-based system to improve traceability, developer accountability, remediation accuracy, organizational visibility, and training effectiveness.
Regarding Claim 3, Bhalla/Dixon teaches the method of claim 1:
wherein the Application Programming Interface (API) call made through the vulnerability management platform retrieves the information about the recent changes made to the codebase of the code repository, and also identifies which developer has made changes to the portion of the codebase where the security issue is found or the vulnerability dependency is defined (Bhalla, Col 16, line 52 - Col 17, line 7, discloses extracting developers’ identities from an SCM repository that tracks changes, and may introduce any vulnerabilities, through a changelog including "User- the user who made the change."; Bhalla, Col 26, lines 18-25, discloses providing programmatic access via an application programming interface (API). Under the broadest reasonable interpretation, extracting SCM changelog data is inherently a programmatic operation, and accessing SCM repositories such as GitHub or Bitbucket (Col 16, lines 60-65) is performed via an API.).
Regarding Claim 7, Bhalla/Dixon teaches the method of claim 1:
wherein the plurality of views are interfaces or different perspectives within the vulnerability management platform, that cater to specific stakeholders involved in the security issue resolution process, and each of the plurality of views provides relevant information and functionalities tailored to the needs of the respective role (Dixon, para 25, 35 discloses the system provides integrated reporting that allows management to view various quality metrics, including, for example, quality of the project as a whole, quality of each team and groups of developers, and quality of individual developer's work via a Dashboard display; para 72-74 discloses multiple GUI (overview page 400, project trend page 500, developers page 600) oriented to different stakeholders/roles from managers to developers, each presenting different information useful to that role).
Regarding Claim 8, Bhalla/Dixon teaches the method of claim 1:
wherein the insight view is a high-level overview or dashboard within the vulnerability management platform that provides a comprehensive summary of the overall security posture of the codebase (Dixon, Fig. 7, para 72, discloses an overview page 400 in which small graphs 402 show the recent behavior of the key quality metric for the development team as a whole; each of the five tables 404 shows the name of the developer, the value of the relevant metric, the number of days that the alert has been firing, and the value of the metric when the alert first fired);
and wherein the insight view is configured to provide findings by the developers, for creating one task report for multiple findings and assigning to the developer directly (Dixon, para 72 discloses the five tables 404, to the left and bottom of the screen, display alerts for any individual developers who have exceeded a prescribed threshold for a metric);
and wherein the insight view is typically designed for executives, security leaders, or other high-level stakeholders who need a quick and easily digestible snapshot of the security status of the organization's software projects (Dixon, para 70-71 discloses graphical user interface (GUI) that allows a software development management to get a quick overview of the various metrics, KPI metrics 240 generated by the quality monitoring system 230 are provided to a manager, or other end user; Under BRI, this is the high level leadership audience and a summarized dashboard.);
and wherein the insight view metrics, charts, and graphs to represent the number of open security issues, their severity, trends over time, and other key security-related data (Dixon, Fig. 7, para 72 discloses overview page 400 with charts and graphs showing behavior of key quality metrics, compliance, severity, activity and so on.);
and wherein the insight view enables decision-makers to assess the overall security health of the software projects and take appropriate actions or allocate resources as needed (Dixon, para 79 discloses the described systems and techniques help to pinpoint actionable steps that assure project success, providing early identification of performance issues and action items, in order to address the progress and behaviors of individual team members);
Dixon does not explicitly teach the following limitation; however, Bhalla teaches:
Wherein task report comprises ticket (Bhalla, Col 10, lines 15-23 discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s dashboard overview page because both references are directed to managing software quality/security posture and supporting decision making by higher-level stakeholders. One would be motivated to make such a modification to integrate Dixon’s dashboard overview-page visualization and executive-level monitoring features with Bhalla’s system to improve traceability, developer accountability, remediation accuracy, organizational visibility, and training effectiveness, as well as to support faster strategic decision making.
Regarding Claim 9, Bhalla/Dixon teaches the method of claim 1:
wherein the Chief Technology Officer (CTO) view is a specific interface within the vulnerability management platform tailored for the CTO or other high-level technology executives, and the CTO view is configured to provide individual teams’ performance information including the teams that are doing an excellent job of writing secure code and the teams that are not performing well (Dixon, para 39, discloses that system 200 further includes a graphical user interface (GUI) 250 that provides a development manager or other authorized user with access to the per-developer KPIs 240; para 72, Dixon discloses the mechanism to distinguish a developer having high violations from a developer performing very well on a specific metric.);
and wherein the CTO view provides a more detailed and strategic perspective compared to the insight view, focusing on the security status and progress of various projects and teams under the CTO's purview (Dixon, para 73, discloses a project trend page 500 showing a greater level of detail for specific metrics; Fig. 8 shows greater detail of violations, test results, coverage, and lines of code, and a view of the compliance report.);
and wherein the CTO view also offers aggregated metrics for multiple projects, allowing the CTO to evaluate the organization's security initiatives, identify potential areas of concern, and make strategic decisions related to security investments, team training, and project priorities (Dixon, para 73 discloses provides project trends, team performance, and individual developer metrics; para 79, Dixon discloses the described systems and techniques help to pinpoint actionable steps that assure project success, providing early identification of performance issues and action items, in order to address the progress and behaviors of individual team members).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s dashboard project trend page because both references are directed to managing software quality/security posture and supporting decision making by higher-level stakeholders. One would be motivated to make such a modification to integrate Dixon’s dashboard project-trend-page visualization and executive-level monitoring features with Bhalla’s system to improve traceability, developer accountability, remediation accuracy, organizational visibility, and training effectiveness, as well as to support faster strategic decision making.
Regarding Claim 10, Bhalla/Dixon teaches the method of claim 1:
wherein the Engineering Manager view is configured to provide the engineering manager the team’s performance at a developer level (Dixon, para 39, discloses that system 200 further includes a graphical user interface (GUI) 250 that provides a development manager or other authorized user with access to the per-developer KPIs 240; para 74, Dixon discloses that FIG. 9 shows a “developers” page 600 that can be used to help assess the performance of a developer over a span of time.);
and wherein the Engineering Manager view provides a more granular and operational level of information related to security issues and tasks assigned to their teams (Dixon, Fig. 9, discloses a detailed view used to help assess developers’ performance and issues);
and wherein the Engineering manager view is used by the Engineering managers to track the progress of security issue resolutions, monitor the workload and performance of their team members, and ensure that security priorities align with project timelines (Dixon, Fig. 9, para 72, discloses a detailed view used to help assess developers’ performance and issues; para 81-82, Dixon discloses that the systems and techniques also help to ensure team productivity and meet project deadlines; in addition, managers are able to continuously enforce testing and standards compliance throughout the entire development phase.);
Dixon does not explicitly teach the following limitation; however, Bhalla teaches:
wherein the task comprises a ticket (Bhalla, Col 10, lines 15-23, discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s dashboard developers page because both references address oversight of software quality and developer accountability. One would be motivated to make such a modification to integrate Dixon’s dashboard developers-page visualization and monitoring features with Bhalla’s system to allow engineering managers to operationally track security remediation progress, monitor team health, and manage resource allocation and performance in real time.
Regarding Claim 11, Bhalla/Dixon teaches the method of claim 1:
wherein the developer view is a user interface specifically tailored for individual developers who are assigned tickets to address the security issues (Dixon, para 74 discloses a specific UI view developers page 600 dedicated to the individual developer's metric and history);
and wherein the developer view also provides details of the security issues assigned to them, including the exact line number of the code where the security issue is found, along with any supporting information from the plurality of application security testing methodologies (Dixon, para 45, discloses a static code analyzer identifying the specific file and line of a violation in the source code and generating detailed data for each compliance violation, including the date and time of the violation, the type of violation, and the location of the source code containing the violation; para 37, Dixon discloses various tools, such as static analysis, coverage, and unit testing tools, to provide information or context for why a piece of code is flagged (paras 56, 63, 69));
and wherein the developers use the developer view to access the necessary information to understand the security issue, review any relevant testing results, and work on resolving the issue in the codebase (Dixon, Fig. 9, para 50, discloses that developers see the violations and are required to fix the pre-existing violations);
and also to update the task status, communicate with other team members, and provide feedback or additional information related to the security issue (Dixon, para 41, discloses that when the developer fixes code, the system recalculates/updates the status; para 75, Dixon discloses the feedback loop between the system and developers);
and wherein the developer view also provides information on individual training requirements to the developer, coding improvement over time with respect to writing secure code (Dixon, para 25, 33, 74, discloses the system provides integrated reporting that allows management to view quality of individual developer's work via a Dashboard display and provide a turnkey solution to quality control issues, including discovery, recommendation, installation, implementation, and training and measure improvement over time).
Dixon does not explicitly teach the following limitation; however, Bhalla teaches:
wherein the task comprises a ticket (Bhalla, Col 10, lines 15-23, discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s dashboard developers page because both references address oversight of software quality and developer accountability. One would have been motivated to make such a modification to integrate Dixon’s developers page visualization and monitoring features with Bhalla’s system to allow the assigned developer to directly view security issues, evidence, and progress, and to receive individualized performance and training guidance, thereby enhancing usability, accountability, transparency, and continuous quality improvement.
Regarding Claim 12, Bhalla/Dixon teaches the method of claim 1:
wherein the steps of providing training requirements and assessing the effectiveness of the training comprise the steps of: a. mapping vulnerabilities or security issues to a training category; and wherein mapping the security issues to a training category involves mapping each of the security issues to a corresponding training category, to determine the type of training needed to address and prevent similar security issues in the future (Dixon, para 46, discloses categorizing violations (under BRI, proxies for security issues) by types and priorities such as low, medium, and high priority; para 25, Dixon discloses providing a turnkey solution for training, and under broadest reasonable interpretation these categories serve as the basis for a training category);
b. identifying individual training requirements; and wherein the individual training requirements are determined by the CTO for an individual developer; and wherein, for an identified individual developer who modified the code and introduced a particular security vulnerability, the classification system pinpoints the type of training the individual developer needs to write more secure code (Dixon, para 22, discloses that once an issue is discovered on a per-developer basis, the system provides a recommendation for training tailored to the individual's specific needs under broadest reasonable interpretation (para 25); para 56, Dixon discloses that a version control system is used to identify which developer is responsible for each modification to the source code; in step 302, a code analysis tool is used to generate compliance violation data);
c. evaluating training effectiveness (Dixon, para 35 discloses that the trend of number of errors detected shows if there has been any improvement or not);
d. recording the training provided and progress (Dixon, para 73 discloses the large graph 502 in FIG. 8 shows the performance of each developer on the team over time.);
e. aggregating training effectiveness data; and wherein the aggregation of training effectiveness data allows the vulnerability management platform to identify specific training areas or categories where improvements are needed at both an individual and team level (Dixon, para 25 discloses aggregated data at three levels: Individual, Team and Project. This allows the system to see if a training area (like unit testing) needs improvement across the entire team or just for one individual).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s structured per-developer metric tracking, categorization, and trend-analysis capabilities in order to support mapping issues to training categories, training assignment, and effectiveness assessment. One would have been motivated to make such a modification to assign appropriate training, record progress, and confirm whether the training is actually effective, thereby enhancing accountability, governance, and continuous security improvement.
Regarding Claim 13, Bhalla/Dixon teaches the method of claim 1:
wherein the evaluating training effectiveness involves monitoring the developers who received training on specific security topics (Dixon, para 41, discloses that the system is operable to periodically communicate with the version control subsystem for updates, which implies that it monitors subsequent code changes by downloading revised code and then recalculates the KPIs to see if the developer's behavior has changed, under broadest reasonable interpretation);
and assessing the developer on subsequent code changes, if similar vulnerabilities are still introduced or if improvements are observed post-training (Dixon, para 35 uses trend information to distinguish between improvement (the goal of the training) and decline (the failure of training or gaps));
and wherein by assessing the developer post-training and by measuring the type of security issues introduced by the same developer after training (Dixon, para 73, discloses tracking specific metrics over time, which allows the system to see if a developer is still introducing the same type of violation they were previously flagged for (para 69));
and mapping the security issues to the training category the developer received, the vulnerability management platform identifies gaps in knowledge or assesses the effectiveness of the training provided (Dixon, para 46, discloses categorizing violations (under BRI, proxies for security issues) by types and priorities such as low, medium, and high priority; para 25, Dixon discloses providing a turnkey solution for training, and under broadest reasonable interpretation these categories serve as the basis for a training category; Dixon, para 35, uses trend information to distinguish between improvement (the goal of the training) and decline (the failure of training or gaps)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s structured per-developer metric tracking, categorization, and trend-analysis capabilities in order to support mapping issues to training categories, training assignment, and effectiveness assessment. One would have been motivated to make such a modification to assign appropriate training, record progress, and confirm whether the training is actually effective, thereby enhancing accountability, governance, and continuous security improvement.
Regarding Claim 17, Bhalla teaches:
a. an application security testing module configured to apply an owner identification logic through a plurality of application security testing methodologies; and wherein the plurality of application security testing methodologies comprises Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and a Software Composition Analysis (SCA) tool; (Bhalla, Col 16, line 52 - Col 17, line 7, discloses extracting developers' identities from an SCM repository that tracks changes, including "User- the user who made the change."; Bhalla, Col 17, lines 37-43, discloses applying multiple application security testing methodologies including SAST and DAST via a code scanner; and Col 15, lines 39-65, Bhalla further teaches software composition/dependency analysis by extracting dependency identities, enumerating transitive ("implied") dependencies, and identifying vulnerabilities and security risks in linked libraries via a dependency management system (e.g., PyPI, Maven, Sonatype Nexus), which corresponds to an SCA tool under the broadest reasonable interpretation);
b. a vulnerability management module configured to make an Application Programming Interface (API) call through a vulnerability management platform, to a code repository, where a codebase is stored, to identify the changes made by a developer to the code recently; (Bhalla, Col 16, line 52 - Col 17, line 7, discloses extracting developers' identities from an SCM repository that tracks changes through a changelog, including "User- the user who made the change."; Bhalla, Col 26, lines 18-25, discloses providing programmatic access via an application programming interface (API). Under broadest reasonable interpretation, extracting SCM changelog data is inherently a programmatic operation, and accessing SCM repositories such as GitHub or Bitbucket (Col 16, lines 60-65) is performed via API);
wherein the task report comprises a ticket (Bhalla, Col 10, lines 15-23, discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.);
Bhalla does not explicitly teach the following limitation; however, Dixon teaches:
c. a task reporting management module configured to assign a task report to the identified developer and saving a developer information in the vulnerability management platform of the vulnerability management module, (Dixon, para 56, discloses system to identify which developer is responsible for each modification to the source code and generate compliance violation data; Fig. 4, step 307, discloses the compliance violation data is stored in a database);
and to view the status of the task report through a plurality of views; and wherein the plurality of views comprises an insight view, a CTO view, an engineering manager view, and a developer view; and (Dixon, para 25, 35, discloses the system provides integrated reporting that allows management to view various quality metrics, including, for example, quality of the project as a whole, quality of each team and groups of developers, and quality of individual developer's work via a Dashboard display.);
d. a training module configured to provide training requirements and assess the effectiveness of the training provided. (Dixon, para 25, 33, 74, discloses the system provides integrated reporting that allows management to view quality of individual developer's work via a Dashboard display and provide a turnkey solution to quality control issues, including discovery, recommendation, installation, implementation, and training and measure improvement over time.);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s per-developer attribution, storage, dashboard, and training/performance monitoring capabilities because both references address the same recognized problem of managing large-scale software quality and accountability using SCM-based code history and automated analysis tools. One would have been motivated to make such a modification to integrate Dixon’s developer-responsibility reporting and management oversight features to enhance accountability, remediation tracking, organizational visibility, and training effectiveness.
Regarding Claim 18, Bhalla/Dixon teaches the system according to claim 17:
wherein the application security testing module applies the owner identification logic for utilizing data from the plurality of application security testing methodologies in conjunction with the information obtained from the code repository to identify an exact line number of a code, where a security issue is found or vulnerability dependency is defined; (Bhalla, Col 17, lines 37-58, discloses that the code scanner 222 can be, for example, a dynamic application security testing scanner (DAST), a static application security testing scanner (SAST), an Application Vulnerability Correlation tool, aggregating scanners, Interactive Application Security Testing (IAST) scanners, Runtime Application Security Protection (RASP) scanners, or a combination thereof, which can be used to scan code, extract software context from the source code 206 and SCM repository 216, capture code vulnerabilities, or detect code deficiencies which can confer risk in a software asset or software operating environment);
and wherein the SAST application security testing methodology is configured to identify vulnerability dependency on the specific code (Bhalla, Col 17, lines 37-58 discloses that the code scanner 222, a static application security testing scanner (SAST) can be used to scan code, extract software context from the source code 206 and SCM repository 216, capture code vulnerabilities, or detect code deficiencies);
and wherein the DAST application security testing methodology is configured to identify vulnerability dependency on the codebase and file or specific code, and using the specific code and the API call to identify the developer who committed the code using the line of code (Bhalla, Col 17, lines 37-58, discloses that the code scanner 222, a dynamic application security testing scanner (DAST), can be used to scan code, extract software context from the source code 206 and SCM repository 216, capture code vulnerabilities, or detect code deficiencies which can confer risk in a software asset (source code, database, software application, OS, server, and so on (Col 5, lines 8-13)); Bhalla, Col 16, line 52 - Col 17, line 7, discloses extracting developers' identities from an SCM repository that tracks changes through a changelog, including "User- the user who made the change."; Bhalla, Col 26, lines 18-25, discloses providing programmatic access via an application programming interface (API). Under broadest reasonable interpretation, extracting SCM changelog data is inherently a programmatic operation, and accessing SCM repositories such as GitHub or Bitbucket (Col 16, lines 60-65) is performed via API);
and wherein the SCA tool is configured to identify the vulnerability dependency and license issues in open source and software libraries, or components included in the codebase (Bhalla, Col 15, lines 39-65, teaches software composition/dependency analysis by extracting dependency identities, enumerating transitive ("implied") dependencies, and identifying vulnerabilities and security risks in linked libraries via a dependency management system (e.g., PyPI, Maven, Sonatype Nexus), which corresponds to an SCA tool under the broadest reasonable interpretation; Col 5, lines 14-41, Bhalla's tasks for legal and licensing issues are associated with software assets and their dependencies);
Bhalla does not explicitly teach the following limitation; however, Dixon teaches:
wherein the specific code is an exact line number of a code (Dixon, para 45, discloses that static code analyzer products typically generate detailed data for each compliance violation, including the date and time of the violation, the type of violation, and the location of the source code containing the violation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s per-developer attribution and precise code-location identification capabilities because both references address the same recognized problem of managing large-scale software quality and accountability using SCM-based code history and automated analysis tools. One would have been motivated to make such a modification to integrate Dixon’s precise code-location identification with Bhalla’s SAST/DAST/SCA-based system to improve traceability, developer accountability, remediation accuracy, organizational visibility, and training effectiveness.
Regarding Claim 19, Bhalla/Dixon teaches the system according to claim 17:
wherein the Application Programming Interface (API) call made through the vulnerability management platform embedded in the vulnerability management module retrieves the information about the recent changes made to the codebase of the code repository, and also identifies which developer has made changes to the portion of the codebase where the security issue is found or the vulnerability dependency is defined. (Bhalla, Col 16, line 52 - Col 17, line 7, discloses extracting developers' identities from an SCM repository that tracks changes, and may introduce any vulnerabilities, through a changelog including "User- the user who made the change."; Bhalla, Col 26, lines 18-25, discloses providing programmatic access via an application programming interface (API). Under broadest reasonable interpretation, extracting SCM changelog data is inherently a programmatic operation, and accessing SCM repositories such as GitHub or Bitbucket (Col 16, lines 60-65) is performed via API).
Regarding Claim 23, Bhalla/Dixon teaches the system according to claim 17:
wherein the plurality of views to view the status of the task assigned by the task management module are interfaces or different perspectives within the vulnerability management platform that cater to specific stakeholders involved in the security issue resolution process, and each of the plurality of views provides relevant information and functionalities tailored to the needs of the respective role. (Dixon, para 25, 35, discloses the system provides integrated reporting that allows management to view various quality metrics, including, for example, quality of the project as a whole, quality of each team and groups of developers, and quality of individual developers' work via a Dashboard display; para 72-74 discloses multiple GUIs (overview page 400, project trend page 500, developers page 600) oriented to different stakeholders/roles, from managers to developers, each presenting different information useful to that role).
Dixon does not explicitly teach the following limitation; however, Bhalla teaches:
wherein the task comprises a ticket (Bhalla, Col 10, lines 15-23, discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.)
Regarding Claim 24, Bhalla/Dixon teaches the system according to claim 17:
wherein the insight view is a high-level overview or dashboard within the vulnerability management platform that provides a comprehensive summary of the overall security posture of the codebase (Dixon, Fig. 7, para 72, discloses an overview page 400 with small graphs 402 therein showing the recent behavior of the key quality metric for the development team as a whole; each of the five tables 404 shows the name of the developer, the value of the relevant metric, the number of days that the alert has been firing, and the value of the metric when the alert first fired);
and wherein the insight view is configured to provide findings by the developers, for creating one task report for multiple findings and assigning to the developer directly (Dixon, para 72 discloses the five tables 404, to the left and bottom of the screen, display alerts for any individual developers who have exceeded a prescribed threshold for a metric);
and wherein the insight view is typically designed for executives, security leaders, or other high-level stakeholders who need a quick and easily digestible snapshot of the security status of the organization's software projects (Dixon, para 70-71, discloses a graphical user interface (GUI) that allows software development management to get a quick overview of the various metrics; the KPI metrics 240 generated by the quality monitoring system 230 are provided to a manager or other end user; under BRI, this is the high-level leadership audience and a summarized dashboard);
and wherein the insight view includes metrics, charts, and graphs to represent the number of open security issues, their severity, trends over time, and other key security-related data (Dixon, Fig. 7, para 72, discloses an overview page 400 with charts and graphs showing the behavior of key quality metrics, compliance, severity, activity, and so on);
and wherein the insight view enables decision-makers to assess the overall security health of the software projects and take appropriate actions or allocate resources as needed (Dixon, para 79 discloses the described systems and techniques help to pinpoint actionable steps that assure project success, providing early identification of performance issues and action items, in order to address the progress and behaviors of individual team members);
Dixon does not explicitly teach the following limitation; however, Bhalla teaches:
wherein the task report comprises a ticket (Bhalla, Col 10, lines 15-23, discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s dashboard overview page because both references are directed to managing software quality/security posture and support decision making by higher-level stakeholders. One would have been motivated to make such a modification to integrate Dixon’s overview page visualization and executive-level monitoring features with Bhalla’s system to improve traceability, developer accountability, remediation accuracy, organizational visibility, and training effectiveness, as well as to support faster strategic decision making.
Regarding Claim 25, Bhalla/Dixon teaches the system according to claim 17:
wherein the Chief Technology Officer (CTO) view is a specific interface within the vulnerability management platform tailored for the CTO or other high-level technology executives, and the CTO view is configured to provide individual teams’ performance information, including the teams that are doing an excellent job of writing secure code and the teams that are not performing well (Dixon, para 39, discloses the system 200 further includes a graphical user interface (GUI) 250 that provides a development manager or other authorized user with access to the per-developer KPIs 240; para 72, Dixon discloses the mechanism to distinguish developers having high violations from developers performing very well for a specific metric);
and wherein the CTO view provides a more detailed and strategic perspective compared to the insight view, focusing on the security status and progress of various projects and teams under the CTO's purview (Dixon, para 73, discloses a project trend page 500 showing a greater level of detail for specific metrics; Fig. 8 shows greater detail levels of violations, test results, coverage, and lines of code, and a view of the compliance report);
and wherein the CTO view also offers aggregated metrics for multiple projects, allowing the CTO to evaluate the organization's security initiatives, identify potential areas of concern, and make strategic decisions related to security investments, team training, and project priorities (Dixon, para 73, discloses project trends, team performance, and individual developer metrics; para 79, Dixon discloses the described systems and techniques help to pinpoint actionable steps that assure project success, providing early identification of performance issues and action items, in order to address the progress and behaviors of individual team members).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s dashboard project trend page because both references are directed to managing software quality/security posture and support decision making by higher-level stakeholders. One would have been motivated to make such a modification to integrate Dixon’s project trend page visualization and executive-level monitoring features with Bhalla’s system to improve traceability, developer accountability, remediation accuracy, organizational visibility, and training effectiveness, as well as to support faster strategic decision making.
Regarding Claim 26, Bhalla/Dixon teaches the system according to claim 17:
wherein the Engineering Manager view is configured to provide the engineering manager the team’s performance at a developer level (Dixon, para 39, discloses the system 200 further includes a graphical user interface (GUI) 250 that provides a development manager or other authorized user with access to the per-developer KPIs 240; para 74, Dixon discloses FIG. 9 shows a “developers” page 600 that can be used to help assess the performance of developers over a span of time);
and wherein the Engineering Manager view provides a more granular and operational level of information related to security issues and tasks assigned to their teams (Dixon, Fig. 9, discloses a detailed view used to help assess developers’ performance and issues);
and wherein the Engineering Manager view is used by the engineering managers to track the progress of security issue resolutions, monitor the workload and performance of their team members, and ensure that security priorities align with project timelines (Dixon, Fig. 9, para 72, discloses a detailed view used to help assess developers’ performance and issues; para 81-82, Dixon discloses the systems and techniques also help to ensure the productivity of the team and meet project deadlines; in addition, managers are able to continuously enforce testing and standards compliance throughout the entire development phase);
Dixon does not explicitly teach the following limitation; however, Bhalla teaches:
wherein the task comprises a ticket (Bhalla, Col 10, lines 15-23, discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s dashboard developers page because both references address oversight of software quality and developer accountability. One would have been motivated to make such a modification to integrate Dixon’s developers page visualization and monitoring features with Bhalla’s system to allow engineers to operationally track security remediation progress, monitor team health, and manage resource allocation and performance in real time.
Regarding Claim 27, Bhalla/Dixon teaches the system according to claim 17:
wherein the developer view is a user interface specifically tailored for individual developers who are assigned tickets to address the security issues (Dixon, para 74 discloses a specific UI view developers page 600 dedicated to the individual developer's metric and history);
and wherein the developer view also provides details of the security issues assigned to them, including the exact line number of the code where the security issue is found, along with any supporting information from the plurality of application security testing methodologies (Dixon, para 45, discloses a static code analyzer identifying the specific file and line of a violation in the source code and generating detailed data for each compliance violation, including the date and time of the violation, the type of violation, and the location of the source code containing the violation; para 37, Dixon discloses various tools, such as static analysis, coverage, and unit testing, that provide information or context for why a piece of code is flagged (para 56, 63, 69));
and wherein the developers use the developer view to access the necessary information to understand the security issue, review any relevant testing results, and work on resolving the issue in the codebase (Dixon, Fig. 9, para 50, discloses that developers see the violations and are required to fix the pre-existing violations);
and also to update the task status, communicate with other team members, and provide feedback or additional information related to the security issue (Dixon, para 41, discloses that when the developer fixes code, the system recalculates/updates the status; para 75, Dixon discloses the feedback loop between the system and developers);
and wherein the developer view also provides information on individual training requirements to the developer, coding improvement over time with respect to writing secure code (Dixon, para 25, 33, 74, discloses the system provides integrated reporting that allows management to view quality of individual developer's work via a Dashboard display and provide a turnkey solution to quality control issues, including discovery, recommendation, installation, implementation, and training and measure improvement over time).
Dixon does not explicitly teach the following limitation; however, Bhalla teaches:
wherein the task comprises a ticket (Bhalla, Col 10, lines 15-23, discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla’s system to incorporate Dixon’s dashboard developers page because both references address oversight of software quality and developer accountability. One would have been motivated to make such a modification to integrate Dixon’s developers page visualization and monitoring features with Bhalla’s system to allow the assigned developer to directly view security issues, evidence, and progress, and to receive individualized performance and training guidance, thereby enhancing usability, accountability, transparency, and continuous quality improvement.
Regarding Claim 28, Bhalla/Dixon teaches the system according to claim 17:
wherein the training module, while providing training requirements and assessing the effectiveness of the training, is configured for: a. mapping vulnerabilities or security issues to a training category; and wherein mapping the security issues to a training category involves mapping each of the security issues to a corresponding training category, to determine the type of training needed to address and prevent similar security issues in the future (Dixon, para 46, discloses categorizing violations (under BRI, proxies for security issues) by types and priorities such as low, medium, and high priority; para 25, Dixon discloses providing a turnkey solution for training, and under broadest reasonable interpretation these categories serve as the basis for a training category);
b. identifying individual training requirements; and wherein the individual training requirements is determined by the CTO for an individual developer; and wherein the individual training provides an identified individual developer who modified the code and introduced a particular security vulnerability, the classification system pinpoints the type of training the individual developer needs, to write more secure code (Dixon, para 22, discloses that once an issue is discovered on a per-developer basis, the system provides a recommendation for training tailored to the individual's specific needs under broadest reasonable interpretation (para 25); para 56, Dixon discloses that a version control system is used to identify which developer is responsible for each modification to the source code. In step 302, a code analysis tool is used to generate compliance violations data);
c. evaluating training effectiveness (Dixon, para 35 discloses that the trend of number of errors detected shows if there has been any improvement or not);
d. recording the training provided and progress (Dixon, para 73 discloses the large graph 502 in FIG. 8 shows the performance of each developer on the team over time.);
e. aggregating training effectiveness data; and wherein the aggregation of training effectiveness data allows the vulnerability management platform to identify specific training areas or categories where improvements are needed at both an individual and team level (Dixon, para 25 discloses aggregated data at three levels: Individual, Team and Project. This allows the system to see if a training area (like unit testing) needs improvement across the entire team or just for one individual).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla's system to incorporate Dixon's structured per-developer metric tracking, categorization, and trend-analysis capabilities in order to support mapping issues to training categories, training assignment, and effectiveness assessment. One would be motivated to make such a modification to assign appropriate training, record progress, and confirm whether the training is actually effective, thereby enhancing accountability, governance, and continuous security improvement.
Regarding Claim 29, Bhalla/Dixon teaches the system according to claim 17:
wherein the training module is configured to evaluate an effectiveness of training given by monitoring the developers who received training on specific security topics (Dixon, para 41, discloses that the system is operable to periodically communicate with the version control subsystem for updates, which implies that it monitors subsequent code changes by downloading revised code and then recalculating the KPIs to see if the developer's behavior has changed, under broadest reasonable interpretation);
and assessing the developer on subsequent code changes, if similar vulnerabilities are still introduced or if improvements are observed post-training (Dixon, para 35 uses trend information to distinguish between improvement (the goal of the training) and decline (the failure of training or gaps));
and wherein by assessing the developer post-training and by measuring the type of security issues introduced by the same developer after training (Dixon, para 73 discloses tracking specific metrics over time which allows the system to see if a developer is still introducing the same type of violation they were previously flagged for (para 69));
and mapping the security issues to the training category the developer received, the vulnerability management platform identifies gaps in knowledge or assesses the effectiveness of the training provided (Dixon, para 46, discloses categorizing violations (proxies for security issues under BRI) by type and priority, such as low, medium, and high priority; para 25, Dixon discloses providing a turnkey solution for training and, under broadest reasonable interpretation, these categories serve as the basis for training categories; Dixon, para 35, uses trend information to distinguish between improvement (the goal of the training) and decline (the failure of training or gaps)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla's system to incorporate Dixon's structured per-developer metric tracking, categorization, and trend-analysis capabilities in order to support mapping issues to training categories, training assignment, and effectiveness assessment. One would be motivated to make such a modification to assign appropriate training, record progress, and confirm whether the training is actually effective, thereby enhancing accountability, governance, and continuous security improvement.
Claims 4 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bhalla (US 11379219 B2) in view of Dixon (US 20090070734 A1), and further in view of Schroeder (US 20200192780 A1).
Regarding Claim 4, Bhalla/Dixon teaches the method of claim 1:
Bhalla, Col 10, lines 15-23 discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.
Dixon does not explicitly teach the following; however, Schroeder teaches:
wherein the ticket assigned to the identified developer is updated in a tracking system to track the security issue and its resolution; and wherein the security issue is assigned to the identified developer for further investigation and resolution (Schroeder, para 24, discloses generating and assigning a ticket to the responsible developer (e.g., the developer who checked in the code that caused the regression) using an issue-tracking system like Jira to track the issue. The responsible developer then reads such notifications/alerts and determines the cause of the regression).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon's system to incorporate Schroeder's issue tracking and resolution workflow. One would be motivated to integrate Schroeder's technique of assigning the ticket to the responsible developer and continuously updating that ticket within an external tracking system (such as Jira) to monitor investigation progress and resolution of the defect or regression, in order to provide an end-to-end remediation workflow, automatically assign identified issues to the correct developer, maintain ongoing resolution visibility, and ensure accountability.
Regarding Claim 20, Bhalla/Dixon teaches the system according to claim 17:
Bhalla, Col 10, lines 15-23 discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.
Dixon does not explicitly teach the following; however, Schroeder teaches:
wherein the ticketing management module updates the ticket assigned to the identified developer in a tracking system to track the security issue and its resolution; and wherein the security issue is assigned to the identified developer for further investigation and resolution (Schroeder, para 24, discloses generating and assigning a ticket to the responsible developer (e.g., the developer who checked in the code that caused the regression) using an issue-tracking system like Jira to track the issue. The responsible developer then reads such notifications/alerts and determines the cause of the regression).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon's system to incorporate Schroeder's issue tracking and resolution workflow. One would be motivated to integrate Schroeder's technique of assigning the ticket to the responsible developer and continuously updating that ticket within an external tracking system (such as Jira) to monitor investigation progress and resolution of the defect or regression, in order to provide an end-to-end remediation workflow, automatically assign identified issues to the correct developer, maintain ongoing resolution visibility, and ensure accountability.
Claims 5 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Bhalla (US 11379219 B2) in view of Dixon (US 20090070734 A1), further in view of Schroeder (US 20200192780 A1), and further in view of Forth (US 20070282660 A1).
Regarding Claim 5, Bhalla/Dixon teaches the method of claim 1:
Bhalla, Col 10, lines 15-23 discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.
Dixon does not explicitly teach the following; however, Schroeder teaches:
wherein the ticket is directly assigned to the identified developer, if the identified developer is registered on the vulnerability management platform (Schroeder, para 21, discloses an issue (e.g., bug) tracking system that creates a ticket in the system describing the regression; para 24, Schroeder discloses generating and assigning a ticket to the responsible developer (e.g., the developer who checked in the code that caused the regression) using an issue-tracking system like Jira to track the issue; para 27, Schroeder implies that a user with access to the platform UI is a registered user);
and wherein an email address is used for ticket assignment and identification (Schroeder, para 24, 27, further discloses the use of an email notification method when a direct platform action, like a pull request, triggers a report);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon's system to incorporate Schroeder's direct ticket assignment, platform-registered developer workflow, and integrated notification mechanism. One would be motivated to make such a modification to streamline remediation, ensure automated routing of responsibility to the correct registered user, improve traceability, and leverage standard enterprise defect-tracking practices.
Schroeder does not explicitly teach the following; however, Forth teaches:
and wherein an email address is used for task assignment and identification, if the identified developer is not registered on the vulnerability management platform (Forth, para 35, discloses using the assignee's email address to assign tasks if the assignee is not registered with the service).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon/Schroeder's system to incorporate Forth's fallback email-based assignment mechanism for unregistered users. Schroeder teaches direct ticket assignment, a platform-registered developer workflow, and an integrated notification mechanism; Forth enables ticket/task assignment and identification through email even when the assignee is not registered. One would be motivated to make such a modification to ensure continuity of remediation workflows, improve usability, and maintain accountability when platform user registration is incomplete.
Regarding Claim 21, Bhalla/Dixon teaches the system according to claim 17:
Bhalla, Col 10, lines 15-23 discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.
Dixon does not explicitly teach the following; however, Schroeder teaches:
wherein the ticket is directly assigned to the identified developer, if the identified developer is registered on the vulnerability management platform embedded in the vulnerability management module (Schroeder, para 21, discloses an issue (e.g., bug) tracking system that creates a ticket in the system describing the regression; para 24, Schroeder discloses generating and assigning a ticket to the responsible developer (e.g., the developer who checked in the code that caused the regression) using an issue-tracking system like Jira to track the issue; para 27, Schroeder implies that a user with access to the platform UI is a registered user);
and wherein an email address is used for ticket assignment and identification (Schroeder, para 24, 27, further discloses the use of an email notification method when a direct platform action, like a pull request, triggers a report);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon's system to incorporate Schroeder's direct ticket assignment, platform-registered developer workflow, and integrated notification mechanism. One would be motivated to make such a modification to streamline remediation, ensure automated routing of responsibility to the correct registered user, improve traceability, and leverage standard enterprise defect-tracking practices.
Schroeder does not explicitly teach the following; however, Forth teaches:
and wherein an email address is used for ticket assignment and identification if the identified developer is not registered on the vulnerability management platform embedded in the vulnerability management module (Forth, para 35, discloses using the assignee's email address to assign tasks if the assignee is not registered with the service).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon/Schroeder's system to incorporate Forth's fallback email-based assignment mechanism for unregistered users. Schroeder teaches direct ticket assignment, a platform-registered developer workflow, and an integrated notification mechanism; Forth enables ticket/task assignment and identification through email even when the assignee is not registered. One would be motivated to make such a modification to ensure continuity of remediation workflows, improve usability, and maintain accountability when platform user registration is incomplete.
Claims 6 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Bhalla (US 11379219 B2) in view of Dixon (US 20090070734 A1), and further in view of Schmidt (US 20070185754 A1).
Regarding Claim 6, Bhalla/Dixon teaches the method of claim 1,
Dixon, para 56, discloses a system that identifies which developer is responsible for each modification to the source code and generates compliance violation data;
Bhalla teaches:
wherein the developer information includes the developer who last amended the code or file or code repository (Bhalla, Col 16, line 52 through Col 17, line 7, discloses extracting developers' identities from an SCM repository that tracks changes through a changelog including "User- the user who made the change.");
Bhalla does not explicitly teach the following; however, Schmidt teaches:
and wherein if the developer information is not available on the vulnerability management platform, then an engineering owner of a product or security owner or business owner as configured during product creation is assigned the ticket (Schmidt, para 61, discloses that if the user who is currently assigned the task is absent, the owner receives the task and makes a decision regarding delegation);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon's system to incorporate Schmidt's fallback ownership reassignment mechanism. Bhalla identifies the responsible developer from SCM history, and Dixon already ties accountability, monitoring, and remediation performance to a specific developer. Schmidt teaches routing responsibility to a designated higher-level owner who assumes decision authority for resolution or delegation. One would be motivated to make such a modification to maintain continuity of remediation, ensure that accountability does not stall, and support policy-driven task redirection, yielding improved robustness, governance, and operational reliability.
Bhalla/Schmidt does not explicitly teach the following; however, Dixon teaches:
and in default a default assignee or the developer as configured in a rulebook is also assigned with the ticket (Dixon, para 49-50, discloses that the identified developer is the default developer to get assigned the task).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Schmidt's system to incorporate Dixon's technique in which an identified developer functions as the default responsible party. One would be motivated to make such a modification to create a deterministic, rule-driven assignment process that selects the appropriate default developer, which improves automation reliability, reduces manual intervention, and enhances accountability tracking.
Regarding Claim 22, Bhalla/Dixon teaches the system according to claim 17,
Dixon, para 56, discloses a system that identifies which developer is responsible for each modification to the source code and generates compliance violation data;
Bhalla teaches:
wherein the developer information includes the developer who last amended the code or file or code repository (Bhalla, Col 16, line 52 through Col 17, line 7, discloses extracting developers' identities from an SCM repository that tracks changes through a changelog including "User- the user who made the change.");
Bhalla does not explicitly teach the following; however, Schmidt teaches:
and wherein if the developer information is not available on the vulnerability management platform, then an engineering owner of a product or security owner or business owner as configured during product creation is assigned the ticket (Schmidt, para 61, discloses that if the user who is currently assigned the task is absent, the owner receives the task and makes a decision regarding delegation);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon's system to incorporate Schmidt's fallback ownership reassignment mechanism. Bhalla identifies the responsible developer from SCM history, and Dixon already ties accountability, monitoring, and remediation performance to a specific developer. Schmidt teaches routing responsibility to a designated higher-level owner who assumes decision authority for resolution or delegation. One would be motivated to make such a modification to maintain continuity of remediation, ensure that accountability does not stall, and support policy-driven task redirection, yielding improved robustness, governance, and operational reliability.
Bhalla/Schmidt does not explicitly teach the following; however, Dixon teaches:
and in default a default assignee or the developer as configured in a rulebook is also assigned with the ticket (Dixon, para 49-50, discloses that the identified developer is the default developer to get assigned the task).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Schmidt's system to incorporate Dixon's technique in which an identified developer functions as the default responsible party. One would be motivated to make such a modification to create a deterministic, rule-driven assignment process that selects the appropriate default developer, which improves automation reliability, reduces manual intervention, and enhances accountability tracking.
Claims 14 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Bhalla (US 11379219 B2) in view of Dixon (US 20090070734 A1), and further in view of Wong (US 20200211135 A1).
Regarding Claim 14, Bhalla/Dixon teaches the method according to claim 12,
Bhalla/Dixon does not explicitly teach the following; however, Wong teaches:
wherein the recording of the training provided and progress involves capturing training details including training date, type of training, and attendees (Wong, para 35 discloses storing/recording user profile in memory 105, user profile includes information regarding one or more training sequences completed by the developer. The information regarding the one or more training sequences comprises one or more of a topic of the training sequence, a difficulty level of the training sequence, a content of the training sequence, a success rate of the training sequence, a time the training sequence was performed, a duration of time taken to complete the training sequence, a quantity of errors that occurred while performing a training sequence, a quantity of compliances that occurred while performing the training sequence, or some other suitable information indicative of a developer's level of performance while attempting to complete the training sequence.);
and wherein the recording training events and progress also enable easy tracking of individual developer’s progress over time, indicating how their coding practices have improved post-training (Wong, para 46 discloses analyzing a developer's progress over time;
para 93, Wong discloses tracking and re-assessing the developer's competency has improved over time after the training).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon's system to incorporate Wong's training-recording and historical progress-tracking capabilities. Wong captures detailed training records, including the topic of the training sequence, its difficulty level, its content, its success rate, the time it was performed, the duration of time taken to complete it, and the quantity of errors that occurred while performing it. One would be motivated to make such a modification to enable closed-loop remediation management, recording training events, evidencing participation, and demonstrating measurable improvement, thereby enhancing accountability, compliance reporting, and governance consistency.
Regarding Claim 30, Bhalla/Dixon teaches the system according to claim 28,
Bhalla/Dixon does not explicitly teach the following; however, Wong teaches:
wherein the training module is configured to record the training and progress by capturing training details including training date, type of training, and attendees (Wong, para 35, discloses storing/recording a user profile in memory 105; the user profile includes information regarding one or more training sequences completed by the developer. The information regarding the one or more training sequences comprises one or more of a topic of the training sequence, a difficulty level of the training sequence, a content of the training sequence, a success rate of the training sequence, a time the training sequence was performed, a duration of time taken to complete the training sequence, a quantity of errors that occurred while performing a training sequence, a quantity of compliances that occurred while performing the training sequence, or some other suitable information indicative of a developer's level of performance while attempting to complete the training sequence.);
and wherein the recording training events and progress also enables easy tracking of individual developer’s progress over time, indicating how their coding practices have improved post-training (Wong, para 46 discloses analyzing a developer's progress over time;
para 93, Wong discloses tracking and re-assessing the developer's competency has improved over time after the training).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon's system to incorporate Wong's training-recording and historical progress-tracking capabilities. Wong captures detailed training records, including the topic of the training sequence, its difficulty level, its content, its success rate, the time it was performed, the duration of time taken to complete it, and the quantity of errors that occurred while performing it. One would be motivated to make such a modification to enable closed-loop remediation management, recording training events, evidencing participation, and demonstrating measurable improvement, thereby enhancing accountability, compliance reporting, and governance consistency.
Claims 15-16 and 31-32 are rejected under 35 U.S.C. 103 as being unpatentable over Bhalla (US 11379219 B2) in view of Dixon (US 20090070734 A1), and further in view of Pezaris (US 20210124561 A1).
Regarding Claim 15, Bhalla/Dixon teaches the method of claim 1,
Bhalla, Col 10, lines 15-23 discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.
Bhalla/Dixon does not explicitly teach the following; however, Pezaris teaches:
wherein the vulnerability management platform also provides an option of auto-suggest/assignment of the owner, while creating a ticket manually (Pezaris, para 77 discloses users are allowed to create/issue tickets; Para 52, Pezaris discloses that the user may be provided with a list of suggested or recommended reviewers 104 to include and assign the code review to. The suggestion may be based on the organizational structure of the development team, original authors of the code, teams assigned to the modified code, developers working on similar features/issues/ticket or any combination thereof.);
and wherein the auto-suggestion of the owner of a ticket is performed when the ticket is created automatically through the runbook (Pezaris, para 55, discloses automated workflow steps around tickets/issues; when certain actions occur (e.g., a user requests review, or a branch/pull-request workflow), the system automatically packages changes and updates task state. The submission of the code review may trigger notifications to be sent to the selected reviewers or any additionally selected or automatically selected users, reviewers, or colleagues; para 56, Pezaris discloses the suggestions may be made by round-robin, random assignment, authorship, or other rules-based criteria.);
and wherein the auto-suggestion of the owner for the ticket created manually is based on the code analysis of which developer made the last change just before the vulnerability or security issue is introduced (Pezaris, para 56 discloses the use of code authorship at the line level
(authorship of the lines of code impacted by the changes) as the driver for suggesting responsible reviewer as well as developers).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon's system to incorporate Pezaris's automated reviewer/assignee suggestion mechanism. Pezaris complements Bhalla/Dixon's teaching by teaching automatic rule-based suggestion/assignment of a responsible owner during both manual ticket creation and automated workflows. One would be motivated to make such a modification to reduce manual effort, improve accuracy in routing responsibility to the most relevant developer, and ensure consistency between automated and manually created tickets, yielding efficiency, accountability, and operational robustness.
Regarding Claim 31, Bhalla/Dixon teaches the system according to claim 17,
Bhalla, Col 10, lines 15-23 discloses generating tickets comprising features of what the software asset should do. Tickets written in natural language can become the input to task identification in the system, where a natural language processor extracts natural language context from the tickets.
Bhalla/Dixon does not explicitly teach the following; however, Pezaris teaches:
wherein the vulnerability management platform embedded in the vulnerability management module also provides an option of auto suggest/assignment of owner, while creating a ticket manually (Pezaris, para 77 discloses users are allowed to create/issue tickets; Para 52, Pezaris discloses that the user may be provided with a list of suggested or recommended reviewers 104 to include and assign the code review to. The suggestion may be based on the organizational structure of the development team, original authors of the code, teams assigned to the modified code, developers working on similar features/issues/ticket or any combination thereof.);
and wherein the auto-suggestion of the owner of a ticket is performed when the ticket is created automatically through the runbook (Pezaris, para 55, discloses automated workflow steps around tickets/issues; when certain actions occur (e.g., a user requests review, or a branch/pull-request workflow), the system automatically packages changes and updates task state. The submission of the code review may trigger notifications to be sent to the selected reviewers or any additionally selected or automatically selected users, reviewers, or colleagues; para 56, Pezaris discloses the suggestions may be made by round-robin, random assignment, authorship, or other rules-based criteria.);
and wherein the auto suggestion of owner for the ticket created manually is based on the code analysis of which developer made the last change just before the vulnerability or security issue is introduced (Pezaris, para 56 discloses the use of code authorship at the line level
(authorship of the lines of code impacted by the changes) as the driver for suggesting responsible reviewer as well as developers).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bhalla/Dixon's system to incorporate Pezaris's automated reviewer/assignee suggestion mechanism. Pezaris complements Bhalla/Dixon's teaching by teaching automatic rule-based suggestion/assignment of a responsible owner during both manual ticket creation and automated workflows. One would be motivated to make such a modification to reduce manual effort, improve accuracy in routing responsibility to the most relevant developer, and ensure consistency between automated and manually created tickets, yielding efficiency, accountability, and operational robustness.
Regarding Claim 16, Bhalla/Dixon/Pezaris teaches the method of claim 15,
Dixon teaches:
identifying the vulnerability or security issue by the vulnerability management platform (Dixon, para 45, discloses a static code analyzer identifying the specific file and line of a violation in the source code and generating detailed data for each compliance violation, including the date and time of the violation, the type of violation, and the location of the source code containing the violation; para 37, Dixon discloses various tools, such as static analysis, coverage, and unit testing tools, to provide information or context for why a piece of code is flagged (para 56, 63, 69));
and wherein the ticket created comprises the information about the nature of the issue, including a description of the problem and possibly the exact line number or code section where the vulnerability or the security issue is found (Dixon, para 45, discloses a static code analyzer identifying the specific file and line of a violation in the source code and generating detailed data for each compliance violation, including the date and time of the violation, the type of violation, and the location of the source code containing the violation; para 37, Dixon discloses various tools, such as static analysis, coverage, and unit testing tools, to provide information or context for why a piece of code is flagged (para 56, 63, 69));
analyzing the code history to trace the changes made to the affected code section over time; and wherein the analyzing the code history involves identifying the last developer or team member who made a code change to the specific line number or code section just before the vulnerability dependency or the security issue is introduced (Dixon, para 49, discloses the version control system 210 includes a repository containing a complete history of the application's source code, identifying which developer is responsible for each and every modification. The version control system 210 therefore produces code listings that attribute each line of code to the developer that last changed it and assigns each code violation to a member of the development team.);
suggesting the developer as a potential assignee, who made the last code change before the vulnerability is introduced to the ticket (Dixon, para 49, discloses the version control system 210 produces code listings that attribute each line of code to the developer that last changed it and assigns each code violation to a member of the development team.);
and wherein the auto-suggest feature offers the last developer as the initial recommendation (Dixon, para 49, discloses the version control system 210 includes a repository containing a complete history of the application's source code, identifying which developer is responsible for each and every modification. The version control system 210 therefore produces code listings that attribute each line of code to the developer that last changed it and assigns each code violation to a member of the development team.);
Dixon does not explicitly teach the following limitations; however, Pezaris teaches:
when a user creates a new ticket to address the security issues (Pezaris, para 77 discloses users are allowed to create/issue tickets);
reviewing the suggested potential assignee and deciding whether to accept the auto-suggestion or make manual adjustments, by the user who created the ticket (Pezaris, para 52, discloses that the user may be provided with a list of suggested or recommended reviewers 104 to include and assign the code review to; Pezaris, para 56, discloses adjusting the suggestions and determining if and how suggested reviewers are assigned when a code review is requested. The suggestions may be made by round-robin, random assignment, authorship, or other rules-based criteria.);
and wherein the user has the flexibility to override the auto-suggestion and manually assign the ticket to a different developer (Pezaris, para 52, discloses allowing a user to add additional reviewers not currently displayed by searching for one or more reviewers; Pezaris, para 56, discloses the suggestions may be made by round-robin, random assignment, authorship, or other rules-based criteria. The selection of authorship may suggest one or more reviewers based on the authorship of the lines of code impacted by the changes, as well as other developers who may have committed to the branch.).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Bhalla/Dixon’s system to incorporate Pezaris’s automated reviewer/assignee suggestion mechanism. Pezaris complements Bhalla/Dixon’s teaching by teaching automatic, rule-based suggestion and assignment of a responsible owner during both manual ticket creation and automated workflows. One would be motivated to perform such a modification to Bhalla/Dixon’s system to reduce manual effort, improve accuracy in routing responsibility to the most relevant developer, and ensure consistency between automated and manually created tickets, thereby yielding efficiency, accountability, and operational robustness.
Regarding Claim 32, Bhalla/Dixon/Pezaris teaches the system according to claim 31,
Dixon teaches:
identifying the vulnerability or security issue by the vulnerability management platform (Dixon, para 45, discloses a static code analyzer identifying the specific file and line of a violation in the source code and generating detailed data for each compliance violation, including the date and time of the violation, the type of violation, and the location of the source code containing the violation; para 37 discloses various tools, such as static analysis, coverage, and unit test tools, that provide information or context for why a piece of code is flagged (paras 56, 63, 69));
and wherein the ticket created comprises the information about the nature of the issue, including a description of the problem and possibly the exact line number or code section where the vulnerability or the security issue is found (Dixon, para 45, discloses a static code analyzer identifying the specific file and line of a violation in the source code and generating detailed data for each compliance violation, including the date and time of the violation, the type of violation, and the location of the source code containing the violation; para 37 discloses various tools, such as static analysis, coverage, and unit test tools, that provide information or context for why a piece of code is flagged (paras 56, 63, 69));
analyzing the code history to trace the changes made to the affected code section over time; and wherein the analyzing the code history involves identifying the last developer or team member who made a code change to the specific line number or code section just before the vulnerability dependency or the security issue is introduced (Dixon, para 49, discloses the version control system 210 includes a repository containing a complete history of the application's source code, identifying which developer is responsible for each and every modification. The version control system 210 therefore produces code listings that attribute each line of code to the developer that last changed it and assigns each code violation to a member of the development team.);
suggesting the developer as a potential assignee, who made the last code change before the vulnerability is introduced to the ticket (Dixon, para 49, discloses the version control system 210 produces code listings that attribute each line of code to the developer that last changed it and assigns each code violation to a member of the development team.);
and wherein the auto-suggest feature offers the last developer as the initial recommendation (Dixon, para 49, discloses the version control system 210 includes a repository containing a complete history of the application's source code, identifying which developer is responsible for each and every modification. The version control system 210 therefore produces code listings that attribute each line of code to the developer that last changed it and assigns each code violation to a member of the development team.);
Dixon does not explicitly teach the following limitations; however, Pezaris teaches:
when a user creates a new ticket to address the security issues (Pezaris, para 77 discloses users are allowed to create/issue tickets);
reviewing the suggested potential assignee and deciding whether to accept the auto-suggestion or make manual adjustments, by the user who created the ticket (Pezaris, para 52, discloses that the user may be provided with a list of suggested or recommended reviewers 104 to include and assign the code review to; Pezaris, para 56, discloses adjusting the suggestions and determining if and how suggested reviewers are assigned when a code review is requested. The suggestions may be made by round-robin, random assignment, authorship, or other rules-based criteria.);
and wherein the user has the flexibility to override the auto-suggestion and manually assign the ticket to a different developer (Pezaris, para 52, discloses allowing a user to add additional reviewers not currently displayed by searching for one or more reviewers; Pezaris, para 56, discloses the suggestions may be made by round-robin, random assignment, authorship, or other rules-based criteria. The selection of authorship may suggest one or more reviewers based on the authorship of the lines of code impacted by the changes, as well as other developers who may have committed to the branch.).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Bhalla/Dixon’s system to incorporate Pezaris’s automated reviewer/assignee suggestion mechanism. Pezaris complements Bhalla/Dixon’s teaching by teaching automatic, rule-based suggestion and assignment of a responsible owner during both manual ticket creation and automated workflows. One would be motivated to perform such a modification to Bhalla/Dixon’s system to reduce manual effort, improve accuracy in routing responsibility to the most relevant developer, and ensure consistency between automated and manually created tickets, thereby yielding efficiency, accountability, and operational robustness.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIT KHADKA whose telephone number is (703)756-1440. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey L. Nickerson can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMIT KHADKA/Examiner, Art Unit 2432
/SYED A ZAIDI/Primary Examiner, Art Unit 2432