Prosecution Insights
Last updated: April 19, 2026
Application No. 18/740,438

DATA SECURITY TRANSACTIONS USING SOFTWARE CONTAINER MACHINE READABLE CONFIGURATION DATA

Non-Final OA §103

Filed: Jun 11, 2024
Examiner: MAYE, AYUB A
Art Unit: 2436
Tech Center: 2400 — Computer Networks
Assignee: Sylabs IP Holdings, LLC
OA Round: 5 (Non-Final)

Grant Probability: 58% (Moderate)
OA Rounds: 5-6
To Grant: 5y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases: 377 granted / 652 resolved; at TC average)
Interview Lift: +41.6% (strong; allowance rate across resolved cases with an interview vs. without)
Avg Prosecution (typical timeline): 5y 2m
Career History: 684 total applications across all art units; 32 currently pending
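The headline statistics above are simple ratios over the examiner's resolved cases; a minimal sketch of the arithmetic (the with/without-interview split below is hypothetical for illustration, since the report only states the aggregate +41.6% lift):

```python
# Figures from the report: 377 allowances out of 652 resolved cases.
granted, resolved = 377, 652
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # -> 57.8%, displayed as 58%

# Interview lift = allowance rate among resolved cases that had an
# examiner interview minus the rate among those that did not.
# The 160/200 vs 217/452 split here is hypothetical.
def interview_lift(granted_with, resolved_with, granted_without, resolved_without):
    return granted_with / resolved_with - granted_without / resolved_without

print(f"Lift: {interview_lift(160, 200, 217, 452):+.1%}")
```

The same two ratios drive the 58% baseline and 99% with-interview projections shown above.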

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 652 resolved cases
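A quick consistency check on the figures above: subtracting each stated delta from the examiner's rate recovers the implied Tech Center baseline, which comes out to the same 40.0% for every statute (a sketch of the arithmetic, not part of the report's methodology):

```python
# Examiner's per-statute rates and stated deltas vs the TC average (in %).
rates  = {"101": 3.0, "103": 57.5, "102": 18.6, "112": 13.2}
deltas = {"101": -37.0, "103": 17.5, "102": -21.4, "112": -26.8}

# Implied TC average = examiner rate minus delta.
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute implies the same 40.0% baseline
```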

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/02/2026 has been entered.

Claim Objections

Claims 1, 13 and 20 are objected to because of the following informalities: claims 1, 13 and 20 recite the limitation "the second subset of score" in line 31. There is insufficient antecedent basis for this limitation in the claim. It is suggested to amend to "the subset of second scores". Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-11, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hufsmith et al. (2020/0097662) in view of Golan et al. (2022/0405397), LeCour (2019/0180049) and Wang et al. (2017/0068655).
For claim 1, Hufsmith teaches a method (abstract), comprising:

receiving, in response to a query (the examiner notes that the system queries a vulnerability repository with a request for security vulnerabilities, then submits queries to the vulnerability repository with requests for security vulnerabilities corresponding to these other materials and associates them, as Hufsmith teaches in par.188 and par.189), an artifact associated with a software container (the examiner notes that development teams are constantly updating/creating microservices in containers and deploying them to production, as Hufsmith teaches in par.188 and par.158);

parsing the artifact to identify source code of the software container (the examiner notes that the system parses source code of a Dockerfile or other domain-specific programming language document by which a container image is specified, as Hufsmith teaches in par.163) by invoking a scanning engine configured to generate a score associated with each of one or more components of the software container (Hufsmith teaches that the scanner results, or scanner properties, are determined for the container image by each of the various vulnerability scanning engines in view of a context of a given execution environment for the container image; the scanner properties determined by each vulnerability scanning engine are adjusted responsive to properties of the context and normalized to determine component threat scores for the container image, as Hufsmith teaches in par.32), the scanning engine also being configured the software container to identify a security threat (Hufsmith teaches scanning engine 12 may be configured to execute a process described below with reference to FIG. 5 to generate a combined threat score for a distributed application or container image, such as with the results engine 54; the scanning engine 12 is described with reference to vulnerability scanning, such as security vulnerability scans, but the techniques described may be implemented in accordance with a variety of other types of testing, such as dynamic testing, functional testing, performance testing, and the like, with different types of testing applications invoked for different container images or portions thereof in accordance with the techniques described below, as Hufsmith teaches in par.67);

invoking the scoring engine configured to run an algorithm (the examiner notes that the scanning engine may be configured to execute a process described below with reference to FIG. 4 to scan container images or distributed applications for vulnerabilities and create score records (e.g., score files or attributes of objects in dynamic memory); the scanning engine 12 is described with reference to vulnerability scanning, such as security vulnerability scans, but the techniques described may be implemented in accordance with a variety of other types of testing, such as dynamic testing, functional testing, performance testing, and the like, with different types of testing applications invoked for the container images or distributed applications, as Hufsmith teaches in par.67) to evaluate the source code (part of the scanner evaluation includes source code, as Hufsmith teaches in par.73) to identify one or more components associated with the software container (the examiner notes that engine 54 may include one or more components for evaluating properties associated with a container image, which is the software container, as Hufsmith teaches in par.109), implementing the scoring engine also being configured to generate the score associated with each of the one or more components (the examiner notes that score records, such as score records for distributed applications, may be generated for and correspond to composition files in the repository, and score records for container images may correspond to container images in the image repository, as Hufsmith teaches in par.108), the score being generated by referencing an identifier of each of the one or more components against a library of referenced identifiers (the examiner notes that the system includes an identifier for referencing the score in the database as a reference point for each component, as Hufsmith teaches in par.102, par.127 and par.133), a value of the score being adjusted if the identifier indicates the security tool is invoked by each of the one or more components (the examiner notes that scores may be adjusted based on context properties for an execution environment, as Hufsmith teaches in par.147);

generating and assigning an identifier (par.129), each of the referenced identifiers being associated with at least one security tool (par.66); executing the algorithm until each of the one or more components of the software container has been evaluated and each of the one or more components has been assigned one or more scores (the examiner notes that each of the components is evaluated and assigned scores, as Hufsmith teaches in par.132 and par.133); and executing the algorithm until each of the one or more components of the software container has been evaluated and generating an aggregate score using an aggregate scoring engine, the aggregate score being determined (the examiner notes that the scores, which can be weighted based on context properties, for the different metrics may be aggregated and reported as the combined threat score, as Hufsmith teaches in par.90 and par.144).
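The claim 1 flow mapped above (parse the artifact into components, look up each component's identifier in a library of referenced identifiers, adjust the value when a security tool is invoked, then aggregate) can be sketched roughly as follows. All names, scores, and data shapes here are hypothetical illustrations, not code from the application or from Hufsmith:

```python
from dataclasses import dataclass

@dataclass
class Component:
    identifier: str              # e.g. a package or layer name
    invokes_security_tool: bool  # flagged while parsing the source code

# Hypothetical "library of referenced identifiers" mapping each
# identifier to a base risk score.
REFERENCE_LIBRARY = {"openssl": 7.0, "busybox": 3.0}
DEFAULT_SCORE = 5.0

def score_component(c: Component) -> float:
    # Lookup operation: compare the identifier to the library.
    score = REFERENCE_LIBRARY.get(c.identifier, DEFAULT_SCORE)
    # Adjust the value if the identifier indicates a security tool
    # is invoked by the component (here: reduce the risk score).
    if c.invokes_security_tool:
        score -= 1.0
    return score

def aggregate_score(components: list[Component]) -> float:
    # Run until every component has been evaluated, then combine the
    # per-component scores into a single aggregate.
    scores = [score_component(c) for c in components]
    return sum(scores) / len(scores)

container = [Component("openssl", True), Component("busybox", False)]
print(aggregate_score(container))  # (6.0 + 3.0) / 2 = 4.5
```

Hufsmith's cited paragraphs (par.90, par.144) describe weighting scores by context properties before combining; the unweighted mean above is the simplest stand-in for that aggregation step.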
Hufsmith fails to teach a supply chain, a scanning engine configured to parse the source code to identify a security tool usable to generate representative data as input to a scoring engine, the scanning engine also being configured to parse the source code of the software and assigned by the scoring engine based on identification of the security tool when the source code parsed and used to perform a lookup operation to compare the identifier to the library of referenced identifiers, using the identifier being configured to identify the supply chain associated with one of the one or more components and whether the security tool is being invoked by one of the one or more components and the identifier also being configured to identify whether the source code associated with the one of the one or more components of the software container has been changed, as a first subset of scores generated by one or more component scoring engines in the scoring engine; and at least the software container has been assigned a subset of second scores by one or more security framework engines in the scoring engine; and the aggregate score being determined using the score and the one or more scores one or more of the first subset of scores and the second subset of scores. 
Golan teaches, in a similar system, a supply chain (abstract), the identifier being configured to identify the supply chain associated with one of the one or more components (Golan teaches components for automatically detecting and identifying events or issues related to supply-chain-related security threats to software applications, as Golan discloses in par.41 and 83) and whether the security tool is being invoked by one of the one or more components (Golan teaches that the application security service may classify the updated executable 306 as including one or more significant security threats and/or invoke one or more security-related interventions, as Golan discloses in par.32 and 61) and the identifier also being configured to identify whether the source code associated with the one of the one or more components of the software container has been changed (Golan teaches ML risk models to analyze the one or more differences between the updated source code and the previous source code; as described throughout, the one or more ML risk models may be trained to detect patterns in the one or more differences that indicate a likelihood of a potential security threat associated with the differences in the source code, as Golan discloses in par.79 and 89). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith to include a supply chain and the identifier being configured to identify the supply chain associated with one of the one or more components, as taught and suggested by Golan, for the purpose of detecting and determining differences between versions of the application using an ML risk model; in this way, potential security threats within an application's supply chain, which may include security threats that are able to avoid sandbox detection, are more likely to be detected, thereby resulting in a computer security improvement (Golan, par.96).
Hufsmith and Golan do not explicitly teach, however, LeCour teaches, in a similar system, assigned by the scoring engine based on identification of the security tool when the source code is parsed (par.111) and used to perform a lookup operation to compare the identifier to the library of referenced identifiers (par.123). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith and Golan to include assignment by the scoring engine based on identification, as taught and suggested by LeCour, for the purpose of determining that a text string in the content matches a data format specified in entity definitions corresponding to types of personal identifiers and a rule for finding a geographic or linguistic term in the content correlated to the specific type of personal identifier (LeCour, abstract). Hufsmith, Golan, and LeCour do not explicitly teach a scanning engine configured to parse the source code to identify a security tool usable to generate representative data as input to a scoring engine, the scanning engine also being configured to parse the source code of the software, as a first subset of scores generated by one or more component scoring engines in the scoring engine; and at least the software container has been assigned a subset of second scores by one or more security framework engines in the scoring engine; and the aggregate score being determined using the score and the one or more scores one or more of the first subset of scores and the second subset of scores.
Wang teaches, in a similar system, a scanning engine configured to parse the source code to identify a security tool usable to generate representative data as input to a scoring engine, the scanning engine also being configured to parse the source code of the software (Wang teaches descriptive text to be parsed, such as (i) HTML (hypertext markup language) or XML (extensible markup language), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) source code for compilation and execution by a just-in-time compiler; and, using the entries of the chart parse, determining a second partition of the input text string having a second score using the entries of the second chart parse, designating the first partition as the selected partition in response to the first score being lower than the second score, and designating the second partition as the selected partition in response to the second score being lower than the first score, as Wang teaches in par.19 and 164), as a first subset of scores generated by one or more component scoring engines in the scoring engine (Wang teaches a set processing module configured to assign a score to each record of the consideration set of records; a results generation module is configured to respond to the user device with a subset of the consideration set of records, the subset being selected based on the assigned scores; the subset identifies application states of applications that are relevant to the search query, as Wang teaches in par.18-19); and at least the software container has been assigned a subset of second scores by one or more security framework engines in the scoring engine (Wang teaches having a second score using the entries of the second chart parse, designating the first partition as the selected partition in response to the first score being lower than the second score, and designating the second partition as the selected partition in response to the second score being lower than the first score, as Wang teaches in par.18-19); and the aggregate score being determined using the score and the one or more scores one or more of the first subset of scores and the second subset of scores (par.18-19). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith, Golan, and LeCour to include a scanning engine configured to parse the source code and the subset scores, as taught and suggested by Wang, for the purpose of establishing measurements of reliability, timeliness, and accuracy that may be stored in the search data store 124 and may be used to weight search results obtained from the data sources (Wang, par.50).

For claim 2, Hufsmith in view of Golan, LeCour and Wang further teaches wherein the software container is analyzed when each of the one or more components is evaluated by the scoring engine (the examiner notes that the results engine does analyze the container, as Hufsmith teaches in par.107 and par.109).

For claim 3, Hufsmith in view of Golan, LeCour and Wang further teaches wherein the artifact comprises the source code or comprises a portion of the source code of the software container, or both (Hufsmith par.161, lines 1-2).

For claim 5, Hufsmith in view of Golan, LeCour and Wang further teaches wherein the artifact comprises a copy of the software container (Hufsmith par.69 and par.160).
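Wang's partition-selection logic cited in the mapping above reduces to designating whichever chart-parse partition carries the lower score; a one-function sketch (the function name and toy scoring callable are hypothetical, not Wang's disclosure):

```python
def select_partition(first, second, score):
    # Designate the first partition as selected when its score is
    # lower than the second's; otherwise designate the second
    # (mirroring the lower-score-wins rule cited from par.18-19).
    return first if score(first) < score(second) else second

# Toy scoring function: fewer segments scores lower, so the
# single-segment partition is selected here.
print(select_partition(["data base"], ["data", "base"], score=len))
```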
For claim 6, Hufsmith in view of Golan, LeCour and Wang further teaches wherein the artifact comprises a copy of the software container, the copy being stored in a repository in data communication with a platform integrating the scoring engine (Hufsmith par.51 and par.69).

For claim 7, Hufsmith in view of Golan, LeCour and Wang further teaches wherein the scoring engine includes a plurality of scoring engines (Hufsmith par.28).

For claim 8, Hufsmith in view of Golan, LeCour and Wang further teaches wherein the scoring engine includes a plurality of scoring engines, at least one of the scoring engines being configured to evaluate at least one of the one or more components (Hufsmith par.28 and par.133).

For claim 9, Hufsmith in view of Golan, LeCour and Wang further teaches wherein the scoring engine includes a plurality of scoring engines (Hufsmith par.28 and par.133), at least one of the scoring engines being configured to evaluate the source code using a framework (Hufsmith par.62).

For claim 10, Hufsmith in view of Golan, LeCour and Wang further teaches wherein the scoring engine includes a plurality of scoring engines (par.28 and par.133), at least one of the scoring engines being configured to apply a framework when evaluating the source code (Hufsmith par.73). Hufsmith fails to teach the supply chain. Golan teaches a supply chain (abstract). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith, as modified by Golan, LeCour and Wang, to include the supply chain, as taught and suggested by Golan, for the purpose of detecting and determining differences between versions of the application using an ML risk model; in this way, potential security threats within an application's supply chain, which may include security threats that are able to avoid sandbox detection, are more likely to be detected, thereby resulting in a computer security improvement (Golan, par.96).
For claim 11, Hufsmith fails to teach wherein the supply chain is associated with one or more of the software container. Golan teaches wherein the supply chain is associated with one or more of the software container (abstract). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith, as modified by Golan, LeCour and Wang, to include the supply chain, as taught and suggested by Golan, for the purpose of detecting and determining differences between versions of the application using an ML risk model; in this way, potential security threats within an application's supply chain, which may include security threats that are able to avoid sandbox detection, are more likely to be detected, thereby resulting in a computer security improvement (Golan, par.96).

For claim 13, Hufsmith teaches a system (abstract), comprising: a data repository configured to store software container data retrieved in response to a query generated from a platform configured to evaluate a software container (par.59); and a logic module configured to receive, in response to a query (the examiner notes that the system queries a vulnerability repository with a request for security vulnerabilities, then submits queries to the vulnerability repository with requests for security vulnerabilities corresponding to these other materials, as Hufsmith teaches in par.188 and par.189), an artifact associated with the software container, to parse the artifact to identify source code of the software container (the examiner notes that the system parses source code of a Dockerfile or other domain-specific programming language document by which a container image is specified, as Hufsmith teaches in par.163), by invoking a scanning engine configured to generate a score associated with each of one or more components of the software container (Hufsmith teaches that the scanner results, or scanner properties, are determined for the container image by each of the various vulnerability scanning engines in view of a context of a given execution environment for the container image; the scanner properties determined by each vulnerability scanning engine are adjusted responsive to properties of the context and normalized to determine component threat scores for the container image, as Hufsmith teaches in par.32), the scanning engine also being configured the software container to identify a security threat (Hufsmith teaches scanning engine 12 may be configured to execute a process described below with reference to FIG. 5 to generate a combined threat score for a distributed application or container image, such as with the results engine 54; the scanning engine 12 is described with reference to vulnerability scanning, such as security vulnerability scans, but the techniques described may be implemented in accordance with a variety of other types of testing, such as dynamic testing, functional testing, performance testing, and the like, with different types of testing applications invoked for different container images or portions thereof in accordance with the techniques described below, as Hufsmith teaches in par.67); to invoke the scoring engine configured to run an algorithm to evaluate the source code (the examiner notes that the scanning engine may be configured to execute a process described below with reference to FIG. 4 to scan container images or distributed applications for vulnerabilities and create score records (e.g., score files or attributes of objects in dynamic memory); the scanning engine 12 is described with reference to vulnerability scanning, such as security vulnerability scans, but the techniques described may be implemented in accordance with a variety of other types of testing, such as dynamic testing, functional testing, performance testing, and the like, with different types of testing applications invoked for the container images or distributed applications, as Hufsmith teaches in par.67) to identify one or more components associated with the software container (the examiner notes that engine 54 may include one or more components for evaluating properties associated with a container image, which is the software container, as Hufsmith teaches in par.109), to implement the scoring engine also being configured to generate the score associated with each of the one or more components (the examiner notes that score records, such as score records for distributed applications, may be generated for and correspond to composition files in the repository, and score records for container images may correspond to container images in the image repository, as Hufsmith teaches in par.108), the score being generated by referencing an identifier of each of the one or more components against a library of referenced identifiers (the examiner notes that the system includes an identifier for referencing the score in the database as a reference point for each component, as Hufsmith teaches in par.102, par.127 and par.133), a value of the score being adjusted if the identifier indicates the security tool is invoked by each of the one or more components (the examiner notes that scores may be adjusted based on context properties for an execution environment, as Hufsmith teaches in par.147); to generate and assign an identifier (par.129), each of the referenced identifiers being associated with at least one security tool (par.66); to execute the algorithm until each of the one or more components of the software container has been evaluated and each of the one or more components has been assigned one or more scores (the examiner notes that each of the components is evaluated and assigned scores, as Hufsmith teaches in par.132 and par.133); and to generate an aggregate score using an aggregate scoring engine, the aggregate score being determined (the examiner notes that the scores, which can be weighted based on context properties, for the different metrics may be aggregated and reported as the combined threat score, as Hufsmith teaches in par.90 and par.144).

Hufsmith fails to teach a supply chain, a scanning engine configured to parse the source code to identify a security tool usable to generate representative data as input to a scoring engine, the scanning engine also being configured to parse the source code of the software and assigned by the scoring engine based on identification of the security tool when the source code parsed and used to perform a lookup operation to compare the identifier to the library of referenced identifiers, the identifier being configured to identify the supply chain associated with one of the one or more components and whether the security tool is being invoked by one of the one or more components and the identifier also being configured to identify whether the source code associated with the one of the one or more components of the software container has been changed, as a first subset of scores generated by one or more component scoring engines in the scoring engine; and at least the software container has been assigned a subset of second scores by one or more security framework engines in the scoring engine; and the aggregate score being determined using the score and the one or more scores one or more of the first subset of scores and the second subset of scores.
Golan teaches, in a similar system, a supply chain (abstract), the identifier being configured to identify the supply chain associated with one of the one or more components (Golan teaches components for automatically detecting and identifying events or issues related to supply-chain-related security threats to software applications, as Golan discloses in par.41 and 83) and whether the security tool is being invoked by one of the one or more components (Golan teaches that the application security service may classify the updated executable 306 as including one or more significant security threats and/or invoke one or more security-related interventions, as Golan discloses in par.32 and 61) and the identifier also being configured to identify whether the source code associated with the one of the one or more components of the software container has been changed (Golan teaches ML risk models to analyze the one or more differences between the updated source code and the previous source code; as described throughout, the one or more ML risk models may be trained to detect patterns in the one or more differences that indicate a likelihood of a potential security threat associated with the differences in the source code, as Golan discloses in par.79 and 89). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith to include a supply chain and the identifier being configured to identify the supply chain associated with one of the one or more components, as taught and suggested by Golan, for the purpose of detecting and determining differences between versions of the application using an ML risk model; in this way, potential security threats within an application's supply chain, which may include security threats that are able to avoid sandbox detection, are more likely to be detected, thereby resulting in a computer security improvement (Golan, par.96).
Hufsmith and Golan do not explicitly teach, however, LeCour teaches, in a similar system, assigned by the scoring engine based on identification of the security tool when the source code is parsed (par.111) and used to perform a lookup operation to compare the identifier to the library of referenced identifiers (par.123). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith to include assignment by the scoring engine based on identification, as taught and suggested by LeCour, for the purpose of determining that a text string in the content matches a data format specified in entity definitions corresponding to types of personal identifiers and a rule for finding a geographic or linguistic term in the content correlated to the specific type of personal identifier (LeCour, abstract). Hufsmith, Golan, and LeCour do not explicitly teach a scanning engine configured to parse the source code to identify a security tool usable to generate representative data as input to a scoring engine, the scanning engine also being configured to parse the source code of the software.
Wang teaches, in a similar system, a scanning engine configured to parse the source code to identify a security tool usable to generate representative data as input to a scoring engine, the scanning engine also being configured to parse the source code of the software (Wang teaches descriptive text to be parsed, such as (i) HTML (hypertext markup language) or XML (extensible markup language), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) source code for compilation and execution by a just-in-time compiler; and, using the entries of the chart parse, determining a second partition of the input text string having a second score using the entries of the second chart parse, designating the first partition as the selected partition in response to the first score being lower than the second score, and designating the second partition as the selected partition in response to the second score being lower than the first score, as Wang teaches in par.19 and 164), as a first subset of scores generated by one or more component scoring engines in the scoring engine (Wang teaches a set processing module configured to assign a score to each record of the consideration set of records; a results generation module is configured to respond to the user device with a subset of the consideration set of records, the subset being selected based on the assigned scores; the subset identifies application states of applications that are relevant to the search query, as Wang teaches in par.18-19); and at least the software container has been assigned a subset of second scores by one or more security framework engines in the scoring engine (Wang teaches having a second score using the entries of the second chart parse, designating the first partition as the selected partition in response to the first score being lower than the second score, and designating the second partition as the selected partition in response to the second score being lower than the first score, as Wang teaches in par.18-19); and the aggregate score being determined using the score and the one or more scores one or more of the first subset of scores and the second subset of scores (par.18-19). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith, Golan, and LeCour to include a scanning engine configured to parse the source code and the subset scores, as taught and suggested by Wang, for the purpose of establishing measurements of reliability, timeliness, and accuracy that may be stored in the search data store 124 and may be used to weight search results obtained from the data sources (Wang, par.50).

For claim 14, Hufsmith in view of Golan, LeCour and Wang further teaches the system of claim 13 above. Hufsmith fails to teach wherein each of the one or more components has another supply chain that is included in the supply chain. Golan teaches wherein each of the one or more components has another supply chain that is included in the supply chain (abstract).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith in view of Golan, LeCour and Wang to include a supply chain, as taught and suggested by Golan, for the purpose of detecting and determining differences between versions of the application using an ML risk model; in this way, potential security threats within an application's supply chain, which may include security threats that are able to avoid sandbox detection, are more likely to be detected, thereby resulting in a computer security improvement (Golan, par. 96). For claim 15, Hufsmith in view of Golan, LeCour and Wang discloses the system of claim 13 above, and further teaches wherein the scoring engine is configured to evaluate the source code using a framework (Hufsmith, par. 62). For claim 16, Hufsmith in view of Golan, LeCour and Wang discloses the system of claim 13 above, and further teaches wherein the scoring engine is configured to evaluate the software container using a framework (Hufsmith, par. 62). For claim 17, Hufsmith in view of Golan, LeCour and Wang discloses the system of claim 13 above, and further teaches wherein the score is generated by the scoring engine by executing the algorithm against the data associated with at least one of the one or more components (Hufsmith, par. 108, par. 109 and par. 133). For claim 18, Hufsmith in view of Golan, LeCour and Wang discloses the system of claim 13 above, and further teaches wherein the score is generated by the scoring engine by executing the algorithm against the data associated with the software container using a framework (Hufsmith, par. 62 and par. 108). 
For claim 19, Hufsmith in view of Golan, LeCour and Wang discloses the system of claim 13 above, and further teaches wherein the score is generated by the scoring engine by executing the algorithm against the data associated with the software container and the source code (Hufsmith, par. 163). Hufsmith fails to teach the supply chain using a framework. Golan teaches the supply chain using a framework (abstract). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith to include a supply chain, as taught and suggested by Golan, for the purpose of detecting and determining differences between versions of the application using an ML risk model; in this way, potential security threats within an application's supply chain, which may include security threats that are able to avoid sandbox detection, are more likely to be detected, thereby resulting in a computer security improvement (Golan, par. 96). 
For claim 20, Hufsmith teaches a non-transitory computer readable medium having one or more computer program instructions configured to perform a method (par. 214), the method comprising: receiving, in response to a query (the examiner notes querying a vulnerability repository with a request for security vulnerabilities, then submitting queries to the vulnerability repository with requests for security vulnerabilities corresponding to these other materials, as Hufsmith teaches in par. 188 and par. 189), an artifact associated with a software container (the examiner notes that development teams are constantly updating/creating microservices in containers and deploying them to production, as Hufsmith teaches in par. 188 and par. 158); parsing the artifact to identify source code of the software container (the examiner notes parsing source code of a Dockerfile or other domain-specific programming language document by which a container image is specified, as Hufsmith teaches in par. 163); by invoking a scanning engine configured to generate a score associated with each of one or more components of the software container (Hufsmith teaches that the scanner results, or scanner properties, determined for the container image by each of the various vulnerability scanning engines are considered in view of a context of a given execution environment for the container image; the scanner properties determined by each vulnerability scanning engine are adjusted responsive to properties of the context and normalized to determine component threat scores for the container image, as Hufsmith teaches in par. 32), the scanning engine also being configured to scan the software container to identify a security threat (Hufsmith teaches the scanning engine 12 may be configured to execute a process described below with reference to FIG. 5 to generate a combined threat score for a distributed application or container image, such as with the results engine 54. 
The scanning engine 12 is described with reference to vulnerability scanning, such as security vulnerability scans, but the techniques described may be implemented with a variety of other types of testing, such as dynamic testing, functional testing, performance testing, and the like, with different types of testing applications invoked for different container images or portions thereof in accordance with the techniques described below, as Hufsmith teaches in par. 67); invoking the scoring engine configured to run an algorithm (the examiner notes that the scanning engine may be configured to execute a process described below with reference to FIG. 4 to scan container images or distributed applications for vulnerabilities and create score records (e.g., score files or attributes of objects in dynamic memory); the scanning engine 12 is described with reference to vulnerability scanning, such as security vulnerability scans, but the techniques described may be implemented with a variety of other types of testing, such as dynamic testing, functional testing, performance testing, and the like, with different types of testing applications invoked for different container images or distributed applications, as Hufsmith teaches in par. 67) to evaluate the source code (part of the scanner for evaluation includes source code, as Hufsmith teaches in par. 73) to identify the one or more components associated with the software container (the examiner notes that engine 54 may include one or more components for evaluating properties associated with a container image of the software container, as Hufsmith teaches in par. 109), the scoring engine also being configured to generate the score associated with each of the one or more components (the examiner notes that score records, such as score records for distributed applications, may be generated for and correspond to composition files in the repository, and score records for 
container images may correspond to container images in the image repository, as Hufsmith teaches in par. 108), the score being generated by referencing an identifier of each of the one or more components against a library of referenced identifiers (the examiner notes that the system includes an identifier for referencing the score in the database as a reference point for each component, as Hufsmith teaches in par. 102, par. 127 and par. 133), a value of the score being adjusted if the identifier indicates the security tool is invoked by each of the one or more components (the examiner notes that scores may be adjusted based on context properties for an execution environment, as Hufsmith teaches in par. 147); generating and assigning an identifier (par. 129), each of the referenced identifiers being associated with at least one security tool (par. 66); executing the algorithm until each of the one or more components of the software container has been evaluated and each of the one or more components has been assigned one or more scores (the examiner notes that each of the components is evaluated and assigned scores, as Hufsmith teaches in par. 132 and par. 133); and generating an aggregate score using an aggregate scoring engine, the aggregate score being determined (the examiner notes that the scores, which can be weighted based on context properties, for the different metrics may be aggregated and reported as the combined threat score, as Hufsmith teaches in par. 90 and par. 144). 
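The scoring flow the examiner maps onto Hufsmith (reference an identifier against a library, adjust the score when a security tool is invoked, then weight and aggregate by execution-environment context) can be sketched as follows. All names, values, and the adjustment rule are hypothetical illustrations, not the actual claimed or cited implementation:

```python
# Hypothetical sketch: identifier lookup, score adjustment, weighted aggregate.

# Library of referenced identifiers -> (base score, invokes a security tool?)
REFERENCE_LIBRARY = {
    "openssl": (8.0, True),
    "busybox": (5.0, False),
    "left-pad": (2.0, False),
}

SECURITY_TOOL_BONUS = 1.5  # assumed adjustment when a security tool is invoked


def score_component(identifier: str) -> float:
    """Reference the component identifier against the library and adjust."""
    base, invokes_tool = REFERENCE_LIBRARY.get(identifier, (0.0, False))
    return base + (SECURITY_TOOL_BONUS if invokes_tool else 0.0)


def aggregate_score(components: list[str], weights: dict[str, float]) -> float:
    """Weight each component score by context and sum into a combined score."""
    return sum(weights.get(c, 1.0) * score_component(c) for c in components)


components = ["openssl", "busybox", "left-pad"]
weights = {"openssl": 2.0}  # context property, e.g. internet-facing component
print(aggregate_score(components, weights))  # 2*9.5 + 5.0 + 2.0 = 26.0
```

The weighted sum stands in for the "combined threat score" of Hufsmith par. 90 and 144; any real system would normalize scanner output before aggregation.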
Hufsmith fails to teach a supply chain; a scanning engine configured to parse the source code to identify a security tool usable to generate representative data as input to a scoring engine, the scanning engine also being configured to parse the source code of the software; assigned by the scoring engine based on identification of the security tool when the source code is parsed and used to perform a lookup operation to compare the identifier to the library of referenced identifiers, the identifier being configured to identify the supply chain associated with one of the one or more components and whether the security tool is being invoked by one of the one or more components, and the identifier also being configured to identify whether the source code associated with the one of the one or more components of the software container has been changed; a first subset of scores generated by one or more component scoring engines in the scoring engine; at least the software container having been assigned a subset of second scores by one or more security framework engines in the scoring engine; and the aggregate score being determined using the score and one or more of the first subset of scores and the second subset of scores. 
Golan teaches, in a similar system, a supply chain (abstract), the identifier being configured to identify the supply chain associated with one of the one or more components (Golan teaches components for automatically detecting and identifying events or issues related to supply-chain related security threats to software applications, as Golan discloses in par. 41 and 83) and whether the security tool is being invoked by one of the one or more components (Golan teaches that the application security service may classify the updated executable 306 as including one or more significant security threats and/or invoke one or more security-related interventions, as Golan discloses in par. 32 and 61) and the identifier also being configured to identify whether the source code associated with the one of the one or more components of the software container has been changed (Golan teaches ML risk models to analyze the one or more differences between the updated source code and the previous source code; as described throughout, the one or more ML risk models may be trained to detect patterns in the one or more differences that indicate a likelihood of a potential security threat associated with the differences in the source code, as Golan discloses in par. 79 and 89). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith to include a supply chain and the identifier being configured to identify the supply chain associated with one of the one or more components, as taught and suggested by Golan, for the purpose of detecting and determining differences between versions of the application using an ML risk model; in this way, potential security threats within an application's supply chain, which may include security threats that are able to avoid sandbox detection, are more likely to be detected, thereby resulting in a computer security improvement (Golan, par. 96). 
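The change-detection step Golan is cited for (comparing updated source code against the previous version and extracting the differences for downstream risk analysis) can be illustrated with standard-library tools. This is a hedged sketch of the general technique, not Golan's ML pipeline; the function names and sample strings are hypothetical:

```python
# Hypothetical sketch: detect whether source changed and extract the diff
# lines that a downstream risk model could analyze.
import difflib
import hashlib


def source_changed(previous: str, updated: str) -> bool:
    """Identifier-style check: compare content digests of the two versions."""
    return (hashlib.sha256(previous.encode()).hexdigest()
            != hashlib.sha256(updated.encode()).hexdigest())


def diff_lines(previous: str, updated: str) -> list[str]:
    """Return only added/removed lines, skipping the unified-diff headers."""
    diff = difflib.unified_diff(
        previous.splitlines(), updated.splitlines(), lineterm=""
    )
    return [l for l in diff if l[:1] in "+-" and l[:3] not in ("+++", "---")]


prev = "import os\nrun()\n"
new = "import os\nimport socket\nrun()\n"
print(source_changed(prev, new))  # True
print(diff_lines(prev, new))      # ['+import socket']
```

A newly introduced network import, as in the example, is exactly the kind of diff pattern a trained risk model might flag as a potential supply-chain threat.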
Hufsmith and Golan do not explicitly teach, however, LeCour teaches, in a similar system, assigned by the scoring engine based on identification of the security tool when the source code is parsed (par. 111) and used to perform a lookup operation to compare the identifier to the library of referenced identifiers (par. 123). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith and Golan to include assignment by the scoring engine based on identification of the security tool, as taught and suggested by LeCour, for the purpose of determining that a text string in the content matches a data format specified in entity definitions corresponding to types of personal identifiers, and a rule for finding a geographic or linguistic term in the content correlated to the specific type of personal identifier (LeCour, abstract). Hufsmith, Golan, and LeCour do not explicitly teach a scanning engine configured to parse the source code to identify a security tool usable to generate representative data as input to a scoring engine, the scanning engine also being configured to parse the source code of the software. 
Wang teaches, in a similar system, a scanning engine configured to parse the source code to identify a security tool usable to generate representative data as input to a scoring engine, the scanning engine also being configured to parse the source code of the software (Wang teaches descriptive text to be parsed, such as (i) HTML (hypertext markup language) or XML (extensible markup language), (ii) assembly code, (iii) object code generated from source code by a compiler, (iv) source code for execution by an interpreter, (v) source code for compilation and execution by a just-in-time compiler; and, using the entries of the chart parse, determining a second partition of the input text string having a second score using the entries of the second chart parse, designating the first partition as the selected partition in response to the first score being lower than the second score, and designating the second partition as the selected partition in response to the second score being lower than the first score; Wang, par. 19 and 164), as a first subset of scores generated by one or more component scoring engines in the scoring engine (Wang teaches a set processing module configured to assign a score to each record of the consideration set of records, and a results generation module configured to respond to the user device with a subset of the consideration set of records, the subset being selected based on the assigned scores. 
The subset identifies application states of applications that are relevant to the search query; Wang, par. 18-19); and at least the software container has been assigned a subset of second scores by one or more security framework engines in the scoring engine (Wang teaches having a second score using the entries of the second chart parse, designating the first partition as the selected partition in response to the first score being lower than the second score, and designating the second partition as the selected partition in response to the second score being lower than the first score; Wang, par. 18-19); and the aggregate score being determined using the score and one or more of the first subset of scores and the second subset of scores (par. 18-19). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith, Golan, and LeCour to include a scanning engine configured to parse the source code and subset scores, as taught and suggested by Wang, for the purpose of establishing measurements of reliability, timeliness, and accuracy that may be stored in the search data store 124 and used to weight search results obtained from the data sources (Wang, par. 50). Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hufsmith et al (2020/0097662) in view of Golan et al (2022/0405397), LeCour (2019/0180049) and Wang et al (2017/0068655) as applied to the claims above, and further in view of Eriksson et al (WO 2023/227233). 
Hufsmith, as modified by Golan, LeCour and Wang, teaches all the limitations as previously set forth, except for: generating an attestation tag for the software container based on one or more of the first subset of scores, the second subset of scores, and the aggregate score, the attestation tag including data representing an attestation that the supply chain has a level of security constituting a security state; and associating the attestation tag with the software container. Eriksson teaches, in a similar system, generating an attestation tag for the software container based on one or more of the first subset of scores, the second subset of scores, and the aggregate score (abstract; page 3, lines 3-8), the attestation tag including data representing an attestation that the supply chain has a level of security constituting a security state, and associating the attestation tag with the software container (Eriksson, page 12, lines 5-16). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith, Golan, LeCour and Wang to include an attestation tag, as taught and suggested by Eriksson, for the purpose of monitoring for one or more events or patterns indicating that a container has been instantiated in the runtime environment and, in response to detecting the one or more events or patterns, obtaining the identifier of the container that has been instantiated (Eriksson, page 3, lines 10-14). Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hufsmith et al (2020/0097662) in view of Golan et al (2022/0405397), LeCour (2019/0180049) and Wang et al (2017/0068655) as applied to the claims above, and further in view of Mai et al (2024/0054232). 
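The attestation limitation Eriksson is cited for (deriving a tag from the score subsets and the aggregate score, then associating it with the container) can be sketched as below. The field names, the threshold, and the digest binding are all assumptions for illustration, not Eriksson's or the application's actual format:

```python
# Hypothetical sketch: build an attestation tag from the score evidence and
# associate it with a container image identifier.
import hashlib
import json


def make_attestation_tag(image_id: str, first_scores: list[float],
                         second_scores: list[float], aggregate: float) -> dict:
    """Attest that the supply chain meets a security state (assumed threshold)."""
    security_state = "trusted" if aggregate >= 20.0 else "untrusted"
    payload = {
        "image": image_id,            # association with the software container
        "first_subset": first_scores,  # component scoring engine outputs
        "second_subset": second_scores,  # security framework engine outputs
        "aggregate": aggregate,
        "security_state": security_state,
    }
    # Digest binds the attestation to this exact score evidence.
    payload["digest"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return payload


tag = make_attestation_tag("sha256:abc123", [9.5, 5.0], [3.0], 26.0)
print(tag["security_state"])  # trusted
```

In a production setting the tag would typically be signed and pushed alongside the image (e.g. as an OCI artifact) rather than merely hashed, but the digest suffices to show the association step.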
Hufsmith, as modified by Golan, LeCour and Wang, teaches all the limitations as previously set forth, except that at least one security framework engine of the one or more security framework engines includes a security framework such as Supply-chain Levels for Software Artifacts, or "SLSA," that is configured to form the second subset of scores. Mai teaches, in a similar system, at least one security framework engine of the one or more security framework engines including a security framework such as Supply-chain Levels for Software Artifacts, or "SLSA," that is configured to form the second subset of scores (Mai, par. 27). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Hufsmith, Golan, LeCour and Wang to include Supply-chain Levels for Software Artifacts, or "SLSA," as taught and suggested by Mai, for the purpose of proposing various solutions for each threat; SLSA defines different levels of maturity of the source and build integrity of the software system (Mai, par. 1). Response to Amendments/Arguments Applicant's arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The applicant's arguments regarding new limitations in claims 1, 13 and 20 have been considered but are moot, because the examiner applied new art, Wang et al (2017/0068655), that covers the newly claimed limitations. Regarding the dependent claim arguments, said arguments are moot because the applied references are not considered to have the alleged differences, and therefore are considered to properly show that for which they were cited. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to AYUB A MAYE, whose telephone number is (571) 270-5037. The examiner can normally be reached Monday-Friday, 9AM-5PM. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHEWAYE GELAGAY can be reached at 571-272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AYUB A MAYE/Examiner, Art Unit 2436 /SHEWAYE GELAGAY/Supervisory Patent Examiner, Art Unit 2436

Prosecution Timeline

Jun 11, 2024
Application Filed
Sep 07, 2024
Non-Final Rejection — §103
Dec 11, 2024
Response Filed
Dec 28, 2024
Final Rejection — §103
Mar 03, 2025
Response after Non-Final Action
Apr 03, 2025
Request for Continued Examination
Apr 20, 2025
Response after Non-Final Action
Apr 29, 2025
Non-Final Rejection — §103
Aug 05, 2025
Response Filed
Sep 30, 2025
Final Rejection — §103
Oct 31, 2025
Response after Non-Final Action
Feb 02, 2026
Request for Continued Examination
Feb 08, 2026
Response after Non-Final Action
Mar 16, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574211
PERSONAL PRIVATE KEY ENCRYPTION DEVICE
2y 5m to grant Granted Mar 10, 2026
Patent 12574247
DEVICE FOR COMPUTING SOLUTIONS OF LINEAR SYSTEMS AND ITS APPLICATION TO DIGITAL SIGNATURE GENERATIONS
2y 5m to grant Granted Mar 10, 2026
Patent 12547740
INFORMATION PROCESSING DEVICES AND INFORMATION PROCESSING METHODS
2y 5m to grant Granted Feb 10, 2026
Patent 12526274
Geolocated Portable Authenticator for Transparent and Enhanced Information-Security Authentication of Users
2y 5m to grant Granted Jan 13, 2026
Patent 12373573
Vulnerability Processing Method, Apparatus and Device, and Computer-readable Storage Medium
2y 5m to grant Granted Jul 29, 2025
Based on the examiner's 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+41.6%)
5y 2m
Median Time to Grant
High
PTA Risk
Based on 652 resolved cases by this examiner. Grant probability derived from career allow rate.
