Prosecution Insights
Last updated: April 19, 2026
Application No. 18/343,408

DISCOVERING, ASSESSING, AND REMEDIATING CLOUD NATIVE APPLICATION RISKS DUE TO SECURITY MISCONFIGURATIONS

Final Rejection §103
Filed: Jun 28, 2023
Examiner: MAYE, AYUB A
Art Unit: 2436
Tech Center: 2400 (Computer Networks)
Assignee: Accenture Global Solutions Limited
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
Predicted OA Rounds: 3-4
Time to Grant: 5y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% (377 granted / 652 resolved), at TC average
Interview Lift: +41.6% across resolved cases with interview (strong)
Avg Prosecution: 5y 2m typical timeline; 32 applications currently pending
Total Applications: 684 across all art units (career history)
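The headline figures above can be reproduced from the raw counts. A minimal sketch, assuming "Career Allow Rate" is simply granted over resolved and that "Interview Lift" is the with-interview rate minus the without-interview rate (both are assumptions about the tool's method, not documented formulas):

```python
# Counts from the Examiner Intelligence card above.
granted = 377
resolved = 652

allow_rate = round(100 * granted / resolved, 1)
print(allow_rate)  # 57.8, shown above rounded to 58%

# The +41.6% lift together with the 99% with-interview figure implies a
# without-interview allow rate near 99.0 - 41.6 (assuming lift = with - without).
implied_without = round(99.0 - 41.6, 1)
print(implied_without)  # 57.4
```

The implied without-interview rate (57.4%) sits just below the career average, consistent with interviews driving most of the difference.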

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 652 resolved cases.
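The statute deltas above are internally consistent with a single baseline. A quick check, assuming the dashboard computes each delta as the examiner's statute-specific rate minus the Tech Center average (an assumption about the tool's method):

```python
# Rows copied from the table above: statute -> (examiner allow rate %, delta vs TC avg).
rows = {
    "§101": (3.0, -37.0),
    "§103": (57.5, 17.5),
    "§102": (18.6, -21.4),
    "§112": (13.2, -26.8),
}

# If delta = rate - tc_avg, then tc_avg = rate - delta for every row.
implied_tc_avg = {statute: round(rate - delta, 1) for statute, (rate, delta) in rows.items()}
print(implied_tc_avg)  # {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}
```

Every row implies the same 40.0% baseline, suggesting the deltas are measured against one overall Tech Center average rather than per-statute averages.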

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-11, 13-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Stojanovic et al. (2018/0052870) in view of Franchitti (2019/0171438) and Giles (2023/0141524).
For claim 1, Stojanovic teaches a method (abstract), comprising:

receiving, by a device, cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, and a data classification ontology (Stojanovic teaches a cloud service provided within a cloud-based computing environment that can serve as a single point of control for the design, simulation, deployment, development, operation, and analysis of data for use with software applications, including enabling data input from one or more sources of data (for example, in an embodiment, an input HUB); providing a graphical user interface that enables a user to specify an application for the data, where the system can perform an ontology analysis of a schema definition to determine the types of data, and the datasets or entities, associated with that schema; and generating, or updating, a model from a reference schema that includes an ontology defined based on relationships between datasets or entities and their attributes; abstract, par. 7 and 79);

generating, by the device, a knowledge model based on the knowledge model schema and the data classification ontology (Stojanovic teaches determining the types of data, and the datasets or entities, associated with that schema, and generating, or updating, a model from a reference schema that includes an ontology defined based on relationships between datasets or entities and their attributes; a reference HUB including one or more schemas can be used to analyze data flows and further classify or make recommendations such as transformations, enrichments, filtering, or cross-entity data fusion of input data; par. 72);

performing, by the device, a dynamic flow analysis of the cloud application data and the data source identifiers to generate a data flow graph that depicts a flow of data to services from the data source (Stojanovic teaches actions that can be performed by dataflow applications (e.g., pipelines, Lambda applications) on a dataset or entity within a HUB, for projection onto another entity; abstract, par. 7 and 93);

processing, by the device, the data flow graph, with the knowledge model, to determine attributes in the data flow graph (Stojanovic teaches that data flows can be decomposed into a model describing transformations of data, predicates, and business rules applied to the data, and attributes used in the data flows; par. 71 and 72);

identifying, by the device, one or more data sources that include the attributes (Stojanovic teaches that the system can provide support for auto-mapping of complex data structures, datasets, or entities between one or more sources or targets of data, referred to as HUBs; the auto-mapping can be driven by metadata, schema, and statistical profiling of a dataset, and used to map a source dataset or entity associated with an input HUB to a target dataset or entity, or vice versa, to produce output data prepared in a format or organization (projection) for use with one or more output HUBs; par. 69);

identifying, by the device, assets based on the data flow graph and the one or more data sources (Stojanovic teaches that data AI system 150 can provide one or more services for processing and transforming data such as business data, consumer data, and enterprise data, including the use of machine learning processing, for use with a variety of computational assets such as databases, cloud data warehouses, storage systems, or storage services; par. 127);

processing, by the device, the one or more data sources and the assets, with a machine learning model (Stojanovic, par. 127 and 128, as above).
Stojanovic teaches cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, and a data classification ontology, but does not explicitly teach data residency constraints. Stojanovic therefore fails to explicitly teach: receiving, by a device, cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, data residency constraints, and a data classification ontology; generating, by the device, a knowledge model based on the knowledge model schema, the data residency constraints, and the data classification ontology; determining sensitive attributes; identifying, by the device, one or more sensitive data sources that include the sensitive attributes; identifying, by the device, sensitive assets; processing, by the device, the one or more sensitive data sources and the sensitive assets with a machine learning model, to determine methods for identifying misconfigurations in the sensitive data sources and the sensitive assets; utilizing, by the device, the methods to identify misconfigurations in the one or more sensitive data sources and the sensitive assets and severities of the misconfigurations; generating, by the device, remediation actions to correct the misconfigurations based on the severities of the misconfigurations; and modifying, by the device, the cloud application based on the remediation actions to generate a compliant cloud application.

Franchitti teaches, in a similar system, receiving, by a device, cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, data residency constraints, and a data classification ontology (Franchitti teaches the support of experts to seed the creation and management of the descriptive ontologies and taxonomies used to capture and qualify the business problems and best-practice reusable business solutions stored in its knowledge base; when acquiring knowledge from experts through its divisions and practices, Archemy™ librarians use a tacit knowledge acquisition process that closely follows the protocol-based and knowledge modeling techniques described above, using hierarchy-generation techniques and the Protégé and ArchNav™ tools to model captured knowledge as ontologies and related taxonomies; the focus of knowledge capture is typically restricted to confined areas of larger business domains (e.g., subsets of contract law within the legal business field, or symptom sets that pertain to specific diseases in healthcare), the premise being to create assemblies of business domain ontologies and related taxonomies focused on the business "dialects" used in specific business areas where everyone "speaks the same language"; par. 130 and 369), and generating, by the device, a knowledge model based on the knowledge model schema, the data residency constraints, and the data classification ontology (Franchitti teaches a system that includes a networked arrangement of compute nodes and is configured to analyze any of a plurality of subsets of elements, components, terms, values, strings, words, or criteria of a "taxonomy" (also referred to as a repository) stored within the system; a taxonomy can include one or more fields, some or each of which can be associated with a "facet" defining one of: an idea, a concept, an artifact, a component, a procedure, or a skill; facets can include business capabilities, customer profiles, operational location characteristics, application infrastructure, data/information infrastructure, and technology infrastructure, and can optionally be associated with an architecture domain (e.g., a business domain, an application domain, a data domain, or a technology domain); par. 130 and 369).

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to include data residency constraints, as taught and suggested by Franchitti, for the purpose of providing access to reusable components of business solutions and/or metrics, and to catalogued definitions of business problems that, in some cases, are linked to solutions (e.g., the reusable software components) in the repository, for example to assist businesses in identifying, and/or reacting faster to, real-world problems and system capability needs (Giles, par. 15).

Stojanovic, as modified by Franchitti, does not explicitly teach: determining sensitive attributes; identifying, by the device, one or more sensitive data sources that include the sensitive attributes; identifying, by the device, sensitive assets; processing, by the device, the one or more sensitive data sources and the sensitive assets with a machine learning model, to determine methods for identifying misconfigurations in the sensitive data sources and the sensitive assets; utilizing, by the device, the methods to identify misconfigurations in the one or more sensitive data sources and the sensitive assets and severities of the misconfigurations; generating, by the device, remediation actions to correct the misconfigurations based on the severities of the misconfigurations; and modifying, by the device, the cloud application based on the remediation actions to generate a compliant cloud application.
Giles teaches, in a similar system, determining sensitive attributes (Giles teaches a datalogging event in which the sensitive information should be masked in the datalog; par. 37); identifying, by the device, one or more sensitive data sources that include the sensitive attributes (Giles teaches wherein the at least one first compliance error comprises a datalog generated by the first software instance, the datalog comprising sensitive information; par. 74 and 65); identifying, by the device, sensitive assets (par. 65); processing, by the device, the one or more sensitive data sources and the sensitive assets with a machine learning model, to determine methods for identifying misconfigurations in the sensitive data sources and the sensitive assets (Giles teaches that the trigger monitoring system may detect this compliance error in response to scanning the data log files generated by the software instance and comparing the contents of the data log to terms or entries categorized as sensitive in the compliance repository; par. 65); utilizing, by the device, the methods to identify misconfigurations in the one or more sensitive data sources and the sensitive assets and severities of the misconfigurations (Giles teaches that the trigger data may include the compliance error of "exposing customer social security numbers" when nine-digit numbers in the form XXX-XX-XXXX are detected within the data log; par. 65); generating, by the device, remediation actions to correct the misconfigurations based on the severities of the misconfigurations (Giles teaches that compliance remediation system 110 may utilize a rules-based platform (e.g., algorithmic rules stored on compliance repository 140) and/or a trained machine learning model to remediate one or more software compliance errors associated with a software configuration file; par. 25 and 65); and modifying, by the device, the cloud application based on the remediation actions to generate a compliant cloud application (Giles teaches that trigger monitoring system 120 may transmit a notification to compliance remediation system 110 to update software dependency A to a compliant version, for example by automatically updating software dependency A to a new software version; par. 65 and 66).

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic, as modified by Franchitti, to include misconfigurations in the sensitive data sources and the sensitive assets, and remediation actions, as taught and suggested by Giles, for the purpose of avoiding the use of excess computing processing power and network traffic volume by limiting the generation of new configuration policies to those cases when a pre-existing software configuration policy cannot remediate the software compliance error (Giles, par. 15).

For claims 2 and 16, Stojanovic, as modified by Franchitti and Giles, fails to teach causing the compliant cloud application to be deployed in a cloud computing environment. Giles further teaches causing the compliant cloud application to be deployed in a cloud computing environment (Giles, abstract and par. 65).
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to include a compliant cloud application, as taught and suggested by Giles, for the purpose of avoiding the use of excess computing processing power and network traffic volume by limiting the generation of new configuration policies to those cases when a pre-existing software configuration policy cannot remediate the software compliance error (Giles, par. 15).

For claim 3, Stojanovic, as modified by Franchitti and Giles, further teaches wherein the cloud application data includes data identifying an architecture flow of the cloud application, a process flow of the cloud application, and a control flow of the cloud application (Stojanovic, par. 7 and 79).

For claim 4, Stojanovic, as modified by Franchitti and Giles, further teaches wherein the data source identifiers include details of data stored in repositories of the cloud application (Stojanovic, par. 102).

For claim 5, Stojanovic, as modified by Franchitti and Giles, fails to teach wherein the data residency constraints include categories of data identified based on data characteristics, industry domain, and security constraints to be utilized for identifying information as confidential or private in a data source. Franchitti further teaches this limitation (par. 130). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to include data residency constraints, as taught and suggested by Franchitti, for the purpose of providing access to reusable components of business solutions and/or metrics, and to catalogued definitions of business problems that, in some cases, are linked to solutions (e.g., the reusable software components) in the repository, for example to assist businesses in identifying, and/or reacting faster to, real-world problems and system capability needs (Giles, par. 15).

For claim 6, Stojanovic, as modified by Franchitti and Giles, fails to teach wherein the data classification ontology includes an ontology of associated confidential or private data fields for security practices. Giles further teaches this limitation (Giles, abstract and par. 65). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to include private data, as taught and suggested by Giles, for the purpose of avoiding the use of excess computing processing power and network traffic volume by limiting the generation of new configuration policies to those cases when a pre-existing software configuration policy cannot remediate the software compliance error (Giles, par. 15).

For claims 7 and 17, Stojanovic, as modified by Franchitti and Giles, further teaches wherein performing the dynamic flow analysis of the cloud application data and the data source identifiers to generate the data flow graph (Stojanovic, par. 7) comprises: performing a dynamic analysis of a flow of data through application programming interfaces, database connection points, and calls to other services by the cloud application to generate the data flow graph (Stojanovic, par. 73).
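The detection step the examiner attributes to Giles (par. 65), scanning datalog contents for nine-digit numbers in the form XXX-XX-XXXX and flagging them as a compliance error, can be sketched as follows. This is an illustrative reconstruction of that described step; the function name, log format, and list-of-tuples return shape are hypothetical, not from Giles.

```python
import re

# Nine-digit numbers in the form XXX-XX-XXXX, as described in Giles par. 65.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan_datalog(lines):
    """Return a (line number, compliance error) pair for each log line
    exposing an SSN-shaped value."""
    errors = []
    for lineno, line in enumerate(lines, start=1):
        if SSN_PATTERN.search(line):
            errors.append((lineno, "exposing customer social security numbers"))
    return errors

log = [
    "2023-06-28 INFO user login ok",
    "2023-06-28 DEBUG customer ssn=123-45-6789",
]
print(scan_datalog(log))  # [(2, 'exposing customer social security numbers')]
```

In Giles's framing, such trigger data would then be compared against entries categorized as sensitive in a compliance repository before remediation is invoked.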
For claim 8, Stojanovic teaches a device (abstract), comprising: one or more memories (par. 154); and one or more processors, coupled to the one or more memories (par. 154), configured to: receive cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, and a data classification ontology (Stojanovic, abstract, par. 7 and 79, as discussed for claim 1); generate a knowledge model based on the knowledge model schema and the data classification ontology (par. 72); perform a dynamic flow analysis of the cloud application data and the data source identifiers to generate a data flow graph that depicts a flow of data to services from the data source (abstract, par. 7 and 93); process the data flow graph, with the knowledge model, to determine attributes in the data flow graph (par. 71 and 72); identify one or more data sources that include the attributes (par. 69); identify sensitive assets based on the data flow graph and the one or more sensitive data sources (par. 127); process the one or more data sources and the assets, with a machine learning model (par. 127 and 128); and cause the compliant cloud application to be deployed in a cloud computing environment (abstract, par. 7 and 79).
Stojanovic teaches cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, and a data classification ontology, but does not explicitly teach data residency constraints. Stojanovic therefore fails to explicitly teach: receive cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, data residency constraints, and a data classification ontology; generate a knowledge model based on the knowledge model schema, the data residency constraints, and the data classification ontology; determine sensitive attributes; identify one or more sensitive data sources that include the sensitive attributes; identify sensitive assets; process the one or more sensitive data sources and the sensitive assets with a machine learning model, to determine methods for identifying misconfigurations in the sensitive data sources and the sensitive assets; utilize the methods to identify misconfigurations in the one or more sensitive data sources and the sensitive assets and severities of the misconfigurations; generate remediation actions to correct the misconfigurations based on the severities of the misconfigurations; and modify the cloud application based on the remediation actions to generate a compliant cloud application.

Franchitti teaches, in a similar system, receiving cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, data residency constraints, and a data classification ontology, and generating a knowledge model based on the knowledge model schema, the data residency constraints, and the data classification ontology (Franchitti, par. 130 and 369, as discussed for claim 1).

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to include data residency constraints, as taught and suggested by Franchitti, for the purpose of providing access to reusable components of business solutions and/or metrics, and to catalogued definitions of business problems that, in some cases, are linked to solutions (e.g., the reusable software components) in the repository, for example to assist businesses in identifying, and/or reacting faster to, real-world problems and system capability needs (Giles, par. 15).

Stojanovic, as modified by Franchitti, does not explicitly teach: determine sensitive attributes; identify one or more sensitive data sources that include the sensitive attributes; identify sensitive assets; process the one or more sensitive data sources and the sensitive assets with a machine learning model, to determine methods for identifying misconfigurations in the sensitive data sources and the sensitive assets; utilize the methods to identify misconfigurations in the one or more sensitive data sources and the sensitive assets and severities of the misconfigurations; generate remediation actions to correct the misconfigurations based on the severities of the misconfigurations; and modify the cloud application based on the remediation actions to generate a compliant cloud application.
Giles teaches, in a similar system, determine sensitive attributes (par. 37); identify one or more sensitive data sources that include the sensitive attributes (par. 74 and 65); identify sensitive assets (par. 65); process the one or more sensitive data sources and the sensitive assets with a machine learning model, to determine methods for identifying misconfigurations in the sensitive data sources and the sensitive assets (par. 65); utilize the methods to identify misconfigurations in the one or more sensitive data sources and the sensitive assets and severities of the misconfigurations (par. 65); generate remediation actions to correct the misconfigurations based on the severities of the misconfigurations (par. 25 and 65); and modify the cloud application based on the remediation actions to generate a compliant cloud application, and cause the compliant cloud application to be deployed in a cloud computing environment (par. 65 and 66), each as discussed for claim 1.

It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic, as modified by Franchitti, to include misconfigurations in the sensitive data sources and the sensitive assets, and remediation actions, as taught and suggested by Giles, for the purpose of avoiding the use of excess computing processing power and network traffic volume by limiting the generation of new configuration policies to those cases when a pre-existing software configuration policy cannot remediate the software compliance error (Giles, par. 15).
For claim 9, Stojanovic, as modified by Franchitti and Giles, further teaches wherein the one or more processors, to identify the assets based on the data flow graph and the one or more data sources (Stojanovic, par.70 and 75), are configured to: identify the assets of a microservice of the cloud application that handles information (Stojanovic, par.70 and 79). Stojanovic fails to teach that the information is sensitive data. Giles further teaches sensitive data (Giles, par.65). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to include sensitive data, as taught and suggested by Giles, for the purpose of avoiding the use of excess computing processing power and network traffic volume by limiting the generation of new configuration policies to those cases when a pre-existing software configuration policy cannot remediate the software compliance error (Giles, par.15).

For claims 10 and 18, Stojanovic, as modified by Franchitti and Giles, further teaches wherein the one or more processors, to process the one or more data sources and the assets, with the machine learning model, to determine the methods, are configured to: process the one or more data sources, the assets, and security practices, with the machine learning model, to determine the methods (Stojanovic, par.7). Stojanovic fails to teach that the data is sensitive data. Giles further teaches sensitive data (Giles, par.65). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to include sensitive data, as taught and suggested by Giles, for the purpose of avoiding the use of excess computing processing power and network traffic volume by limiting the generation of new configuration policies to those cases when a pre-existing software configuration policy cannot remediate the software compliance error (Giles, par.15).
For claim 11, Stojanovic, as modified by Franchitti and Giles, fails to teach wherein the machine learning model is a pattern matching model. Giles further teaches wherein the machine learning model is a pattern matching model (Giles, par.25 and par.41). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to include a pattern matching model, as taught and suggested by Giles, for the purpose of avoiding the use of excess computing processing power and network traffic volume by limiting the generation of new configuration policies to those cases when a pre-existing software configuration policy cannot remediate the software compliance error (Giles, par.15).

For claims 13 and 20, Stojanovic, as modified by Franchitti and Giles, fails to teach wherein the one or more processors, to generate the remediation actions to correct the misconfigurations, are configured to: group the misconfigurations based on occurrence of a particular sensitive asset; generate potential remediation actions based on grouping the misconfigurations; and identify, as the remediation actions, a subset of the potential remediation actions based on the least number of modifications required to correct the misconfigurations. Giles further teaches wherein the one or more processors, to generate the remediation actions to correct the misconfigurations (Giles, abstract), are configured to: group the misconfigurations based on occurrence of a particular sensitive asset (Giles, par.65); generate potential remediation actions based on grouping the misconfigurations (Giles, par.65); and identify, as the remediation actions, a subset of the potential remediation actions based on the least number of modifications required to correct the misconfigurations (Giles, par.69).
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to include generating potential remediation actions, as taught and suggested by Giles, for the purpose of avoiding the use of excess computing processing power and network traffic volume by limiting the generation of new configuration policies to those cases when a pre-existing software configuration policy cannot remediate the software compliance error (Giles, par.15).

For claim 14, Stojanovic, as modified by Franchitti and Giles, fails to teach wherein the one or more processors, to modify the cloud application based on the remediation actions to generate the compliant cloud application, are configured to: incorporate the remediation actions in the cloud application to reconfigure the cloud application and generate the compliant cloud application. Giles further teaches wherein the one or more processors, to modify the cloud application based on the remediation actions to generate the compliant cloud application, are configured to: incorporate the remediation actions in the cloud application to reconfigure the cloud application and generate the compliant cloud application (Giles, par.65 and 66). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to incorporate the remediation actions in the cloud application, as taught and suggested by Giles, for the purpose of avoiding the use of excess computing processing power and network traffic volume by limiting the generation of new configuration policies to those cases when a pre-existing software configuration policy cannot remediate the software compliance error (Giles, par.15).
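The claims 13 and 20 limitation discussed above (group misconfigurations by sensitive asset, generate candidate remediation actions per group, and keep the candidates requiring the fewest modifications) can be sketched as follows. All data shapes, names, and the cost model are illustrative assumptions, not taken from the claims or the cited references:

```python
from collections import defaultdict

def select_remediations(misconfigurations, candidate_actions):
    """Group misconfigurations by sensitive asset, then pick, per group,
    the candidate action that requires the fewest modifications."""
    groups = defaultdict(list)
    for m in misconfigurations:
        groups[m["asset"]].append(m)

    selected = {}
    for asset in groups:
        # candidate_actions maps an asset to (action_name, modification_count) pairs.
        candidates = candidate_actions.get(asset, [])
        if candidates:
            selected[asset] = min(candidates, key=lambda c: c[1])[0]
    return selected

misconfigs = [
    {"asset": "customer-db", "issue": "public bucket"},
    {"asset": "customer-db", "issue": "no encryption"},
    {"asset": "audit-log", "issue": "world-readable"},
]
actions = {
    "customer-db": [("rebuild stack", 7), ("apply hardened policy", 2)],
    "audit-log": [("restrict ACL", 1)],
}
print(select_remediations(misconfigs, actions))
# → {'customer-db': 'apply hardened policy', 'audit-log': 'restrict ACL'}
```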
For claim 15, Stojanovic teaches A non-transitory computer-readable medium storing a set of instructions (par.512), the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device (par.512), cause the device to: receive cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, and a data classification ontology (Stojanovic teaches that cloud service provided within a cloud-based computing environment; and can serve as a single point of control for the design, simulation, deployment, development, operation, and analysis of data for use with software applications; including enabling data input from one or more sources of data (for example, in an embodiment, an input HUB); providing a graphical user interface that enables a user to specify an application for the data, where the system can perform an ontology analysis of a schema definition, to determine the types of data, and datasets or entities, associated with that schema; and generate, or update, a model from a reference schema that includes an ontology defined based on relationships between datasets or entities, and their attributes as Stojanovic teaches in abstract and par.7 and 79); wherein the cloud application data includes data identifying an architecture flow of the cloud application, a process flow of the cloud application, and a control flow of the cloud application (Stojanovic teaches that cloud service provided within a cloud-based computing environment; and can serve as a single point of control for the design, simulation, deployment, development, operation, and analysis of data for use with software applications; including enabling data input from one or more sources of data (for example, in an embodiment, an input HUB); providing a graphical user interface that enables a user to specify an application for the data, where the system can perform an ontology analysis of a schema definition, to 
determine the types of data, and datasets or entities, associated with that schema; and generate, or update, a model from a reference schema that includes an ontology defined based on relationships between datasets or entities, and their attributes, as Stojanovic teaches in the abstract and par.7 and 79); generate a knowledge model based on the knowledge model schema and the data classification ontology (Stojanovic teaches that the system can determine the types of data, and datasets or entities, associated with that schema; and generate, or update, a model from a reference schema that includes an ontology defined based on relationships between datasets or entities, and their attributes; a reference HUB including one or more schemas can be used to analyze data flows, and further classify or make recommendations such as, for example, transformations, enrichments, filtering, or cross-entity data fusion of an input data, as Stojanovic teaches in par.72); perform a dynamic flow analysis of the cloud application data and the data source identifiers to generate a data flow graph that depicts a flow of data to services from the data source (Stojanovic teaches an action that can be performed by dataflow applications (e.g., pipelines, Lambda applications) on a dataset or entity within a HUB, for projection onto another entity, as Stojanovic teaches in the abstract and par.7 and 93); process the data flow graph, with the knowledge model, to determine attributes in the data flow graph (Stojanovic teaches that data flows can be decomposed into a model describing transformations of data, predicates, and business rules applied to the data, and attributes used in the data flows, as Stojanovic teaches in par.71 and 72); identify one or more data sources that include the attributes (Stojanovic teaches that the system can provide support for auto-mapping of complex data structures, datasets or entities, between one or more sources or targets of data, referred to herein in some embodiments as HUBs; the auto-mapping can be driven by a metadata, schema, and statistical profiling of a dataset, and used to map a source dataset or entity associated with an input HUB to a target dataset or entity, or vice versa, to produce an output data prepared in a format or organization (projection) for use with one or more output HUBs, as Stojanovic teaches in par.69); identify sensitive assets based on the data flow graph and the one or more sensitive data sources (Stojanovic teaches that data AI system 150 can provide one or more services for processing and transforming data such as, for example, business data, consumer data, and enterprise data, including the use of machine learning processing, for use with a variety of computational assets such as, for example, databases, cloud data warehouses, storage systems, or storage services, as Stojanovic teaches in par.127); process the one or more data sources and the assets, with a machine learning model (Stojanovic teaches that data AI system 150 can provide one or more services for processing and transforming data, including the use of machine learning processing, for use with a variety of computational assets, as Stojanovic teaches in par.127 and 128).
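The data flow graph limitation mapped above (a graph depicting the flow of data from data sources to services, then used to identify the assets that sensitive data reaches) can be illustrated with a minimal sketch, assuming a simple adjacency-list graph; every node name below is hypothetical:

```python
from collections import deque

def reachable_assets(edges, sensitive_sources):
    """Walk a data flow graph (edges: node -> list of downstream nodes)
    and return every asset reachable from a sensitive data source."""
    seen = set()
    queue = deque(sensitive_sources)
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Hypothetical flow: a payments database feeds billing, which feeds reporting;
# a non-sensitive weather feed serves only a dashboard.
flow = {
    "payments-db": ["billing-svc"],
    "billing-svc": ["report-svc"],
    "weather-api": ["dashboard-svc"],
}
print(sorted(reachable_assets(flow, ["payments-db"])))
# → ['billing-svc', 'report-svc']
```

The breadth-first walk marks every service downstream of a sensitive source; those services are the "sensitive assets" that a misconfiguration scan would then target.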
Stojanovic teaches cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, and a data classification ontology, but does not explicitly teach the data residency constraints. Stojanovic therefore fails to explicitly teach: receive cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, data residency constraints, and a data classification ontology; generate a knowledge model based on the knowledge model schema, the data residency constraints, and the data classification ontology; determine sensitive attributes; identify one or more sensitive data sources that include the sensitive attributes; identify sensitive assets; process the one or more sensitive data sources and the sensitive assets with a machine learning model, to determine methods for identifying misconfigurations in the sensitive data sources and the sensitive assets; utilize the methods to identify misconfigurations in the one or more sensitive data sources and the sensitive assets and severities of the misconfigurations; generate remediation actions to correct the misconfigurations based on the severities of the misconfigurations; and modify the cloud application based on the remediation actions to generate a compliant cloud application. Franchitti teaches, in a similar system: receive cloud application data associated with a cloud application, data source identifiers, a knowledge model schema, data residency constraints, and a data classification ontology (Franchitti teaches the support of experts to seed the creation and management of the descriptive ontologies and taxonomies that it uses to capture and qualify the business problems and best-practice reusable business solutions that are stored in its knowledge base; when acquiring knowledge from experts through its divisions and practices, Archemy™ librarians use a tacit knowledge acquisition process that follows the protocol-based and knowledge modeling techniques described above very closely; Archemy™ librarians use hierarchy-generation techniques and the Protégé and ArchNav™ tools to model captured knowledge as ontologies and related taxonomies; the focus of knowledge capture is typically restricted to confined areas of larger business domains (e.g., subsets of contract law within the legal business field, symptom sets that pertain to specific diseases in healthcare); the premise is to create assemblies of business domain ontologies and related taxonomies that are focused on the business “dialects” used in specific business areas where everyone “speaks the same language,” as Franchitti teaches in par.130 and 369); generate a knowledge model based on the knowledge model schema, the data residency constraints, and the data classification ontology (Franchitti teaches that a system includes a networked arrangement of compute nodes and is configured to analyze any of a plurality of subsets of elements, components, terms, values, strings, words, or criteria of a “taxonomy” (also referred to herein as a repository) that is stored within the system; a taxonomy can include one or more fields, some or each of which can be associated with a “facet” defining one of: an idea, a concept, an artifact, a component, a procedure, or a skill; facets can include, but are not limited to: business capabilities, customer profiles, operational location characteristics, application infrastructure, data/information infrastructure, and technology infrastructure; facets can optionally be associated with an architecture domain (e.g., a business domain, an application domain, a data domain, or a technology domain), as Franchitti teaches in par.130 and 369).
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic to include data residency constraints, as taught and suggested by Franchitti, for the purpose of providing access to reusable components of business solutions and/or metrics, and providing access to catalogued definitions of business problems that, in some cases, are linked to solutions (e.g., the reusable software components) in the repository, for example to assist businesses in identifying faster, and/or reacting faster to, real-world problems and system capability needs (Franchitti teaches in par.15). Stojanovic, as modified by Franchitti, does not explicitly teach: determine sensitive attributes; identify one or more sensitive data sources that include the sensitive attributes; identify sensitive assets; process the one or more sensitive data sources and the sensitive assets with a machine learning model, to determine methods for identifying misconfigurations in the sensitive data sources and the sensitive assets; utilize the methods to identify misconfigurations in the one or more sensitive data sources and the sensitive assets and severities of the misconfigurations; generate remediation actions to correct the misconfigurations based on the severities of the misconfigurations; and modify the cloud application based on the remediation actions to generate a compliant cloud application.
Giles teaches, in a similar system: determine sensitive attributes (Giles teaches a data logging event in which the sensitive information should be masked in the data log; par.37); identify one or more sensitive data sources that include the sensitive attributes (Giles teaches wherein the at least one first compliance error comprises a data log generated by the first software instance, the data log comprising sensitive information; par.74 and 65); identify sensitive assets (par.65); process the one or more sensitive data sources and the sensitive assets with a machine learning model, to determine methods for identifying misconfigurations in the sensitive data sources and the sensitive assets (Giles teaches that the trigger monitoring system may detect this compliance error in response to scanning the data log files generated by the software instance and comparing the contents of the data log to terms or entries categorized as sensitive in the compliance repository; par.65); utilize the methods to identify misconfigurations in the one or more sensitive data sources and the sensitive assets and severities of the misconfigurations (Giles teaches that the trigger monitoring system may detect this compliance error in response to scanning the data log files generated by the software instance and comparing the contents of the data log to terms or entries categorized as sensitive in the compliance repository; accordingly, the trigger data may include the compliance error of “exposing customer social security numbers” when nine-digit numbers in the form of XXX-XX-XXXX are detected within the data log; par.65); generate remediation actions to correct the misconfigurations based on the severities of the misconfigurations (Giles teaches that compliance remediation system 110 may utilize a rules-based platform (e.g., algorithmic rules stored on compliance repository 140) and/or a trained machine learning model to remediate one or more software compliance errors associated with a software configuration file; par.25 and 65); and modify the cloud application based on the remediation actions to generate a compliant cloud application (Giles teaches that trigger monitoring system 120 may transmit a notification to compliance remediation system 110 to update software dependency A to a compliant version, for example, by automatically updating software dependency A to a new software version; par.65 and 66). It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic, as modified by Franchitti, to include misconfigurations in the sensitive data sources and the sensitive assets and remediation actions, as taught and suggested by Giles, for the purpose of avoiding the use of excess computing processing power and network traffic volume by limiting the generation of new configuration policies to those cases when a pre-existing software configuration policy cannot remediate the software compliance error (Giles, par.15).

Claims 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Stojanovic et al (2018/0052870) in view of Franchitti (2019/0171438) and Giles (2023/0141524) as applied to the claims above, and further in view of Shahul Hameed et al (2023/0275912).
For claims 12 and 19, Stojanovic, as modified by Franchitti and Giles, teaches all the limitations as previously set forth, except for wherein the one or more processors, to utilize the methods to identify the misconfigurations in the one or more sensitive data sources and the sensitive assets and the severities of the misconfigurations, are configured to: generate an incident bipartite graph based on the methods, the sensitive assets, and the one or more sensitive data sources; and identify the misconfigurations in the one or more sensitive data sources and the sensitive assets and the severities of the misconfigurations based on the incident bipartite graph. Shahul Hameed teaches, in a similar system, wherein the one or more processors, to utilize the methods to identify the misconfigurations in the one or more sensitive data sources and the sensitive assets and the severities of the misconfigurations (Shahul Hameed, abstract), are configured to: generate an incident bipartite graph based on the methods, the sensitive assets, and the one or more sensitive data sources (Shahul Hameed, par.7); and identify the misconfigurations in the one or more sensitive data sources and the sensitive assets and the severities of the misconfigurations based on the incident bipartite graph (Shahul Hameed teaches that a graph-based incident analysis tool 138 can be used to selectively retrieve (sub-)sets of the data records, e.g., all records stored for an organization that fall within a timeframe associated with a detected security incident, to generate a graph representation of the retrieved data, cluster the graph representation, and rank the resulting clusters, e.g., based on the number of associated alerts or IoCs, to home in on one or more clusters of alerts 132 and network activity 134 most relevant to the analyzed incident, as Shahul Hameed teaches in par.13).
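The incident bipartite graph in this limitation (detection methods on one side, sensitive assets and data sources on the other, with clusters ranked by associated findings, per the quoted Shahul Hameed passage) can be sketched roughly as follows; the ranking heuristic and every name are illustrative assumptions rather than the reference's actual algorithm:

```python
from collections import defaultdict

def build_bipartite(detections):
    """Build a bipartite incidence structure: detection-method nodes on one
    side, asset/data-source nodes on the other, with an edge per finding."""
    method_to_assets = defaultdict(set)
    asset_to_methods = defaultdict(set)
    for d in detections:
        method_to_assets[d["method"]].add(d["asset"])
        asset_to_methods[d["asset"]].add(d["method"])
    return method_to_assets, asset_to_methods

def rank_assets(detections):
    """Rank assets by how many distinct methods flagged them,
    as a crude severity proxy."""
    _, asset_to_methods = build_bipartite(detections)
    return sorted(asset_to_methods, key=lambda a: -len(asset_to_methods[a]))

findings = [
    {"method": "log-scan", "asset": "customer-db"},
    {"method": "policy-check", "asset": "customer-db"},
    {"method": "log-scan", "asset": "audit-log"},
]
print(rank_assets(findings))
# → ['customer-db', 'audit-log']
```

An asset flagged by several independent methods surfaces first, which loosely mirrors ranking clusters by the number of associated alerts.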
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify Stojanovic, as modified by Franchitti and Giles, to include the incident bipartite graph, as taught and suggested by Shahul Hameed, for the purpose of generating a multipartite graph that represents entities within or connected to the computer network (herein collectively “network entities”), such as machines within the computer network, processes spawned by the machines, and external network destinations connected to by the processes, as different types of nodes, and the relations between them as edges between the nodes (Shahul Hameed, par.7).

Response to Amendments/Arguments

Applicant's arguments filed 02/25/2026 have been fully considered, but they are not persuasive. With respect to Applicant's arguments in pages 2-3 regarding the amended limitation of claim 1, Applicant respectfully traverses the rejection and submits that Stojanovic par.127, as relied upon in the Office action and reproduced below: “[a] system, e.g., data AI system 150, can provide one or more services for processing and transforming data such as, for example, business data, consumer data, and enterprise data, including the use of machine learning processing, for use with a variety of computational assets such as, for example, databases, cloud data warehouses, storage systems, or storage services,” merely teaches a data AI system that transforms data by using machine learning processing, whereas amended claim 1 determines methods to identify the misconfiguration in the sensitive data sources by processing sensitive data sources with a machine learning model.
Applicant therefore respectfully submits that claim 1 is inventive with respect to Stojanovic. However, the examiner respectfully disagrees, because Stojanovic teaches a computational environment that enables the design, creation, monitoring, and management of software applications (for example, a dataflow application, pipeline, or Lambda application), including the use of, e.g., a data AI subsystem that provides machine learning capabilities, and Stojanovic further teaches the data AI system including the use of machine learning processing, for use with a variety of computational assets such as, for example, databases, cloud data warehouses, storage systems, or storage services. However, Stojanovic does not explicitly teach identifying the misconfiguration in the sensitive data sources by processing sensitive data sources with a machine learning model. The secondary reference, Giles, teaches that the monitoring system may detect this compliance error in response to scanning the data log files generated by the software instance and comparing the contents of the data log to terms or entries categorized as sensitive in the compliance repository, and that compliance remediation system 110 may utilize a rules-based platform (e.g., algorithmic rules stored on compliance repository 140) and/or a trained machine learning model to remediate one or more software compliance errors associated with a software configuration file, as Giles teaches in par.25 and 65. Therefore, the combination of the machine learning system for the design, creation, monitoring, and management of software applications of Stojanovic with the detection of a compliance error, in response to scanning the data log files generated by the software instance and comparing the contents of the data log to terms or entries categorized as sensitive in the compliance repository, of Giles meets the claim limitation.
With respect to Applicant's arguments in page 4 that Giles discloses the use of a machine learning model to remediate the compliance error, whereas the claimed subject matter does not employ a machine learning model to perform remediation; that Applicant's independent claims (e.g., claim 1) instead recite that the machine learning model is used to determine a method for identifying a misconfiguration, after which the misconfiguration is remediated by generation of a remediation action based on severities of the misconfigurations; and that, because Giles teaches remediation by a machine learning model, a person of ordinary skill in the art would not be motivated to modify Giles to instead use machine learning solely for identification while performing remediation through severity-based action generation, as doing so would require a fundamental redesign of Giles' remediation architecture: the examiner respectfully disagrees. The secondary reference, Giles, teaches that the monitoring system may detect this compliance error in response to scanning the data log files generated by the software instance and comparing the contents of the data log to terms or entries categorized as sensitive in the compliance repository, and that compliance remediation system 110 may utilize a rules-based platform (e.g., algorithmic rules stored on compliance repository 140) and/or a trained machine learning model to remediate one or more software compliance errors associated with a software configuration file. Thus, Giles uses the monitoring system, which is a machine learning system, to identify compliance errors, which are misconfigurations, associated with a software configuration file, as Giles teaches in par.25 and 65. Therefore, the combination of the machine learning system for the design, creation, monitoring, and management of software applications of Stojanovic with the identification of a compliance error, which is a misconfiguration, in response to scanning the data log files generated by the software instance and comparing the contents of the data log to terms or entries categorized as sensitive in the compliance repository, of Giles meets the claim limitation.

Regarding the arguments directed to the dependent claims, said arguments are moot because the applied references are not considered to have the alleged differences, and therefore are considered to properly show that for which they were cited.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AYUB A MAYE, whose telephone number is (571) 270-5037. The examiner can normally be reached Monday-Friday, 9 AM-5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SHEWAYE GELAGAY, can be reached at 571-272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AYUB A MAYE/
Examiner, Art Unit 2436

/SHEWAYE GELAGAY/
Supervisory Patent Examiner, Art Unit 2436

Prosecution Timeline

Jun 28, 2023
Application Filed
Dec 21, 2025
Non-Final Rejection — §103
Feb 25, 2026
Response Filed
Mar 16, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574211
PERSONAL PRIVATE KEY ENCRYPTION DEVICE
2y 5m to grant Granted Mar 10, 2026
Patent 12574247
DEVICE FOR COMPUTING SOLUTIONS OF LINEAR SYSTEMS AND ITS APPLICATION TO DIGITAL SIGNATURE GENERATIONS
2y 5m to grant Granted Mar 10, 2026
Patent 12547740
INFORMATION PROCESSING DEVICES AND INFORMATION PROCESSING METHODS
2y 5m to grant Granted Feb 10, 2026
Patent 12526274
Geolocated Portable Authenticator for Transparent and Enhanced Information-Security Authentication of Users
2y 5m to grant Granted Jan 13, 2026
Patent 12373573
Vulnerability Processing Method, Apparatus and Device, and Computer-readable Storage Medium
2y 5m to grant Granted Jul 29, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+41.6%)
5y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 652 resolved cases by this examiner. Grant probability derived from career allow rate.
