DETAILED ACTION
Status of Claims
This communication is a first action on the merits. Claims 1-20, as originally filed, are pending and have been considered as follows.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 19-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. Regarding independent claim 19, the broadest reasonable interpretation of a claim drawn to a computer readable medium (also called a machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009; p. 2. As per claim 20, the dependent claim does not correct the above deficiencies and is, likewise, rejected as being directed to non-statutory subject matter. Therefore, the instant claims are directed to non-statutory subject matter and are rejected under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Orun (US 2022/0237202 A1, hereinafter Orun).
As per claim 1, Orun teaches a computing system, comprising:
a memory;
a processing system comprising one or more processors, wherein the processing system is in communication with the memory and configured to (pg. 6, [0061-0062] which describes the electronic device including a set of one or more processors coupled to one or more machine-readable storage media):
generate a data model comprising one or more data sources, one or more data use cases, and one or more data governance policies retrieved from one or more of a plurality of data platforms via one or more of a plurality of platform and vendor agnostic application programming interfaces (APIs), wherein the one or more data sources, the one or more data use cases, the one or more data governance policies, and the one or more platform and vendor agnostic APIs are aligned to one or more data domains (abstract and pg. 1, [0001] which describes the systems, methods, and computer-readable media for data catalogs, metadata repositories, data discovery, and data governance, in particular for a canonical model-driven active metadata exchange for distributed data catalogues, where the CDM maintains a single set of use case agnostic mappings between data sources and the CDM; pg. 2, [0021] which describes an application independent metadata repository with a canonical data model that maintains a single set of use case agnostic mappings between data sources and the CDM);
create, based on information from the one or more data sources, a data linkage between one of the data sources, one of the data use cases, one of the data governance policies, and one of the data domains, wherein the data linkage is enforced by the platform and vendor agnostic API, wherein the data use case is monitored and controlled by a data use case owner, and wherein the data domain is monitored and controlled by a data domain executive (abstract which describes how the CDM maintains a single set of use case agnostic mappings between data sources and the CDM; and pg. 2, [0010] which describes how many MDM systems also attempt to provide a set of linkages, which involves identifying records across the enterprise's DBs that are related to the same entity and identifying individual elements, e.g. client devices or platforms, that tend to update records pertaining to the same entity; and pg. 4, [0037] which describes how the data architect logs into the data manager for master data management that may bring data from different systems of record and references; and pg. 5, [0038] which describes how the data architect uses the data manager to browse the data catalog, look up, search for, or otherwise query for a canonical entity unique identifier for the selected object and to submit a distributed query against respective metadata repositories);
determine, based on the one or more data governance policies and quality criteria set forth by the data use case owner and the data domain executive, the level of quality of at least one of the one or more data sources (pg. 1, [0009] which describes how data catalogs are or include metadata repositories that provide listings of data elements/objects that are of interest to an enterprise, including compliance operations, the apps/servers and/or databases that use the data elements/objects; and pg. 2, [0014] which describes how the analytics environment may support descriptive analytics, which encompasses operational reporting, diagnostic analyses, and correlation analyses, as well as integrated analytics and/or feature extraction and text analytics; and pg. 5, [0040] which describes how the data manager retrieves data source information from the data sources and retrieves data source information from connected apps to compare to data sources it has registered to identify what has not been registered or mapped; and pg. 5, [0048] which describes how the data manager includes a user interface that indicates potential value or priority/ranking values to assist usability); and
generate, based on the level of quality of the data source, a report indicating the status of the data domain and data use case (pg. 2, [0015] which describes data analysis which involves turning the data into information and insights and may include the app servers and/or analytics environments generating data science/analytics reports and/or visualizations; and pg. 5, [0048] which describes how the data manager provides visualizations of the available mapped data items in one or more other data sources and other unmapped data items).
As per claim 10, it refers to a method for performing the above steps. It recites limitations already addressed by claim 1 above and is therefore rejected under the same art and rationale. Furthermore, Orun discloses that the steps are performed by the systems, methods, and computer-readable media for data catalogs, metadata repositories, data discovery, and data governance, in particular for a canonical model-driven active metadata exchange for distributed data catalogues (abstract and pg. 1, [0001]).
As per claim 19, it refers to a computer readable medium for performing the above steps. It recites limitations already addressed by claim 1 above and is therefore rejected under the same art and rationale. Furthermore, Orun discloses that the steps are performed by an electronic device including a set of one or more processors coupled to one or more machine-readable storage media (pg. 6, [0061-0062]).
As per claim 2, Orun discloses all the elements of claim 1, and further teaches wherein the platform and vendor agnostic APIs are configured to ingest data comprising a plurality of data structure formats (pg. 2, [0011] which describes the data catalog that performs various cataloging functions such as data ingestion, refinement, and analysis, and provides access to the data assets, where data ingestion involves bringing data into an analytics ecosystem from the data sources; and pg. 2, [0021] which describes an application independent metadata repository with a canonical data model that maintains a single set of use case agnostic mappings between data sources and the CDM).
As per claim 11, it refers to the method of claim 10 used for performing the above steps. It recites limitations already addressed by claim 2 above, and is therefore rejected under the same art and rationale.
As per claim 3, Orun discloses all the elements of claim 1, and further teaches wherein the one or more data use cases include one or more of a regulatory use case, a risk use case, or an operational use case deployed on one or more of a data reporting platform, a data analytics platform, or a data modeling platform (abstract and pg. 1, [0001] which describes the systems, methods, and computer-readable media for data catalogs, metadata repositories, data discovery, and data governance, in particular for a canonical model-driven active metadata exchange for distributed data catalogues, where the CDM maintains a single set of use case agnostic mappings between data sources and the CDM; and pg. 2, [0016] which describes how ML automation may be used to provide data discovery recommendations to dashboard and/or performance monitor to help data consumers discover data that can best assist them in reaching desired goals; and pg. 13, [0108] which describes a platform for business-as-a-service that brings together data about a customer from a variety of sources, building a single view of that customer).
As per claim 12, it refers to the method of claim 10 used for performing the above steps. It recites limitations already addressed by claim 3 above, and is therefore rejected under the same art and rationale.
As per claim 4, Orun discloses all the elements of claim 1, and further teaches wherein the one or more data governance policies include one or more of data risks, data controls, or data issues retrieved from risk systems (abstract and pg. 1, [0001] which describes the systems, methods, and computer-readable media for data catalogs, metadata repositories, data discovery, and data governance, in particular for a canonical model-driven active metadata exchange for distributed data catalogues, where the CDM maintains a single set of use case agnostic mappings between data sources and the CDM; and pg. 2, [0017] which describes the data governance manager capturing a list of what business information is found in an app; and pg. 15, [0123-0124] which describes how computing environments store for each entity multiple database objects and/or records, each of which can have associated privacy and data governance characteristics and parameters, as well as data classifications that are associated with fields of database objects and used to determine data permissions, data uses, privacy requirements, access rights, data governance, etc., where data classifications may include one or more of public, internal, confidential, restricted, and/or mission critical).
As per claim 13, it refers to the method of claim 10 used for performing the above steps. It recites limitations already addressed by claim 4 above, and is therefore rejected under the same art and rationale.
As per claim 5, Orun discloses all the elements of claim 1, and further teaches wherein the one or more data domains are defined in accordance with enterprise-established guidelines (pg. 2, [0013] which describes how the analytics ecosystem or analytics platform is an environment that contains various apps/tools and/or services to create, build, and collaborate around various data and may be hosted by one or more analytics environment servers and may provide custom dashboards and data visualization tools that allow data consumers to manipulate data with predictive models for different micro and macro-level scenarios; and pg. 6, [0058] which describes another implementation that includes an API defining functions, methods, variables, data structures, and/or protocols for the instructions for any of the aforementioned example implementations; and pg. 11, [0092] which describes the API that refers to interfaces for software components to communicate with each other and/or developer tool(s) that allows for systems to talk to each other, where the API(s) are source code specification(s) or a collection of libraries, routines, methods, data structures, fields, objects, classes, variables, remote calls, and the like that defines how a software element may access or interact with the underlying platform capabilities and features).
As per claim 14, it refers to the method of claim 10 used for performing the above steps. It recites limitations already addressed by claim 5 above, and is therefore rejected under the same art and rationale.
As per claim 6, Orun discloses all the elements of claim 5, and further teaches wherein each data domain comprises a sub-domain (pg. 8, [0067] which describes how the electronic devices implementing the clients and the data catalog/canonical data model service(s) would be communicatively coupled and would establish between them (or through one or more other layers and/or other services) connections for submitting selections of queries for data sources and/or object mappings, and requests to update/add new data sources to the data catalog/canonical data model service; and pg. 8, [0068] which describes how the set of one or more processors typically execute software to instantiate a virtualization layer and one or more software containers running on top of an operating system and allows for the creation of multiple software containers that may each be used to execute a set of one or more applications).
As per claim 15, it refers to the method of claim 14 used for performing the above steps. It recites limitations already addressed by claim 6 above, and is therefore rejected under the same art and rationale.
As per claim 7, Orun discloses all the elements of claim 1, and further teaches wherein to create the data linkage, the processing system is further configured to:
identify, based on one or more data attributes, each of the one or more data sources;
determine the necessary data controls for each of the one or more data sources; and
map each of the one or more data sources to one or more of the one or more data use cases, the one or more data governance policies, or the one or more data domains (abstract and pg. 1, [0001] which describes the systems, methods, and computer-readable media for data catalogs, metadata repositories, data discovery, and data governance, in particular for a canonical model-driven active metadata exchange for distributed data catalogues, where the CDM maintains a single set of use case agnostic mappings between data sources and the CDM; and pg. 2, [0010] which describes how many MDM systems also attempt to provide a set of linkages, which involves identifying records across the enterprise's DBs that are related to the same entity and identifying individual elements, e.g. client devices or platforms, that tend to update records pertaining to the same entity; and pg. 2, [0015] which describes data analysis which involves turning the data into information and insights and may include the app servers and/or analytics environments generating data science/analytics reports and/or visualizations, including the data sources that have been already mapped to the analytics environment, the data sources that are available in the customer data platform, what data sources have been mapped, and what data sources have been mapped to my master data management solution; and pg. 3, [0020-0021] which describes the providing of one data source mapping across all apps/services, as well as including an application independent metadata repository with a CDM that maintains a single set of use case agnostic mappings between data sources and the CDM, where technical names or developer derived names of different data structures/objects are mapped to the CDM, which is system and app/service agnostic to enable effective metadata discovery, where the data catalog can drive data computation processes, such as how a data score specific value should map and be standardized using an enterprise-defined standard; and pg. 4, [0032-0033] which describes how the CIM is organized into various components including subject area, entity groups, entities, and attributes, where attributes are a unique characteristic of an entity; and pg. 4, [0037] which describes how the data architect logs into the data manager for master data management that may bring data from different systems of record and references; and pg. 5, [0038] which describes how the data architect uses the data manager to browse the data catalog, look up, search for, or otherwise query for a canonical entity unique identifier for the selected object and to submit a distributed query against respective metadata repositories; and pg. 5, [0043-0044] which describes how the data manager performs a look up of source objects mapped to the selected object in the identified apps/services to retrieve a list of data items that have been mapped to CDM attributes of the selected objects, where the data manager may include a user interface allowing the end user to choose to view what data sources and/or data items may be new; and pg. 15, [0123-0124] which describes how computing environments store for each entity multiple database objects and/or records, each of which can have associated privacy and data governance characteristics and parameters, as well as data classifications that are associated with fields of database objects and used to determine data permissions, data uses, privacy requirements, access rights, data governance, etc., where data classifications may include one or more of public, internal, confidential, restricted, and/or mission critical).
As per claim 16, it refers to the method of claim 10 used for performing the above steps. It recites limitations already addressed by claim 7 above, and is therefore rejected under the same art and rationale.
As per claim 20, it refers to the computer readable medium of claim 19 used for performing the above steps. It recites limitations already addressed by claim 7 above, and is therefore rejected under the same art and rationale.
As per claim 8, Orun discloses all the elements of claim 7, and further teaches wherein the processing system is further configured to:
grant access to the data use case owner to the data controls for one or more of the one or more data sources, wherein the one or more data sources are mapped to the data use case that is monitored and controlled by the data use case owner; and
receive data indicating that the data use case owner has verified the data controls for the one or more data sources (pg. 3, [0022] which describes how the CDM is a database object or other data structure that comprises one or more authenticated registered data source identifier, that is a unique identifier assigned to, or otherwise associated with an individual data source; and pg. 5, [0047] which describes how the data manager determines whether the selected new data source can be connected with the data manager, where the data manager compares authentication and authorization settings of the integration app to determine if it can be securely connected in the data manager, where the data manager may obtain the appropriate security settings and/or credentials to add the new data source; and pg. 10, [0078] which describes the system that includes a set of one or more servers that are running on server electronic devices and that are configured to handle requests for any authorized user associated with any tenant, where the user devices communicate with the servers of the system to request and update tenant-level data and system-level data; and pg. 15, [0123-0124] which describes how computing environments store for each entity multiple database objects and/or records, each of which can have associated privacy and data governance characteristics and parameters, as well as data classifications that are associated with fields of database objects and used to determine data permissions, data uses, privacy requirements, access rights, data governance, etc., where data classifications may include one or more of public, internal, confidential, restricted, and/or mission critical).
As per claim 17, it refers to the method of claim 16 used for performing the above steps. It recites limitations already addressed by claim 8 above, and is therefore rejected under the same art and rationale.
As per claim 9, Orun discloses all the elements of claim 1, and further teaches wherein the generated report further indicates one or more of the number of data sources determined to have the necessary level of quality, the number of data sources approved by the data domain executive, or the number of use cases using data sources approved by the data domain executive (pg. 1, [0009] which describes how data catalogs are or include metadata repositories that provide listings of data elements/objects that are of interest to an enterprise, including compliance operations, the apps/servers and/or databases that use the data elements/objects; and pg. 2, [0014] which describes how the analytics environment may support descriptive analytics, which encompasses operational reporting, diagnostic analyses, and correlation analyses, as well as integrated analytics and/or feature extraction and text analytics; and pg. 2, [0015] which describes data analysis which involves turning the data into information and insights and may include the app servers and/or analytics environments generating data science/analytics reports and/or visualizations; and pg. 5, [0040] which describes how the data manager retrieves data source information from the data sources and retrieves data source information from connected apps to compare to data sources it has registered to identify what has not been registered or mapped; and pg. 5, [0048] which describes how the data manager provides visualizations of the available mapped data items in one or more other data sources and other unmapped data items, as well as how the data manager includes a user interface that indicates potential value or priority/ranking values to assist usability).
As per claim 18, it refers to the method of claim 10 used for performing the above steps. It recites limitations already addressed by claim 9 above, and is therefore rejected under the same art and rationale.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Zagoudis (US 2015/0169595 A1) teaches an information governance platform.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ASHLEY Y YOUNG whose telephone number is (571)270-5294. The examiner can normally be reached Mondays, Tuesdays, and Thursdays, 9:00a-3:00p, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Beth Boswell, can be reached at (571) 272-6737. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ASHLEY Y YOUNG/Examiner, Art Unit 3625
/BETH V BOSWELL/Supervisory Patent Examiner, Art Unit 3625