DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 12/16/2025 have been fully considered.
35 USC § 102 & 35 USC § 103:
Regarding Applicant’s Argument - Examiner’s response: Applicant’s arguments with respect to the rejection(s) under 35 USC § 102/103 have been fully considered, and upon further consideration, a new ground(s) of rejection is made in view of US 20190228014 A1; LIU; Wenjie et al. (hereinafter Liu). The examiner recommends further elaborating on the "mapping file" limitation in the independent claims. The examiner believes amendments directed towards the parameters/factors involved in the mapping file will help overcome the current prior art and move the application towards allowance. If the applicant would like further guidance for overcoming the prior art, please call the examiner at 571-272-5212.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-10 and 16-30 are rejected under 35 U.S.C. 103 as being unpatentable over US 20110208663 A1; Kennis; Peter H. et al. (hereinafter Kennis) in view of US 20240256541 A1; Gladwin; S. Christopher et al. (hereinafter Gladwin) and US 20190228014 A1; LIU; Wenjie et al. (hereinafter Liu).
Regarding claim 1, Kennis teaches A computer-program product tangibly embodied in a non-transitory machine-readable storage medium, the computer-program product including instructions operable to cause one or more processor devices to: (Kennis [0121] Application: a computer program that operates on a computer system [0185] the inventions will be described in the general context of computer-executable instructions, such as program modules, being executed by computers in networked environments. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. [263-265] further elaborates on software/computer readable methods) access a mapping file for a relational database, the relational database comprising a plurality of tables, the plurality of tables comprising a plurality of data elements, wherein the mapping file comprises a plurality of predefined join operations that each define a join between two tables of the plurality of tables and a corresponding join type… (Kennis [FIG.1] shows the system accessing mapping files for the relational database and the corresponding join operations which are linked [270] file 1500, provided as a means for extracting information from certain types of data [0290] A process Mapper comprises steps for opening a connection to the staging database 155, opening a connection to the monitoring database 175, reading a corresponding mapper file to obtain information required for the data renaming and transformation (which may include base mappings provided with the system as well as custom mapping resulting from a customization operation to a base mapping), and validating the mappings to ensure consistency with the ontology. 
Then for each table in the monitoring database that is to receive transformed/mapped data, steps are taken to query/join source staging tables [295] the mapping file contains information needed to identify the source tables, entity names, target tables, and other required information. For example, the mapping file 2200 includes entity definitions delimited by entity tags <entity>, with associated information identifying a data source <source></source>, table names <name>, any database join operations that might be required to obtain the required data from multiple tables <join></join>...a mapping of the corresponding filed name and table) and wherein the mapping file further comprises classification information that associates each of the plurality of data elements with a corresponding element class of a plurality of element classes; (Kennis [0032] a mapping file that identifies specific tables and column in a schema of a data source, and corresponding specific tables and columns in a schema of the monitoring database. [0202] A knowledge base 165 stores information required by the extractor 140 (extraction data in the form of extractor files), information required by the mapper 150 (mapping data in the form of mapping files and ontology files), and a plurality of computer-executable policy statements or frames 167 that constitutes the rules and/or logic for determining exceptions. [0296] FIG. 23 illustrates an exemplary ontology ...with the mapping file ... file includes information identifying an entity that is a monitoring entity in the monitoring database, shown as <entity></entity>. This entity includes certain data items such as a name <name></name>, a title <title></title>, a description <description></description>, an identifier <typeid></typeid>, and a linkage to one or more related entities <linkage></linkage>. 
The ontology file also includes one or more field identifiers <field></field> that specify data fields of records in the monitoring database; each of these fields has a corresponding name, description, and type, as identified with corresponding tags.) receive an input indicative of selection of a plurality of target data elements from the plurality of data elements, the plurality of target data elements comprising two or more target data elements from at least two different tables of the plurality of tables, wherein the classification information of the mapping file associates the two or more target data elements with a common element class of the plurality of element classes; (Kennis [259] a target or destination table identifier in the staging database <staging_table>, one or more key fields <key_field> that identify keys to table(s) that are to be extracted, and one or more data fields <field> that identify particular data fields that are to be extracted. If desired, filters and queries can be embedded into the file to filter or retrieve particular data items. [0293] The mapper is responsive to an ontology source table 2111 to select only the needed data items from the source database in table 2101 and store those items as identified by the field names shown in the target table...[294] the selected subset of information from the source table 2101, and as was previously described, predetermined metadata is added to each entity created and stored, e.g. Revision_ID, Entity_ID, Entity_Version, Actor_ID, Update_Time. It will be further understood that the mapper 150 is responsive to a stored ontology mapping predetermined source table data fields to predetermined target table data fields [295] identify the source tables, entity names, target tables, and other required information. 
For example, the mapping file 2200 includes entity definitions delimited by entity tags <entity>, with associated information identifying a data source <source></source>, table names <name>, any database join operations that might be required to obtain the required data from multiple tables <join></join>, key fields that might be required <key></key>, field names within tables <field></field>, and a mapping of the corresponding filed name and table, e.g. a field with the name VENDOR_ID obtains its data from the source table and field VAB.ABALPH, as shown in the figure....[FIG.1] shows a corresponding visual of the system ) dynamically select a set of predefined join operations from the plurality of predefined join operations of the mapping file, wherein the set of predefined join operations are selected dynamically in response to receiving the input indicative of the selection of the plurality of target data elements, and wherein the set of predefined join operations comprises… (Kennis [FIG.21&22] show corresponding mapping file with predefined operations [259] (b) determine what specific data from that data source is to be obtained, i.e. what particular fields from what particular tables, and (c) where that data is to be stored or cached in the staging database. Thus, the exemplary extractor file 900 includes parameters or tags for a description of the extraction <description>, identification of the data source <extractor_name>, a source table identifier in the data source <source_table>, a target or destination table identifier in the staging database <staging_table>, one or more key fields <key_field> that identify keys to table(s) that are to be extracted, and one or more data fields <field> that identify particular data fields that are to be extracted. If desired, filters and queries can be embedded into the file to filter or retrieve particular data items [0290] ... 
mapper file to obtain information required for the data renaming and transformation, and validating the mappings to ensure consistency with the ontology. Then for each table in the monitoring database that is to receive transformed/mapped data, steps are taken to query/join source staging tables to retrieve particular data items or fields, perform any necessary table[295] identify the source tables, entity names, target tables, and other required information. For example, the mapping file 2200 includes entity definitions delimited by entity tags <entity>, with associated information identifying a data source <source></source>, table names <name>, any database join operations that might be required to obtain the required data from multiple tables <join></join>, key fields that might be required <key></key>, field names within tables <field></field>, and a mapping of the corresponding filed name and table, e.g. a field with the name VENDOR_ID obtains its data from the source table and field VAB.ABALPH, as shown in the figure...[FIG.1] shows a corresponding visual of the system ) generate a unit of software instructions that, when executed, implements the set of predefined join operations; execute the unit of software instructions to retrieve the plurality of target data elements; and output a structured data object comprising the plurality of target data elements. (Kennis [FIG.1] shows implementing predefined join operations for retrieving target data and outputting corresponding data [0121] Application: a computer program that operates on a computer system [185]the inventions will be described in the general context of computer-executable instructions, such as program modules, being executed by computers in networked environments. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. 
Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. [263-265] further elaborates on software/computer readable methods) Kennis does not explicitly teach: define a join between two tables of the plurality of tables and a corresponding join type of a plurality of join types; classification information that associates each of the plurality of data elements with a corresponding element class of a plurality of element classes. However, Gladwin teaches define a join between two tables of the plurality of tables and a corresponding join type of a plurality of join types (Gladwin [0381] In some embodiments, the projection step includes retrieving values 2708.2.2, 2708.5.2, and 2708.Z.2 based on performing a JOIN operation, such as an inner join operation and/or other type of join operation. The JOIN operation can be performed upon a first table corresponding to the filtered record subset 2567 and upon a second table corresponding to the full set of values 2708 stored in secondary storage system 2508 for the dataset 2500. In particular, an equality condition corresponding to equality of the one or more values of the unique identifier field set 2565 and/or other set of fields of the first table with values of a set of corresponding one or more fields of the second table can be utilized to perform the JOIN operation. Output of the JOIN operation thus corresponds to only ones of the set of values 2708 stored in secondary storage system 2508 storing metadata values for the unique identifier field set 2565 and/or other set of fields that match the values of the unique identifier field set 2565 and/or other set of fields for at least one sub-record in the filtered record subset 2567, corresponding to only ones of the set of values 2708 from the same original records 2522 as the sub-records in the filtered record subset 2567.
In some embodiments, this JOIN operation is performed in performing projection step 2546 based on being indicated in the query plan data 2554 and/or being included in a query operator execution flow determined for the query.[1082] The additional operators 2520 can be applied in generating further processed filtered row set data 4146, where the records in filtered row set are further processed accordingly to render generation of further processed filtered row set data 4146. For example, the further processed filtered row set data 4146 is generated based on request processing module performing the additional operators 2520 upon filtered row set 3146 and/or otherwise executing the additional operators in conjunction with generating filtered row set 3146. The additional operators 2520 can include one or more: join operators (e.g. outer join, inner join, left join, right join, etc.), aggregator operators (e.g. summation, average, max, min, etc.), blocking operators, set operators (e.g. set intersection, set union, set difference), machine learning operators (e.g. to train a machine learning model and/or apply a machine learning model to generate inference data), linear algebra operators, non-relational operators, any SQL operators, any custom operators, and/or other operators.) classification information that associates each of the plurality of data elements with a corresponding element class of a plurality of element classes; (Gladwin [0148] FIG. 22 illustrates an example of a segment structure for a segment of the segment group. The segment structure for a segment includes the data & parity section, a manifest section, one or more index sections, and a statistics section. The segment structure represents a storage mapping of the data (e.g., data slabs and parity data) of a segment and associated data (e.g., metadata, statistics, key column(s), etc.) 
regarding the data of the segment.[0184] The designated computing device sends a segment to each computing device in the storage cluster, including itself. Each of the computing devices stores their segment of the segment group. As an example, five segments 29 of a segment group are stored by five computing devices of storage cluster 35-1. The first computing device 18-1-1 stores a first segment of the segment group; a second computing device 18-2-1 stores a second segment of the segment group; and so on. With the segments stored, the computing devices are able to process queries (e.g., query components from the Q&R sub-system 13) and produce appropriate result components.[0855] In some embodiments, the configuration data 3310 can include/indicate formatting data 3351. For example, the formatting data can indicate formatting identifiers and/or information regarding formatting data denoting how various records are arranged in and/or extractable from various objects. As another example, the formatting data can indicate file type/file extensions and/or information regarding file types of different objects storing records. As another example, the formatting data can indicate schemas and/or information regarding how various records are compressed, encoded, encrypted, etc. in respective objects. As another example, the formatting data can indicate a mapping of tags to respective formatting, where the tags implemented via object tagging, and are thus included in object metadata of respective objects having the respective formatting. The formatting data can optionally include a mapping of different objects 2562 to respective formatting data (e.g. file type of different objects, how the data is compressed/encoded/encrypted, and/or information regarding how their data is extractable). Different objects can be formatted differently to store respective sets of records/portions of one or more records in a different arrangement/structuring. 
In some embodiments, the formatting data 3351 can indicate/be based on object type data 3234 of different objects (e.g. the object type data 3234 of each object accessed FIG. 31A is determined based on the configuration data 3310, and the correct type-based read module 3235 is selected for accessing each relevant object based on the configuration data 3310). [FIG.25B-G] shows corresponding visual of the classification) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the prior methods with the addition of Gladwin in order to create a more efficient system via data organization methods (Gladwin [0006] As is further known, a computer may effectively extend its CPU by using “cloud computing” to perform one or more computing functions (e.g., a service, an application, an algorithm, an arithmetic logic function, etc.) on behalf of the computer. Further, for large services, applications, and/or functions, cloud computing may be performed by multiple cloud computing resources in a distributed manner to improve the response time for completion of the service, application, and/or function.[0161] The parallelized data input sub-system 11 also generates storage instructions regarding how sub-system 12 is to store the restructured data segments for efficient processing [201] improved independency, improved data storage, improved data retrieval, and/or improved data processing than the computing device OS.)
The combination does not explicitly teach: wherein the set of predefined join operations comprises one of a plurality of candidate sets of predefined join operations to retrieve the plurality of target data elements, and wherein the set of predefined join operations comprises a fewest number of predefined join operations of the plurality of candidate sets of predefined join operations. However, Liu teaches wherein the set of predefined join operations comprises one of a plurality of candidate sets of predefined join operations to retrieve the plurality of target data elements, and wherein the set of predefined join operations comprises a fewest number of predefined join operations of the plurality of candidate sets of predefined join operations; (Liu [0009] It can be learned, from the foregoing process, that the fields in the table whose fields participating in the theta join operation and used in the query statement meet the first preset condition are decomposed to obtain a plurality of first field groups, so that the theta join operation is implemented in steps in a form of the plurality of field groups. This can reduce a data amount of Cartesian product calculation during one join operation, greatly reduce network transmission overheads, computing overheads, and memory overheads, and improve execution efficiency.[0019] It can be learned from the foregoing process that when it is determined that the fields in the first subtype field group meet the second preset condition, it indicates that a problem of an explosively growing amount of data computation in a Cartesian product execution process still occurs when the theta join operation is performed on the fields in the first type field group.
Therefore, the fields in the first subtype field group are decomposed, to obtain the plurality of field groups, so that a quantity of fields that participate in the theta join operation is further reduced, and the data computation amount in the Cartesian product execution process is reduced.[0022] It can be learned, from the foregoing process, that the read data is grouped based on the to-be-built first field group and the to-be-built second field group in the execution plan, to form the field group data, and the join operation is performed on the field group data, so that a theta join operation is implemented in steps in a form of a plurality of field groups. This can reduce a data amount of Cartesian product calculation during one join operation, greatly reduce network transmission overheads, computing overheads, and memory overheads, and improve execution efficiency.[0053] It should be further noted that the first preset condition may be specifically: a quantity of fields in the table that are used in the query statement exceeds a first preset threshold; or storage overheads of the fields in the table that are used in the query statement exceed a first preset space threshold; or a quantity of fields in the table that are used in the query statement and that participate in the theta join operation exceeds a second preset threshold; or storage overheads of the fields in the table that are used in the query statement and that participate in the theta join operation exceed a second preset space threshold [53-59] elaborate on the matter) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined prior methods with the addition of Liu's join operations in order to improve the overall efficiency of the system (Liu [AB.] A control method for performing a multi-table join operation and a corresponding apparatus are disclosed.
Fields in a table whose fields participating in a theta join operation and used in the query statement meet a first preset condition are decomposed, to obtain a plurality of first field groups, so that the theta join operation can be implemented in steps in a form of the plurality of field groups. This can reduce a data amount of Cartesian product calculation during one join operation, greatly reduce network transmission overheads, computing overheads, and memory overheads, and improve execution efficiency [0009] It can be learned, from the foregoing process, that the fields in the table whose fields participating in the theta join operation and used in the query statement meet the first preset condition are decomposed to obtain a plurality of first field groups, so that the theta join operation is implemented in steps in a form of the plurality of field groups. This can reduce a data amount of Cartesian product calculation during one join operation, greatly reduce network transmission overheads, computing overheads, and memory overheads, and improve execution efficiency.[0019] It can be learned from the foregoing process that when it is determined that the fields in the first subtype field group meet the second preset condition, it indicates that a problem of an explosively growing amount of data computation in a Cartesian product execution process still occurs when the theta join operation is performed on the fields in the first type field group. 
Therefore, the fields in the first subtype field group are decomposed, to obtain the plurality of field groups, so that a quantity of fields that participate in the theta join operation is further reduced, and the data computation amount in the Cartesian product execution process is reduced.[0022] It can be learned, from the foregoing process, that the read data is grouped based on the to-be-built first field group and the to-be-built second field group in the execution plan, to form the field group data, and the join operation is performed on the field group data, so that a theta join operation is implemented in steps in a form of a plurality of field groups. This can reduce a data amount of Cartesian product calculation during one join operation, greatly reduce network transmission overheads, computing overheads, and memory overheads, and improve execution efficiency.)
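For illustration only, the claim 1 limitations at issue, a mapping file comprising predefined join operations and classification information, together with dynamic selection of the candidate set having the fewest predefined join operations, can be sketched as follows. All table, field, and class names below are hypothetical and are not drawn from Kennis, Gladwin, or Liu:

```python
from itertools import combinations

# Hypothetical mapping file: predefined join operations, each defining a join
# between two tables and a corresponding join type, plus classification
# information associating data elements with element classes.
MAPPING_FILE = {
    "joins": [
        {"tables": ("VENDOR", "INVOICE"), "type": "inner"},
        {"tables": ("INVOICE", "PAYMENT"), "type": "left"},
        {"tables": ("VENDOR", "PAYMENT"), "type": "inner"},
    ],
    "classes": {
        "VENDOR.VENDOR_ID": "identifier",
        "PAYMENT.PAYEE_ID": "identifier",
        "INVOICE.AMOUNT": "currency",
    },
}

def connected(subset, targets):
    """Check that the chosen joins link all target tables into one component."""
    seen = {next(iter(targets))}
    changed = True
    while changed:
        changed = False
        for join in subset:
            a, b = join["tables"]
            if (a in seen) != (b in seen):
                seen |= {a, b}
                changed = True
    return targets <= seen

def candidate_join_sets(joins, targets):
    """Enumerate candidate sets of predefined joins covering the target tables."""
    for size in range(1, len(joins) + 1):
        for subset in combinations(joins, size):
            if connected(subset, targets):
                yield list(subset)

def select_fewest(joins, targets):
    """Dynamically select the candidate set with the fewest join operations."""
    return min(candidate_join_sets(joins, targets), key=len)

chosen = select_fewest(MAPPING_FILE["joins"], {"VENDOR", "PAYMENT"})
```

With target tables VENDOR and PAYMENT, the single direct VENDOR-PAYMENT join is selected over the two-join path through INVOICE, mirroring the fewest-joins limitation.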
Corresponding system claim 29 is rejected similarly as claim 1 above. Additional Limitations: Device with processor(s) and memory (Kennis [FIG.1] contains corresponding hardware components [0186] Those skilled in the art will also appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, networked PCs, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired or wireless links) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.)
Corresponding method claim 30 is rejected similarly as claim 1 above.
Regarding claim 2, Kennis, Gladwin and Liu teach The computer-program product of claim 1, wherein, to dynamically select the set of predefined join operations from the plurality of predefined join operations of the mapping file, the one or more processor devices are to: identify a plurality of target tables from the plurality of tables, each of the plurality of target tables storing one or more of the plurality of target data elements; (Kennis [259] a target or destination table identifier in the staging database <staging_table>, one or more key fields <key_field> that identify keys to table(s) that are to be extracted, and one or more data fields <field> that identify particular data fields that are to be extracted. If desired, filters and queries can be embedded into the file to filter or retrieve particular data items. [0293] The mapper is responsive to an ontology source table 2111 to select only the needed data items from the source database in table 2101 and store those items as identified by the field names shown in the target table...[294] the selected subset of information from the source table 2101, and as was previously described, predetermined metadata is added to each entity created and stored, e.g. Revision_ID, Entity_ID, Entity_Version, Actor_ID, Update_Time. It will be further understood that the mapper 150 is responsive to a stored ontology mapping predetermined source table data fields to predetermined target table data fields [295] identify the source tables, entity names, target tables, and other required information. 
For example, the mapping file 2200 includes entity definitions delimited by entity tags <entity>, with associated information identifying a data source <source></source>, table names <name>, any database join operations that might be required to obtain the required data from multiple tables <join></join>, key fields that might be required <key></key>, field names within tables <field></field>, and a mapping of the corresponding filed name and table, e.g. a field with the name VENDOR_ID obtains its data from the source table and field VAB.ABALPH, as shown in the figure....[FIG.1] shows a corresponding visual of the system ) and select a first predefined join operation of the set of predefined join operations, wherein the first predefined join operation joins a first target table and a second target table of the plurality of target tables, wherein the first target table stores a first target data element of the plurality of target data elements, and wherein the second target table stores a second target data element of the plurality of target data elements. (Kennis [FIG.4&19] clearly show that there can be a plurality of target data tables and corresponding plurality of target data elements for each table [294] the selected subset of information from the source table 2101, and as was previously described, predetermined metadata is added to each entity created and stored, e.g. Revision_ID, Entity_ID, Entity_Version, Actor_ID, Update_Time. It will be further understood that the mapper 150 is responsive to a stored ontology mapping predetermined source table data fields to predetermined target table data fields [295] identify the source tables, entity names, target tables, and other required information. 
For example, the mapping file 2200 includes entity definitions delimited by entity tags <entity>, with associated information identifying a data source <source></source>, table names <name>, any database join operations that might be required to obtain the required data from multiple tables <join></join>, key fields that might be required <key></key>, field names within tables <field></field>, and a mapping of the corresponding filed name and table, e.g. a field with the name VENDOR_ID obtains its data from the source table and field VAB.ABALPH, as shown in the figure....[FIG.1] shows a corresponding visual of the system )
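Purely as an illustrative sketch (the table and column names below are hypothetical, not taken from the cited references), the claim 2 steps, identifying target tables from the selected target data elements and then selecting a first predefined join that joins two of those target tables, can be written as:

```python
# Hypothetical predefined join operations: (left table, right table, join type).
PREDEFINED_JOINS = [
    ("GL_ENTRY", "VENDOR", "left"),
    ("VENDOR", "INVOICE", "inner"),
]

def target_tables(target_elements):
    """Each target data element is qualified as TABLE.COLUMN; take the table part."""
    return {elem.split(".")[0] for elem in target_elements}

def first_join(joins, tables):
    """Select the first predefined join whose two tables are both target tables."""
    for left, right, join_type in joins:
        if left in tables and right in tables:
            return (left, right, join_type)
    return None

tables = target_tables({"VENDOR.VENDOR_ID", "INVOICE.AMOUNT"})
join = first_join(PREDEFINED_JOINS, tables)  # ("VENDOR", "INVOICE", "inner")
```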
Regarding claim 3, Kennis, Gladwin and Liu teach The computer-program product of claim 2, wherein, to select the first predefined join operation of the set of predefined join operations, the one or more processor devices are to: select the first predefined join operation of the set of predefined join operations based on a relational logic portion of the mapping file, wherein the relational logic portion defines an existing direct relationship between the first target table and the second target table. (Kennis [0026] The mapping file identifies specific tables and column in a schema of a data source, and corresponding specific tables and columns in a schema of the monitoring database. [0027] In one embodiment, the compliance policy statements comprise logical expressions for evaluating data stored in the monitoring database against predetermined requirements, and indicators that represent the resolution of a logical expression...[0238] The ConfigureMapper routine includes steps for specifying parameters and aspects of the staging database 155, the monitoring database 175, the ontology files, entities involved or identified in an ontology, mappings of data items for entities, and other contexts and parameters. Configuration of the mapper as utilized in the present invention comprises identification of particular data tables and fields of data provided by an extraction from a monitored database, and maintenance of the relationship between such particular tables and fields of the monitored database with corresponding tables and fields of the monitoring database. Such information is reflected by and stored in the enterprise ontology. The ontology in particular represents the mapping of fields from monitoring databases into particular selected fields of the monitoring database. 
[0290]-[0297] provide further detail on a relational logic portion of the mapping file, wherein the relational logic portion defines an existing direct relationship between the first target table and the second target table.)
Regarding claim 4, Kennis, Gladwin and Liu teach The computer-program product of claim 2, wherein, to dynamically select the set of predefined join operations from the plurality of predefined join operations of the mapping file, the one or more processor devices are further to: select a second predefined join operation and a third predefined join operation of the set of predefined join operations, wherein the second predefined join operation joins the first target table to a non-target table of the plurality of tables, wherein the third predefined join operation joins the non-target table to a third target table of the plurality of target tables, and wherein the third target table stores a third target data element of the plurality of target data elements. (Kennis [FIG.4&19] clearly show that there can be a plurality of target data tables and a corresponding plurality of target data elements for each table. [FIG.21&22] show a corresponding mapping file with a plurality of predefined operations. [0294] the selected subset of information from the source table 2101, and as was previously described, predetermined metadata is added to each entity created and stored, e.g. Revision_ID, Entity_ID, Entity_Version, Actor_ID, Update_Time. It will be further understood that the mapper 150 is responsive to a stored ontology mapping predetermined source table data fields to predetermined target table data fields [0295] identify the source tables, entity names, target tables, and other required information. For example, the mapping file 2200 includes entity definitions delimited by entity tags <entity>, with associated information identifying a data source <source></source>, table names <name>, any database join operations that might be required to obtain the required data from multiple tables <join></join>, key fields that might be required <key></key>, field names within tables <field></field>, and a mapping of the corresponding field name and table, e.g. a field with the name VENDOR_ID obtains its data from the source table and field VAB.ABALPH, as shown in the figure... [FIG.1] shows a corresponding visual of the system)
Regarding claim 5, Kennis, Gladwin and Liu teach The computer-program product of claim 4, wherein, to select the second predefined join operation and the third predefined join operation of the set of predefined join operations, the one or more processor devices are to: obtain a first candidate route from the first target table to the third target table, wherein the first candidate route comprises the second predefined join operation and the third predefined join operation of the set of predefined join operations; obtain a second candidate route from the first target table to the third target table different than the first candidate route; (Kennis [FIG.1] shows overall flow of the steps [0237] The ConfigureExtractor routine includes steps for specifying enterprise systems or other data sources from which data is acquired or provided, as well as specifying primary key fields, field identifiers, filters, context, and parameters ... From FIG. 1.. [0238] The ConfigureMapper routine includes steps for specifying parameters and aspects of the staging database 155, the monitoring database 175, the ontology files, entities involved or identified in an ontology, mappings of data items for entities, and other contexts and parameters...[0239] The ConfigureCore routine includes steps for defining a set of policy statements or frames...[FIG.21&22] show a corresponding mapping file with a plurality of predefined operations that can serve as "routes"). It is important to note that the examiner interprets "route" to mean a plan/routine of steps. The claim further recites: and select the first candidate route based on one or more route selection criteria
(Kennis [0146] Frame: a computer-executable logical representation of a rule or set of rules, determined by a User (typically an Administrator type user) responsible for establishing compliance monitoring processes to implement a Policy, as applied to data or information reflecting one or more transactions or one or more data items of transactions.[0238] The ConfigureMapper routine includes steps for specifying parameters and aspects of the staging database 155, the monitoring database 175, the ontology files, entities involved or identified in an ontology, mappings of data items for entities, and other contexts and parameters...[0239] The ConfigureCore routine includes steps for defining a set of policy statements or frames...[FIG.1] shows overall flow of the system)
Regarding claim 6, Kennis, Gladwin and Liu teach The computer-program product of claim 5, wherein the first candidate route comprises a pre-defined route defined by the mapping file. (Kennis [FIG.21&22] show a corresponding mapping file with a plurality of predefined operations that can serve as "routes" [0201] The mapping data (e.g. in the form of mapping files and ontology files) establishes relationships between monitoring entities stored in the monitoring database and monitored entities from the ERP databases. A principal function of the mapper 150 is to transform data from various and disparate (and possibly heterogeneous) data sources into a shared schema or ontology, so that an analysis engine can examine and correlate data across the disparate systems and facilitate the preparation of policy statements that consider information from different data sources.)
Regarding claim 7, Kennis, Gladwin and Liu teach The computer-program product of claim 5, wherein, to obtain the first candidate route, the one or more processor devices are to: compute the first candidate route based on the mapping file. (Kennis [FIG.21&22] show a corresponding mapping file with a plurality of predefined operations that can serve as "routes" with joining operations [0201] The mapping data (e.g. in the form of mapping files and ontology files) establishes relationships between monitoring entities stored in the monitoring database and monitored entities from the ERP databases. A principal function of the mapper 150 is to transform data from various and disparate (and possibly heterogeneous) data sources into a shared schema or ontology, so that an analysis engine can examine and correlate data across the disparate systems and facilitate the preparation of policy statements that consider information from different data sources. [0295] the mapping file 2200 includes entity definitions delimited by entity tags <entity>, with associated information identifying a data source <source></source>, table names <name>, any database join operations that might be required to obtain the required data from multiple tables <join></join>, key fields that might be required <key></key>, field names within tables <field></field>, and a mapping ...)
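As a non-limiting illustration of what "compute the first candidate route based on the mapping file" could entail, the sketch below treats the join definitions read from a mapping file as edges of a graph and finds a route (a sequence of joins) between two tables by breadth-first search. The table names and join pairs are hypothetical assumptions for illustration; this is not a representation of the reference's actual implementation.

```python
from collections import deque

# Hypothetical predefined joins as they might be read from a mapping
# file: each pair is a join between two tables.
JOINS = [("VENDOR", "INVOICE"), ("INVOICE", "PAYMENT"), ("VENDOR", "ADDRESS")]

def compute_route(start, goal, joins):
    """Return a list of joins forming a route from start to goal,
    or None if no route exists."""
    # Build an undirected adjacency map from the join pairs.
    graph = {}
    for a, b in joins:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        table, path = queue.popleft()
        if table == goal:
            return path
        for nxt in graph.get(table, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(table, nxt)]))
    return None

route = compute_route("VENDOR", "PAYMENT", JOINS)
# route is the two-join path VENDOR -> INVOICE -> PAYMENT
```

Because BFS explores by path length, the first route found uses the fewest joins, which aligns with one plausible reading of computing a candidate route from mapping-file relationships.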
Regarding claim 8, Kennis, Gladwin and Liu teach The computer-program product of claim 5, wherein the one or more route selection criteria comprises at least one of: a quantity of join operations included in a route; contents of one or more tables joined by one or more predefined joins of the set of predefined join operations selected prior to selection of the first candidate route; an estimated computational complexity associated with the route; a size of each table joined by the join operations included in the route; or a quantity of non-target tables joined by the join operations included in the route. (Kennis [0018] This complex analysis requires a combination of domain engineering, automated link analysis, behavior, deductive analysis, and standard business intelligence...[0286] this architecture minimizes the complexity and computational load on any component that runs remotely from the TIM system 100. Furthermore, the partition of functionality into logical steps of extraction, caching, and mapping provides for an architectural separation of functions and asynchronous operation to improve performance [0295] the mapping file 2200 includes entity definitions delimited by entity tags <entity>, with associated information identifying a data source <source></source>, table names <name>, any database join operations that might be required to obtain the required data from multiple tables <join></join>, key fields that might be required <key></key>, field names within tables <field></field>, and a mapping ...[FIG.21&22] show a corresponding mapping file with a plurality of predefined operations that can serve as "routes" with joining operations)
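The route selection criteria recited in claim 8 (quantity of joins, table sizes, estimated computational complexity, etc.) lend themselves to a simple cost function over candidate routes. The sketch below scores each candidate route and selects the cheapest; the weighting scheme and table sizes are purely illustrative assumptions and are not drawn from the prior art.

```python
# Hypothetical table sizes (row counts) that such a system might consult.
TABLE_SIZES = {"VENDOR": 1_000, "INVOICE": 50_000, "PAYMENT": 40_000,
               "ADDRESS": 2_000, "LEDGER": 500_000}

def route_cost(route, table_sizes):
    """Score a route (a list of (left, right) joins): fewer joins and
    smaller joined tables yield a lower cost. The 10_000 weight on the
    join count is an arbitrary illustrative choice."""
    join_count = len(route)
    size_cost = sum(table_sizes.get(a, 0) + table_sizes.get(b, 0)
                    for a, b in route)
    return join_count * 10_000 + size_cost

def select_route(candidates, table_sizes):
    """Pick the candidate route with the lowest cost."""
    return min(candidates, key=lambda r: route_cost(r, table_sizes))

direct = [("VENDOR", "PAYMENT")]
indirect = [("VENDOR", "INVOICE"), ("INVOICE", "PAYMENT")]
best = select_route([indirect, direct], TABLE_SIZES)
# best is the single-join (direct) route
```

Any of the claimed criteria could be folded into such a cost function; the point of the sketch is only that "select the first candidate route based on one or more route selection criteria" maps naturally onto a minimum-cost comparison.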
Regarding claim 9, Kennis, Gladwin and Liu teach The computer-program product of claim 4, wherein the first target table comprises: a pre-derived table defined by the mapping file; or a materialized view table stored to the mapping file. (Kennis [FIG.15&19] clearly show that there can be a plurality of pre-derived tables corresponding to the mapping file data [0024] files includes information identifying a data source containing information for utilization in the policy compliance monitoring system, access protocols for the data source, and predetermined tables and columns of tables of the data source.[0291] The mapping table or enterprise ontology, it will be recalled from earlier discussion, stores information and establishes the mapping between predetermined tables, fields and parameters of a source (monitored) database and corresponding tables, fields and parameters in the monitoring database.)
Regarding claim 10, Kennis, Gladwin and Liu teach The computer-program product of claim 1, wherein the structured data object comprises a temporary in-database view for the relational database. (Kennis [0106] FIG. 39 is an exemplary UI screen view of an exception discovered by link analysis that relates information of a vendor in the AP database and an employee in the human resource database. [0198] An extractor 140 is operative to interface with the various data sources such as monitored databases 120 and retrieve, be provided, or otherwise obtain data from such data sources and monitored databases. Extracted data from extractor 140 is stored in a staging database 155, which temporarily stores data so that the TIM system can operate out of band with respect to enterprise applications [FIG.5,11, & 19] show a few examples of in-database views which are temporary as part of the invention's processes)
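For context on the claim 10 limitation, a "temporary in-database view" can be sketched concretely with SQLite, where a TEMP view exists only for the lifetime of the connection. The schema and data below are hypothetical and serve only to illustrate the concept, not the reference's system.

```python
import sqlite3

# In-memory relational database with a hypothetical vendor table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vendor (vendor_id TEXT, amount REAL)")
conn.executemany("INSERT INTO vendor VALUES (?, ?)",
                 [("V1", 100.0), ("V2", 250.0)])

# A TEMP view is a temporary in-database view: it is queryable like a
# table but is discarded when this connection closes.
conn.execute("CREATE TEMP VIEW big_vendors AS "
             "SELECT vendor_id FROM vendor WHERE amount > 200")

rows = conn.execute("SELECT vendor_id FROM big_vendors").fetchall()
conn.close()
```

The structured data object of the claim could plausibly be realized this way: the view packages the selected target data elements without materializing a new permanent table.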
Regarding claim 16, Kennis, Gladwin and Liu teach The computer-program product of claim 1, wherein, to receive the input indicative of the selection of the plurality of target data elements, the one or more processor devices are to: receive, via a user interface, user input information indicative of selection of the plurality of target data elements from the plurality of data elements of the plurality of tables within the relational database. (Kennis [FIG.1] shows the system, which uses a user interface to receive user input for selection of the plurality of target data elements from the plurality of data elements of the plurality of tables within the relational database. [0034] providing a user interface for allowing user access to and modification of the extractor files, the normalizing files, and the policy statements, for customization and configuration [0240] The ConfigureWorkbench routine includes steps for configuring access to the monitoring database (to obtain related entities), setting usernames and permissions, and configuring any reporting functions of the system. This routine enables a user to input or correct information [0367] FIG. 44 illustrates an exemplary case management user interface screen 4400 with the Status subsidiary tab selected. In this particular display, a user may activate a selector box to select a...
Regarding claim 17, Kennis, Gladwin and Liu teach The computer-program product of claim 16, wherein the user input information is further indicative of selection of one or more pre-built filters of a plurality of pre-built filters of the mapping file. (Kennis [0237] The ConfigureExtractor routine includes steps for specifying enterprise systems or other data sources from which data is acquired or provided, as well as specifying primary key fields, field identifiers, filters, context, and parameters of accessing remote data sources. From FIG. 1, it will be recalled that the extractor 140 may be of various types, including a programmatic extractor 141a, master extractor 141b, a resync extractor 141c, a log extractor 141d, environmental source extractor 141e, or an external source extractor 141f. [0259] Each data extractor 141 makes reference to extractor data in the form of an extractor file stored in the knowledge base, for information specific to the data source from which data is extracted. In this regard, FIG. 9 illustrates an exemplary extractor file 900 according to aspects of the invention. Typically, each extractor file provides predetermined information needed to (a) access a particular data source, and (b) determine what specific data from that data source is to be obtained, i.e. what particular fields from what particular tables, and (c) where that data is to be stored or cached in the staging database. Thus, the exemplary extractor file 900 includes parameters or tags for a description of the extraction <description>, identification of the data source <extractor_name>, a source table identifier in the data source <source_table>, a target or destination table identifier in the staging database <staging_table>, one or more key fields <key_field> that identify keys to table(s) that are to be extracted, and one or more data fields <field> that identify particular data fields that are to be extracted. 
If desired, filters and queries can be embedded into the file to filter or retrieve particular data items. [FIG.9 & 21-23] show a corresponding visual of the applied filters)
Regarding claim 18, Kennis, Gladwin and Liu teach The computer-program product of claim 17, wherein, to output the structured data object comprising the plurality of target data elements, the one or more processor devices are to: apply the one or more pre-built filters to the structured data object. (Kennis [0237] The ConfigureExtractor routine includes steps for specifying enterprise systems or other data sources from which data is acquired or provided, as well as specifying primary key fields, field identifiers, filters, context, and parameters of accessing remote data sources. From FIG. 1, it will be recalled that the extractor 140 may be of various types, including a programmatic extractor 141a, master extractor 141b, a resync extractor 141c, a log extractor 141d, environmental source extractor 141e, or an external source extractor 141f. [0259] Each data extractor 141 makes reference to extractor data in the form of an extractor file stored in the knowledge base, for information specific to the data source from which data is extracted. In this regard, FIG. 9 illustrates an exemplary extractor file 900 according to aspects of the invention. Typically, each extractor file provides predetermined information needed to (a) access a particular data source, and (b) determine what specific data from that data source is to be obtained, i.e. what particular fields from what particular tables, and (c) where that data is to be stored or cached in the staging database. Thus, the exemplary extractor file 900 includes parameters or tags for a description of the extraction <description>, identification of the data source <extractor_name>, a source table identifier in the data source <source_table>, a target or destination table identifier in the staging database <staging_table>, one or more key fields <key_field> that identify keys to table(s) that are to be extracted, and one or more data fields <field> that identify particular data fields that are to be extracted. 
If desired, filters and queries can be embedded into the file to filter or retrieve particular data items. [FIG.9 & 21-23] show a corresponding visual of the applied filters)
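As an illustrative reading of "apply the one or more pre-built filters to the structured data object" in claim 18, the sketch below applies filter predicates, as they might be declared in an extractor or mapping file, to a list of records. All field names, operators, and values here are hypothetical assumptions, not content from the reference.

```python
# Hypothetical pre-built filters as they might be declared in a file:
# each is a (field, operator, comparison value) triple.
PREBUILT_FILTERS = [("status", "==", "OPEN"), ("amount", ">", 100)]

# Supported comparison operators for the filter predicates.
OPS = {"==": lambda a, b: a == b,
       ">":  lambda a, b: a > b,
       "<":  lambda a, b: a < b}

def apply_filters(records, filters):
    """Keep only records satisfying every (field, op, value) filter."""
    out = []
    for rec in records:
        if all(OPS[op](rec.get(field), value)
               for field, op, value in filters):
            out.append(rec)
    return out

records = [{"status": "OPEN", "amount": 250},
           {"status": "CLOSED", "amount": 300},
           {"status": "OPEN", "amount": 50}]
filtered = apply_filters(records, PREBUILT_FILTERS)
# only the first record satisfies both filters
```

The sketch shows the shape of the operation only: declared filters are evaluated conjunctively against each record of the structured data object.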
Regarding claim 19, Kennis, Gladwin and Liu teach The computer-program product of claim 16, wherein, to receive the user input information via the user interface, the one or more processor devices are to: dynamically construct the user interface based at least in part on the mapping file, wherein the user interface comprises a plurality of selectable interface elements; cause display of the user interface; and responsive to causing display of the user interface, receive the user input information via the user interface. (Kennis [0034] providing a user interface for allowing user access to and modification of the extractor files, the normalizing files, and the policy statements, for customization and configuration. [0181] UI: User Interface. Typically means a software Application with which a User interacts for purposes of entering information, obtaining information, or causing functions of an associated system to execute [0357] FIG. 40 is an illustration of an exemplary case management user interface (UI) display screen associated with the analysis and reporting UI 180. This display is generated for users that have a number of exceptions assigned to them for handling and/or investigation and/or disposition. As in previous displays, a region 4000 is provided for a display of a number of exceptions, by ID, name, priority, owner, etc. A particular exception, GHOST_VENDOR-105302000364, is shown highlighted as having been selected by a user.[FIG.35 and 45] show the user interface dynamically created for user input)
Regarding claim 20, Kennis, Gladwin and Liu teach The computer-program product of claim 19, wherein, to dynamically construct the user interface based at least in part on the mapping file, the one or more processor devices are to: based on the classification information of the mapping file, dynamically construct a first portion and a second portion of the user interface, wherein: the first portion of the user interface comprises a first subset of the plurality of selectable interface elements that represent a corresponding first subset of target data elements of the plurality of target data elements, wherein the first subset of target data elements is associated with a first element class of the plurality of element classes; (Kennis [FIG.45] shows the user interface with a plurality of portions dedicated to different functions related to target data [0357] FIG. 40 is an illustration of an exemplary case management user interface (UI) display screen associated with the analysis and reporting UI 180. This display is generated for users that have a number of exceptions assigned to them for handling and/or investigation and/or disposition. As in previous displays, a region 4000 is provided for a display of a number of exceptions, by ID, name, priority, owner, etc. A particular exception, GHOST_VENDOR-105302000364, is shown highlighted as having been selected by a user. The summary tab in a display region 4010 provides a display of particular information associated with the selected exception. In this case, information associated with the "ghost vendor" exception includes the exception name at 4002, a priority 4003, a potential impact 4004, a case manager assigned to the exception 4005, a confidence value 4006, and status information 4007, e.g. "Under Review." Other information such as secondary case managers 4012 are shown, as well as a scheme display and system display [0368] FIG. 
45 illustrates an exemplary case management user interface screen that allows assignment of an exception to a particular user as investigator or case manager. A particular exception 4504 is shown selected, identified as VOUCHER_LINE_TO_DUPLICATE_PO-10503200000793. The Summary tab is shown selected. The display area associated with the Summary tab shows a number of data items or fields associated with this exception, including the Exception ID, Owner 4506, Status 4508, Priority (calculated as described elsewhere), Confidence (as described elsewhere), Category [FIG.35-44] show all the different parts and forms of the user interface, which show and elaborate on the different classes and the plurality of target elements) and the second portion of the user interface comprises a second subset of the plurality of selectable interface elements that represent a corresponding second subset of target data elements of the plurality of target data elements, wherein the second subset of target data elements is associated with a second element class of the plurality of element classes different than the first element class. (Kennis [FIG.45] shows the user interface with a plurality of portions dedicated to different functions related to target data [0357] FIG. 40 is an illustration of an exemplary case management user interface (UI) display screen associated with the analysis and reporting UI 180. This display is generated for users that have a number of exceptions assigned to them for handling and/or investigation and/or disposition. As in previous displays, a region 4000 is provided for a display of a number of exceptions, by ID, name, priority, owner, etc. A particular exception, GHOST_VENDOR-105302000364, is shown highlighted as having been selected by a user. The summary tab in a display region 4010 provides a display of particular information associated with the selected exception.
In this case, information associated with the "ghost vendor" exception includes the exception name at 4002, a priority 4003, a potential impact 4004, a case manager assigned to the exception 4005, a confidence value 4006, and status information 4007, e.g. "Under Review." Other information such as secondary case managers 4012 are shown, as well as a scheme display and system display [0368] FIG. 45 illustrates an exemplary case management user interface screen that allows assignment of an exception to a particular user as investigator or case manager. A particular exception 4504 is shown selected, identified as VOUCHER_LINE_TO_DUPLICATE_PO-10503200000793. The Summary tab is shown selected. The display area associated with the Summary tab shows a number of data items or fields associated with this exception, including the Exception ID, Owner 4506, Status 4508, Priority (calculated as described elsewhere), Confidence (as described elsewhere), Category [FIG.35-44] show all the different parts and forms of the user interface, which show and elaborate on the different classes and the plurality of target elements)
Regarding claim 21, Kennis, Gladwin and Liu teach The computer-program product of claim 20, wherein, to generate the unit of software instructions that, when executed, implements the set of predefined join operations, the one or more processor devices are further to: cause display of a preview interface element within the user interface that depicts the unit of software instructions. (Kennis [FIG.45] shows the user interface with a preview interface element that depicts the unit of software instructions, into which the user can provide input. [0263] a scripting language that allows users to write programs to cause the export of selected data for external use. The SAP system and other similar systems output information in response to internal execution of such a script or program. The programmatic extractor 141a therefore represents the combination of (a) a scripting element operative internally to systems such as SAP that do not provide direct data querying, for internal retrieval of selected data and exporting such selected data, (b) a communications interface or file transfer mechanism for communicating data exported, and (c) a software component [0265] An open SQL interface is available to ABAP programs as an internal API.)
Regarding claim 22, Kennis, Gladwin and Liu teach The computer-program product of claim 21, wherein the preview interface element comprises a text editor interface element configured to receive user inputs to modify the unit of software instructions. (Kennis [FIG.1] shows the overall system, in which the user inputs modifications to the unit of software instructions through user interface components. [FIG.45] shows the user interface with a preview interface element that depicts the unit of software instructions, into which the user can provide input. [0263] a scripting language that allows users to write programs to cause the export of selected data for external use. The SAP system and other similar systems output information in response to internal execution of such a script or program. The programmatic extractor 141a therefore represents the combination of (a) a scripting element operative internally to systems such as SAP that do not provide direct data querying, for internal retrieval of selected data and exporting such selected data, (b) a communications interface or file transfer mechanism for communicating data exported, and (c) a software component [0265] An open SQL interface is available to ABAP programs as an internal API.)
Regarding claim 23, Kennis, Gladwin and Liu teach The computer-program product of claim 22, wherein, to cause display of the preview interface element within the user interface, the one or more processor devices are further to: receive modification input information obtained via the preview interface element, wherein the modification input information is descriptive of one or more modifications to the unit of software instructions; and wherein, to execute the unit of software instructions to retrieve the plurality of target data elements, the one or more processor devices are to: apply the one or more modifications to the unit of software instructions. (Kennis [FIG.1] shows the overall system, in which the user inputs modifications to the unit of software instructions through user interface components. [FIG.45] shows the user interface with a preview interface element that depicts the unit of software instructions, into which the user can provide input. [0192] Users 101 of the TIM system interact with the system via a user interface (UI) comprising a personal computer or terminal and associated display for configuring the system, constructing and maintaining the information such as policy statements, ontology mappings, extraction requirements, etc. [0263] a scripting language that allows users to write programs to cause the export of selected data for external use. The SAP system and other similar systems output information in response to internal execution of such a script or program. The programmatic extractor 141a therefore represents the combination of (a) a scripting element operative internally to systems such as SAP that do not provide direct data querying, for internal retrieval of selected data and exporting such selected data, (b) a communications interface or file transfer mechanism for communicating data exported, and (c) a software component [0265] An open SQL interface is available to ABAP programs as an internal API.)
Regarding claim 24, Kennis, Gladwin and Liu teach The computer-program product of claim 19, wherein, to cause display of the user interface, the one or more processor devices are further to: cause display of a pause interface element within the user interface configured to pause execution of the unit of software instructions; (Kennis [0239] The ConfigureCore routine includes steps for defining a set of policy statements or frames, including identifying the transaction involved in the policy, any required support entities, any indicators of the policy, and other frame or policy parameters. This routine enables a user to access stored policy statements (as expressed in XML frames in the disclosed embodiment) so as to create new statements, update existing policy statements, activate or deactivate particular statements, change the sequence of statement (frame) execution, and provide any other required administrative functions for determining or modifying the logic or expressions associated with frame execution.[0333] At runtime, and as shown at 3140, the preferred CORE process 160 retrieves a set of frames from the knowledge base 165 and executes the frames in a predetermined sequence. In the event that a particular frame should not execute, it would possess a <frameoff> tag. Thus, it will be seen in the runtime sequence at 3140 that base frame 1 3002 executes, custom frame 2 3114 executes, base frame 3 3106 executes, etc., while all frames possessing a <frameoff> tag are not executed.
In this manner, a predetermined set of base frames may be called and executed, may be selectively turned off, and may be modified so as to reflect particular circumstances of a particular enterprise and execute in place of a different base frame, or new frames may be created, as desired by a system administrator.[FIG.48] shows corresponding visual) and wherein, to execute the unit of software instructions to retrieve the plurality of target data elements, the one or more processor devices are to: execute the unit of software instructions to retrieve the plurality of target data elements; (Kennis [0259] Each data extractor 141 makes reference to extractor data in the form of an extractor file stored in the knowledge base, for information specific to the data source from which data is extracted. In this regard, FIG. 9 illustrates an exemplary extractor file 900 according to aspects of the invention. Typically, each extractor file provides predetermined information needed to (a) access a particular data source, and (b) determine what specific data from that data source is to be obtained, i.e. what particular fields from what particular tables, and (c) where that data is to be stored or cached in the staging database. Thus, the exemplary extractor file 900 includes parameters or tags for a description of the extraction <description>, identification of the data source <extractor_name>, a source table identifier in the data source <source_table>, a target or destination table identifier in the staging database <staging_table>, one or more key fields <key_field> that identify keys to table(s) that are to be extracted, and one or more data fields <field> that identify particular data fields that are to be extracted. 
If desired, filters and queries can be embedded into the file to filter or retrieve particular data items. [0282] the mapper for its operations is provided in an ontology target table 1912 that provides information that identifies field names and parameters of the data, after mapping, as the data is stored in the monitoring database. The ontology target table 1912 also identifies what metadata 1915 is associated with each entity provided by an extraction. [0283]-[0285] & [0293]-[0297] elaborate on the matter) receive subsequent user input information obtained via the pause interface element of the user interface, wherein the subsequent user input information is indicative of selection of the pause interface element; and responsive to the subsequent user input information, pause execution of the unit of software instructions. (Kennis [0239] The ConfigureCore routine includes steps for defining a set of policy statements or frames, including identifying the transaction involved in the policy, any required support entities, any indicators of the policy, and other frame or policy parameters. This routine enables a user to access stored policy statements (as expressed in XML frames in the disclosed embodiment) so as to create new statements, update existing policy statements, activate or deactivate particular statements, change the sequence of statement (frame) execution, and provide any other required administrative functions for determining or modifying the logic or expressions associated with frame execution.[0333] At runtime, and as shown at 3140, the preferred CORE process 160 retrieves a set of frames from the knowledge base 165 and executes the frames in a predetermined sequence. In the event that a particular frame should not execute, it would possess a <frameoff> tag.
Thus, it will be seen in the runtime sequence at 3140 that base frame 1 3002 executes, custom frame 2 3114 executes, base frame 3 3106 executes, etc., while all frames possessing a <frameoff> tag are not executed. In this manner, a predetermined set of base frames may be called and executed, may be selectively turned off, and may be modified so as to reflect particular circumstances of a particular enterprise and execute in place of a different base frame, or new frames may be created, as desired by a system administrator. [FIG. 48] shows the corresponding visual.)
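For illustration only (not part of the record): the extractor-file structure that Kennis [0259] describes can be sketched as follows. The tag names (<description>, <extractor_name>, <source_table>, <staging_table>, <key_field>, <field>) come from the quoted paragraph; the sample values and the parsing code are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical extractor file using the tags named in Kennis [0259] (FIG. 9).
# The element values below are invented for illustration only.
EXTRACTOR_XML = """
<extractor>
  <description>Extract vendor master records</description>
  <extractor_name>erp_vendor_source</extractor_name>
  <source_table>VENDOR_MASTER</source_table>
  <staging_table>STG_VENDOR</staging_table>
  <key_field>VENDOR_ID</key_field>
  <field>VENDOR_NAME</field>
  <field>BANK_ACCOUNT</field>
</extractor>
"""

root = ET.fromstring(EXTRACTOR_XML)
source = root.findtext("source_table")    # (a) table to read in the data source
staging = root.findtext("staging_table")  # (c) destination table in the staging database
keys = [k.text for k in root.findall("key_field")]    # keys to the table(s) being extracted
fields = [f.text for f in root.findall("field")]      # (b) particular data fields to obtain
print(source, staging, keys, fields)
```

A data extractor reading such a file would thus know where to connect, which fields to pull, and where to cache them, consistent with items (a)-(c) of the quoted paragraph.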
Regarding claim 25, Kennis, Gladwin and Liu teach The computer-program product of claim 17, wherein, to receive the input indicative of the selection of the plurality of target data elements, the one or more processor devices are further to: select the mapping file from a plurality of candidate mapping files based on at least one of: a type of content associated with the plurality of tables within the relational database; (Kennis [0161] schema (e.g. of an enterprise database) into data items ... changing the data type, etc. See also Mapping, Ontology. E.g. Data items may be normalized by mapping them into a different naming and data storage schema, in accordance with ontology. [0201] A mapper 150 is operative to retrieve data from the staging database 155 and normalize, transform or map that information into a predetermined format to comprise monitoring entities, which are then stored in a monitoring database 175. The monitoring database 175 stores monitoring entities, both support and transactional, identified by table and field names in accordance with mapping data stored in a knowledge base 165. The mapping data (e.g. in the form of mapping files and ontology files) establishes relationships... [0248] target entity tables in the staging database ... Table names, fields, data types, and even values may have to be transformed according to the system ontology. [0280] FIG. 19 schematically illustrates the operation of the mapper 150 in accordance with certain aspects of the present invention. The mapper 150 operates in accordance with information stored in an ontology source table 1911 that comprises the enterprise ontology with respect to the particular data source involved in an extraction. The ontology source table 1911 contains information that correlates tables, fieldnames, field parameters, data types... [0296] FIG. 23 illustrates an exemplary ontology file 2300 according to aspects of the invention.
As with the mapping file 2200, the ontology file is utilized by the mapper...file also includes one or more field identifiers <field></field> that specify data fields of records in the monitoring database; each of these fields has a corresponding name, description, and type, as identified with corresponding tags.) one or more element classes associated with the plurality of target data elements of the plurality of element classes; or the plurality of target data elements. (Kennis [0259] Each data extractor 141 makes reference to extractor data in the form of an extractor file stored in the knowledge base, for information specific to the data source from which data is extracted. In this regard, FIG. 9 illustrates an exemplary extractor file 900 according to aspects of the invention. Typically, each extractor file provides predetermined information needed to (a) access a particular data source, and (b) determine what specific data from that data source is to be obtained, i.e. what particular fields from what particular tables, and (c) where that data is to be stored or cached in the staging database. Thus, the exemplary extractor file 900 includes parameters or tags for a description of the extraction <description>, identification of the data source <extractor_name>, a source table identifier in the data source <source_table>, a target or destination table identifier in the staging database <staging_table>, one or more key fields <key_field> that identify keys to table(s) that are to be extracted, and one or more data fields <field> that identify particular data fields that are to be extracted. If desired, filters and queries can be embedded into the file to filter or retrieve particular data items. [0282] the mapper for its operations is provided in an ontology target table 1912 that provides information that identifies field names and parameters of the data, after mapping, as the data is stored in the monitoring database.
The ontology target table 1912 also identifies what metadata 1915 is associated with each entity provided by an extraction. [0283]-[0285] and [0293]-[0297] elaborate on the matter)
Regarding claim 26, Kennis, Gladwin and Liu teach The computer-program product of claim 17, wherein, to receive the user input information via the user interface, the one or more processor devices are further to receive additional user input information descriptive of a user-created filter via the user interface; and wherein, to output the structured data object comprising the plurality of target data elements, the one or more processor devices are to: apply the user-created filter to the structured data object. (Kennis [FIG. 1] shows the system, which uses a user interface to receive user input (a plurality of inputs/a plurality of corresponding filters applied) for selection of the plurality of target data elements from the plurality of data elements of the plurality of tables within the relational database. [FIG. 45] shows the user interface with a plurality of portions dedicated to different functions related to target data. [0034] providing a user interface for allowing user access to and modification of the extractor files, the normalizing files, and the policy statements, for customization and configuration. [0240] The ConfigureWorkbench routine includes steps for configuring access to the monitoring database (to obtain related entities), setting usernames and permissions, and configuring any reporting functions of the system. This routine enables a user to input or correct information. [0367] FIG. 44 illustrates an exemplary case management user interface screen 4400 with the Status subsidiary tab selected. In this particular display, a user may activate a selector box to select a... [FIG. 35-44] show all the different parts and forms of the user interface, which show and elaborate on the different classes and the plurality of target elements)
Regarding claim 27, Kennis, Gladwin and Liu teach The computer-program product of claim 26, wherein the one or more processor devices are further to: modify the mapping file to add the user-created filter to a plurality of pre-built filters of the mapping file based on an inclusion criterion. (Kennis [FIG. 1] shows the flow which allows the user to modify the file and add extra pre-built filters. [0296] file is expressed in XML, and includes a number of data items, identified with XML tags, that are required to create tables in the mapping database and set up appropriate fields and names that may be utilized in creating and executing policy statements or frames. For example, the exemplary ontology file includes information identifying an entity that is a monitoring entity in the monitoring database, shown as <entity></entity>. This entity includes certain data items such as a name <name></name>, a title <title></title>, a description <description></description>, an identifier <typeid></typeid>, and a linkage to one or more related entities <linkage></linkage>. The ontology file also includes one or more field identifiers <field></field> that specify data fields of records in the monitoring database; each of these fields has a corresponding name, description, and type, as identified with corresponding tags. [0273] updated data (updated fields in existing records or rows), and the data for the selected fields is inserted into the staging database. Status data (such as a "modified" flag and/or timestamp) is further appended to flag the data as new or changed, for the mapper or other processes... [0298] files can be modified and overridden element by element by more customized knowledge files)
Regarding claim 28, Kennis, Gladwin and Liu teach The computer-program product of claim 27, wherein the inclusion criterion comprises a number of prior occurrences in which information descriptive of the user-created filter was received via the user interface. (Kennis [0248] target entity tables in the staging database or in the monitoring database are not necessarily exact copies of the source tables. Table names, fields, data types, and even values may have to be transformed according to the system ontology. In accordance with aspects of the invention, each update in a source entity that is monitored should result in creation of a new "revision" created in the target or monitoring database. This advantageously allows the TIM system to retain and reason about past updates. [0261] FIG. 11 provides an example of how changed entities are detected and identified, by receipt of changed data from an exemplary ERP database 121. Table 1110 illustrates an exemplary vendor table that shows a plurality of support entities, i.e. vendors in an accounts payable system, at an initial data load or extraction by a master extractor, at time t1. Assume that an actor makes a number of changes to information related to a particular vendor, say Vendor 1, over a period of time, e.g. an address change, a bank account change, and an address change back to the original address, etc. Table 1120 illustrates a plurality of prior versions of the entity Vendor 1 at various points in time t1, t2, t3 . . . up until a current version at time t_last_update. Assume a series of further updates... [0300] subsequent transactions can be used to build on the conclusions related to previous transactions. This new exception is added to the exception database, in a format (data model) described elsewhere herein. Any required potential impact and probabilities are computed.
A natural language description of the exception is generated based on a template in the frame, all basis revisions are determined and saved, and a wariness value for each entity underlying the exception is updated based on the exception. Basis revisions are the entities (actually a specific revision of the entities) upon which an exception is based. This includes the single transactional entity (i.e. the new revision of an entity that has changed).
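For illustration only (not part of the record): the revision behavior Kennis [0248] and [0261] describe, in which each update to a monitored source entity appends a new revision rather than overwriting the prior one, can be sketched as below. The record layout and values are invented; only the append-rather-than-overwrite behavior comes from the quoted paragraphs.

```python
# Hypothetical monitoring-database revision log: each update appends a revision,
# so past states (e.g., an address changed and later changed back) are retained
# and can be reasoned about, per Kennis [0248]/[0261].
revisions = []

def record_update(vendor_id, address, ts):
    """Append a new revision for the entity; never overwrite prior rows."""
    revisions.append({"vendor": vendor_id, "address": address, "t": ts})

record_update("Vendor 1", "1 Old Rd", "t1")
record_update("Vendor 1", "9 New Ave", "t2")
record_update("Vendor 1", "1 Old Rd", "t3")  # change back still adds a revision

history = [r["address"] for r in revisions if r["vendor"] == "Vendor 1"]
print(history)
```

The point of the sketch is that the third update, although it restores the original value, still produces a distinct revision, which is what allows detection of suspicious change-and-revert patterns.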
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Kennis in view of US 20240265014 A1; Agrawal; Kireet et al. (hereinafter Agrawal).
Regarding claim 11, Kennis teaches The computer-program product of claim 1; however, the prior art lacks explicitly and orderly teaching wherein, to receive the input indicative of selection of the plurality of target data elements, the one or more processor devices are to: receive a natural language input from a user via a user interface, wherein the natural language input is descriptive of a query for the relational database; and perform a similarity search between the natural language input and a plurality of semantic descriptors respectively representing the plurality of data elements to identify a subset of data elements from the plurality of data elements. However, Agrawal teaches receive a natural language input from a user via a user interface, wherein the natural language input is descriptive of a query for the relational database; and perform a similarity search between the natural language input and a plurality of semantic descriptors respectively representing the plurality of data elements to identify a subset of data elements from the plurality of data elements. (Agrawal [0170] The natural language processing unit 3710 may receive input data including a natural language string, such as a natural language string generated in accordance with user input. The natural language string may represent a data request expressed in an unrestricted natural language form, for which data identified or obtained prior to, or in conjunction with, receiving the natural language string by the natural language processing unit 3710 indicating the semantic structure. [0171] The natural language processing unit 3710 may analyze, process, or evaluate the natural language string, or a portion thereof, to generate or determine the semantic structure, correlation to the low-latency data access and analysis system 3000, or both, for at least a portion of the natural language string.
For example, the natural language processing unit 3710 may identify one or more words or terms in the natural language string and may correlate the identified words... [0174] As used herein, the term “utility” refers to a computer accessible data value, or values, representative of the usefulness of an aspect of the low-latency data access and analysis system, such as a data portion, an object, or a component of the low-latency data access and analysis system with respect to improving the efficiency, accuracy, or both, of the low-latency data access and analysis system. [0231] The casting similarity threshold, string similarity threshold, the join score threshold, and the weights used to obtain of join scores can be empirically obtained. In an example, an evaluation dataset that includes schemas (e.g., table names, column names, and data types) and valid joins (e.g., joins that are parsed or derived based on curated data query statements) from existing data sources can be obtained using a Natural Language to Structured Query Language (SQL) translation task (text2sql). Text2Sql is a standard machine learning task that can be used to convert natural language statements to SQL queries. The evaluation dataset is used to test the techniques described herein and refine the thresholds and weights used.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Agrawal in order to create a more efficient and accurate system for joining tables. (Agrawal [0035] The low-latency memory 1300 may be used to store data that is analyzed or processed using the systems or methods described herein.
For example, storage of some or all data in low-latency memory 1300 instead of static memory 1200 may improve the execution speed of the systems and methods described herein by permitting access to data more quickly by an order of magnitude or greater (e.g., nanoseconds instead of microseconds). [0049] The internal data, internal data structures, or both may accurately represent and may differ from the enterprise data, the data structures of the enterprise data, or both. In some implementations, enterprise data from multiple external data sources may be imported into the internal database analysis portion 2200. For simplicity and clarity, data stored or used in the internal database analysis portion 2200 may be referred to herein as internal data. For example, the internal data, or a portion thereof, may represent, and may be distinct from, enterprise data imported into or accessed by the internal database analysis portion 2200. [0107] The distributed in-memory ontology unit 3500 may implement optimistic locking to reduce lock contention times. The use of optimistic locking permits improved throughput of data through the distributed in-memory ontology unit 3500.)
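For illustration only (not part of the record): the claim-11 limitation of a similarity search between a natural language input and semantic descriptors of data elements can be sketched as follows. The descriptor table, threshold, and use of plain string similarity are assumptions for illustration; a system such as Agrawal's would typically use richer scoring than `difflib`.

```python
from difflib import SequenceMatcher

# Hypothetical semantic descriptors for data elements of a relational database.
DESCRIPTORS = {
    "vendor_name": "name of the vendor in accounts payable",
    "invoice_total": "total amount billed on an invoice",
    "payment_date": "date a payment was issued",
}

def similar(a, b):
    """Crude string similarity in [0, 1]; stands in for semantic scoring."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def select_elements(nl_query, threshold=0.2):
    """Return data elements whose descriptor resembles the query, best first."""
    scores = {elem: similar(nl_query, desc) for elem, desc in DESCRIPTORS.items()}
    return sorted((e for e, s in scores.items() if s >= threshold),
                  key=lambda e: -scores[e])

matches = select_elements("total amount billed on each invoice")
print(matches)
```

Under these assumptions, the element whose descriptor most closely resembles the query ranks first, giving the "subset of data elements" the claim recites.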
Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kennis in view of Agrawal and US 20210249002 A1; AHMADIDANESHASHTIANI; MohammadHosein et al. (hereinafter Ahmad).
Regarding claim 12, Kennis and Agrawal teach The computer-program product of claim 11, wherein the one or more processor devices are further to receive the natural language input descriptive of the query for the relational database; however, the prior arts lack explicitly and orderly teaching identify an element of sensitive information within the query for the relational database; and replace the element of sensitive information with an element of placeholder information. However, Ahmad teaches identify an element of sensitive information within the query for the relational database; and replace the element of sensitive information with an element of placeholder information. (Ahmad [0025] The processor identifies, from the input strings, sensitive query tokens that need to be redacted or sanitized (e.g., payee names, account numbers). The input strings are first transformed into an obfuscated query string by replacing the sensitive query tokens with placeholder query tokens, which is then provided to natural language processing engines for intent detection to receive a response intent data object having the placeholder query tokens. [0081] The user interface 102 may be provided behind an optional firewall 104. The broker processor 108 identifies, from the input strings, sensitive query tokens that need to be redacted or sanitized (e.g., payee names, account numbers). The input strings are first transformed into an obfuscated query string by replacing the sensitive query tokens with placeholder query tokens. FIG. 7 shows an example flow that can be handled by broker processor 108, for example. [FIG. 7] shows the corresponding visual.) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to take all prior methods and make the addition of Ahmad in order to improve the natural language processing abilities of the system and create an improved secure system for data transfer. (Ahmad [0009] As described in various embodiments herein, improved architectures for natural language processing in relation to automated conversational agents are provided. Corresponding computer systems, methods, and computer program products stored in the form of non-transitory computer readable media having machine interpretable instructions thereon are contemplated. [0012] The proposed flexible implementation provides an improved ease of scalability and flexibility as the orchestrator is de-coupled from being reliant on specific natural language processing/natural language understanding implementations, and different or new natural language processing/natural language understanding engines can be engaged that are estimated to best fit a particular context or utterance, and the user experience remains consistent as the user is not aware of the routing changes in the backend during the front-end conversation flow. [0013] The orchestration system provides improved flexibility in selecting a specific natural language processing agent (including natural language understanding agents, which are a subset of natural language processing agents) that is well suited for a particular task given contextual cues either in the input string itself, and/or in external data, such as information stored in a user profile related to a user, information stored in a user profile related to a group of users or a demographic of users similar to the user.)
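For illustration only (not part of the record): the obfuscation step Ahmad [0025] describes, in which sensitive query tokens are replaced with placeholder tokens before the string reaches a natural language processing engine, can be sketched as below. The token pattern (long digit runs standing in for account numbers) and the placeholder format are assumptions for illustration.

```python
import re

def obfuscate(query):
    """Replace sensitive tokens with placeholders; keep a mapping to restore them."""
    mapping = {}
    def repl(match):
        token = f"__TOKEN_{len(mapping)}__"  # hypothetical placeholder format
        mapping[token] = match.group(0)
        return token
    # Digit runs of 6+ characters stand in for account numbers here (assumption).
    obfuscated = re.sub(r"\b\d{6,}\b", repl, query)
    return obfuscated, mapping

def restore(text, mapping):
    """Re-insert the original sensitive tokens into an engine's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

obf, mapping = obfuscate("transfer 500 from account 12345678 to 98765432")
print(obf)  # sensitive account numbers are now placeholder tokens
```

The mapping never leaves the trusted side, so the downstream intent-detection engine sees only placeholders, matching the flow quoted from Ahmad [0025] and [0081].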
Regarding claim 13, Kennis, Agrawal and Ahmad teach The computer-program product of claim 11, wherein, to perform the similarity search, the one or more processor devices are to: generate an intermediate representation of the query for the relational database; and perform the similarity search between the intermediate representation of the query for the relational database and a plurality of intermediate representations that respectively represent the plurality of semantic descriptors. (Agrawal [0075] The respective in-memory database instances may receive the corresponding query execution instructions from the query coordinator. The respective in-memory database instances may execute the corresponding query execution instructions to obtain, process, or both, data (intermediate results data) from the low-latency data. The respective in-memory database instances may output, or otherwise make available, the intermediate results data, such as to the query coordinator. [0076] The query coordinator may execute a respective portion of query execution instructions (allocated to the query coordinator) to obtain, process, or both, data (intermediate results data) from the low-latency data. The query coordinator may receive, or otherwise access, the intermediate results data from the respective in-memory database instances. The query coordinator may combine, aggregate, or otherwise process, the intermediate results data to obtain results data. [0077] In some embodiments, obtaining the intermediate results data by one or more of the in-memory database instances may include outputting the intermediate results data to, or obtaining intermediate results data from, one or more other in-memory database instances, in addition to, or instead of, obtaining the intermediate results data from the low-latency data. [FIG. 4] shows the corresponding flow)
Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kennis in view of Agrawal and US 20240362278 A1; PerezLeon; Francisco et al. (hereinafter Perez).
Regarding claim 14, Kennis and Agrawal teach The computer-program product of claim 11, wherein the unit of software instructions comprises a Structured Query Language (SQL) query, and wherein, to generate the unit of software instructions, the one or more processor devices are to … and modify the SQL query such that the SQL query implements the set of predefined join operations when the SQL query is executed. (Agrawal [0025] The identified join candidates can be stored in the schema of the database and can be used by users to formulate and execute queries. [0231] The casting similarity threshold, string similarity threshold, the join score threshold, and the weights used to obtain of join scores can be empirically obtained. In an example, an evaluation dataset that includes schemas (e.g., table names, column names, and data types) and valid joins (e.g., joins that are parsed or derived based on curated data query statements) from existing data sources can be obtained using a Natural Language to Structured Query Language (SQL) translation task (text2sql). Text2Sql is a standard machine learning task that can be used to convert natural language statements to SQL queries. The evaluation dataset is used to test the techniques described herein and refine the thresholds and weights used. The weights for the weighted combination/sum of the heuristics were found using a randomized grid search of the weights that maximized the number of verified matches from the test2sql evaluation dataset. [0237] the selected join candidate in the worksheet object may include converting the selected join candidate to one or more join paths and/or worksheet-column definition according to a syntax or semantics of the worksheet object. A data query according to the worksheet object can be generated such that the data query includes join criteria according to the selected join candidate(s).
Tabular data can be obtained in response to performing or executing the data query. [0238]-[0241] further elaborate on the matter) the combination lacks explicitly teaching process the natural language input and a contextual input with a machine-learned Large Language Model (LLM) to generate the SQL query, the SQL query comprising a subset of semantic descriptors from the plurality of semantic descriptors, wherein each of the subset of semantic descriptors describes a corresponding data element of the subset of data elements; However, Perez teaches process the natural language input and a contextual input with a machine-learned Large Language Model (LLM) to generate the SQL query, the SQL query comprising a subset of semantic descriptors from the plurality of semantic descriptors, wherein each of the subset of semantic descriptors describes a corresponding data element of the subset of data elements; (Perez [0027] A system is disclosed for online banking services powered by artificial intelligence technology including chatbots and generative pre-trained transformer families of Large Language Models (LLMs). The system is capable of taking natural language and producing financial reporting materials in the form of text, tables, and/or visualizations using generative AI technology. Use of the system may include 3 steps: [0028] 1. A user begins a session. The system feeds a large language model (LLM) context about the user, such as bank names, account names, database schema, etc., as well as establishing guardrails to limit the scope of the conversations the user may have with the system. [0029] 2. The user interacts with the system by asking it questions in natural language. The system then interacts with the LLM behind the scenes and the LLM generates computer code, such as structured query language (SQL) queries to elicit the answers to the user's question from an indexed interactive financial platform. [0030] 3.
The indexed interactive financial platform executes code provided by the LLM and decides which form the output may be displayed in, such as simple text, a table of data, or a visualization. The results are provided to the user. [0112] The user interface logic 308 may be operated to configure the indexing module 330 with multiple tags to configure a multi-level control structure.)
When tags are combined any duplicate entries are identified to avoid collision (double counting). [0168] The decoupling of transaction indexing from ingest, of transaction indexing from formation of the control structure 1010 imposed on the indexed transactions 1006, and of both indexing and formation of the control structure 1010 from runtime filtering, may substantially improve both performance of the search engine 1004 and the flexibility and richness of the results 1008 generated in response to the queries 1002.)
Regarding claim 15, Kennis, Agrawal and Perez teach The computer-program product of claim 14, wherein, to modify the SQL query such that the SQL query implements the set of predefined join operations when the SQL query is executed, the one or more processor devices are to: replace each of the subset of semantic descriptors within the SQL query with a corresponding data element label of a plurality of data element labels, wherein the plurality of data element labels respectively identify the plurality of data elements within the relational database; (Perez [0113] Index settings may be implemented as tags that transform the identified transaction data. The indexing module 330 receives normalized transaction data from the ingest module 302 and transforms the normalized data through the application of the tags that label the transaction data associated with the query. This process may be performed asynchronously from the operation of the outflow module 304.[0114] The tags are utilized to build a query structure for refining and/or enhancing the set of returned transaction data in response to a query. The tags implement a nodal structure for transaction data by combining tagged data into data sets. When tags are combined any duplicate entries are identified to avoid collision (double counting). A combination of tags may be applied to form sets of transaction data meeting complex criteria.[0156] The tagging logic 802 allows the configuration of tags comprising settings. The tag descriptor setting 804 is a label to concisely...) and modify the SQL query to implement the set of predefined join operations. (Agrawal [0025] The identified join candidates can be stored in the schema of the database and can be used by users to formulate and execute queries [0231] The casting similarity threshold, string similarity threshold, the join score threshold, and the weights used to obtain of join scores can be empirically obtained. 
In an example, an evaluation dataset that includes schemas (e.g., table names, column names, and data types) and valid joins (e.g., joins that are parsed or derived based on curated data query statements) from existing data sources can be obtained using a Natural Language to Structured Query Language (SQL) translation task (text2sql). Text2Sql is a standard machine learning task that can be used to convert natural language statements to SQL queries. The evaluation dataset is used to test the techniques described herein and refine the thresholds and weights used. The weights for the weighted combination/sum of the heuristics were found using a randomized grid search of the weights that maximized the number of verified matches from the test2sql evaluation dataset. [0237] the selected join candidate in the worksheet object may include converting the selected join candidate to one or more join paths and/or worksheet-column definition according to a syntax or semantics of the worksheet object. A data query according to the worksheet object can be generated such that the data query includes join criteria according to the selected join candidate(s). Tabular data can be obtained in response to performing or executing the data query.)
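For illustration only (not part of the record): the claim-15 substitution step, in which semantic descriptors appearing in a generated SQL query are replaced with the data element labels actually used in the relational database, can be sketched as below. The descriptor-to-label table, the quoting convention, and the sample query are all invented for illustration.

```python
# Hypothetical mapping from semantic descriptors (as an LLM might emit them)
# to concrete data element labels in the relational database.
DESCRIPTOR_TO_LABEL = {
    "vendor name": "VENDOR_MASTER.VENDOR_NAME",
    "invoice total": "INVOICE.TOTAL_AMT",
}

def bind_labels(sql):
    """Replace each quoted descriptor with its corresponding column label."""
    for descriptor, label in DESCRIPTOR_TO_LABEL.items():
        sql = sql.replace(f'"{descriptor}"', label)
    return sql

generated = 'SELECT "vendor name", "invoice total" FROM data'
bound = bind_labels(generated)
print(bound)  # SELECT VENDOR_MASTER.VENDOR_NAME, INVOICE.TOTAL_AMT FROM data
```

A further rewrite pass could then attach the predefined join operations (e.g., join criteria of the kind Agrawal [0237] describes) to the bound query before execution.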
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARYAN D TOUGHIRY whose telephone number is (571)272-5212. The examiner can normally be reached Monday - Friday, 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner can be reached at (571) 270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ARYAN D TOUGHIRY/Examiner, Art Unit 2165
/ALEKSANDR KERZHNER/Supervisory Patent Examiner, Art Unit 2165