Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending in this application.
Claim Objections
Claim 20 is objected to because of the following informalities: "aa first attribute" should read --a first attribute--.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-13 and 16-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-12 and 15-20 of U.S. Patent No. 12,210,501. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims recite substantially similar claim limitations, as depicted in the table below.
Instant Application
U.S. Patent No. 12,210,501
1. A system for resolving corrupted datasets transferred to data repositories having differing physical data models using programming language-agnostic data modeling platforms, the system comprising:
one or more processors; and
a non-transitory, computer-readable medium comprising instructions recorded thereon that, when executed by the one or more processors, cause operations comprising:
receiving a data transfer request to perform a transfer of a dataset from a first data repository to a second data repository, wherein the first data repository is associated with a first physical data model of a first entity, and wherein the second data repository is associated with a second physical data model that is different from the first physical data model;
in response to receiving the data transfer request, identifying, based on a dataset description of the dataset, a first logical data model to be used in connection with performing the transfer of the dataset from the first data repository to the second data repository;
determining a first supplemental data structure for the first logical data model, wherein the first supplemental data structure is expressed in a standardized language;
performing the transfer of the dataset from the first data repository to the second data repository using the first supplemental data structure;
in connection with performing the transfer of the dataset, receiving a data transfer error message from a second entity that is associated with the second data repository; and
in response to receiving the data transfer error message, transmitting executable code to the second entity that is associated with the second data repository, wherein the executable code corresponds to a data analytic operation to be performed on the dataset to resolve a dataset error.
2. A method for resolving corrupted datasets using programming language-agnostic data modeling platforms, the method comprising:
receiving a request to perform a first data operation on a first dataset from a first data source of a first entity;
in response to receiving the request, identifying, based on a first dataset description of the first dataset, a first logical data model to be used in connection with performing the first data operation on the first dataset;
determining a first supplemental data structure for the first logical data model, wherein the first supplemental data structure is expressed in a standardized language;
performing the first data operation on the first dataset using the first supplemental data structure;
receiving a first data operation error message that indicates an error that occurred during performance of the first data operation on the first dataset; and
transmitting, to a second entity, executable code corresponding to a second data operation to be performed on the first dataset to resolve the error.
8. The method of claim 2, wherein using the first supplemental data structure to perform the first data operation comprises generating a first mapping, based on the first logical data model, for performing the first data operation on the first dataset, and wherein the first mapping maps a first attribute of the first supplemental data structure to a physical data model of a second entity.
3. The method of claim 2, wherein identifying the first logical data model further comprises: providing the first dataset description of the first dataset as input to a first artificial intelligence model trained to identify logical data models to perform data operations on datasets; receiving, from the first artificial intelligence model, a ranked set of logical data models, wherein each ranked logical data model of the ranked set of logical data models are ranked based on a confidence value indicating a likelihood that the first dataset uses a respective ranked logical data model of the ranked set of logical data models; and identifying the first logical data model based on a selection of a respective logical data model that satisfies a threshold confidence value from the ranked set of logical data models.
4. The method of claim 3, wherein the first artificial intelligence model comprises a Large Language Model (LLM), and wherein the LLM is trained, the training of the LLM comprising: obtaining a set of training datasets and a set of training logical data model descriptions, wherein each training dataset of the set of training datasets corresponds to a training logical data model description of the set of training logical data model descriptions, and wherein each training logical data model description of the set of training logical data model descriptions is associated with a metadata schema; providing the set of training datasets and the set of training logical data model descriptions to the LLM during a training routine, the LLM being communicatively coupled to a retrieval component configured to retrieve (i) similar logical data models historically used in connection with a dataset and (ii) metadata schemas associated with the similar logical data models to be provided to the LLM; receiving, from the LLM during the training routine, a set of candidate logical data models and corresponding metadata schemas based on (i) the similar logical data models historically used in connection with a dataset and (ii) the metadata schemas associated with the similar logical data models; and in response to receiving the set of candidate logical data models and the corresponding metadata schemas, providing a message, during the training routine, to the LLM comprising an accuracy value corresponding to each candidate logical data model and corresponding metadata schemas.
5. The method of claim 2, further comprising: extracting, from the request to perform the first data operation on the first dataset from the first data source of the first entity, an identifier associated with the first dataset; obtaining, based on the identifier, the first dataset from a data repository storing datasets; and determining the first dataset description of the first dataset based on metadata associated with the first dataset.
6. The method of claim 2, wherein determining the first supplemental data structure for the first logical data model further comprises: providing an identifier associated with the first logical data model as input to a second artificial intelligence model configured to determine supplemental data structures for logical data models; and receiving, from the second artificial intelligence model, the first supplemental data structure for the first logical data model, wherein the first supplemental data structure comprises a first attribute, and wherein the first attribute comprises a first transformer lineage of the first logical data model.
7. The method of claim 6, wherein the second artificial intelligence model comprises a transformer model, and wherein the transformer model is trained, the training of the transformer model comprising: obtaining (i) a set of training logical data model descriptions and (ii) a training set of supplemental data structures expressed in a standardized language each comprising a second attribute, wherein each training logical data model description of the set of training logical data model descriptions is associated with a metadata schema, and wherein the second attribute of each supplemental data structure of the training set of supplemental data structures comprises a second transformer lineage of a training logical data model corresponding to a respective training logical data model description of the set of training logical data model descriptions; providing the set of training logical data model descriptions and the training set of supplemental data structures as input to the transformer model during a self-supervised training routine; and generating, during the self-supervised training routine, a set of candidate supplemental data structures expressed in a standardized language each comprising a third attribute, wherein the third attribute of each candidate supplemental data structure of the set of candidate supplemental data structures comprises a third transformer lineage of a respective training logical data model corresponding to a respective training logical data model description of the set of training logical data model descriptions.
9. The method of claim 8, wherein generating the first mapping further comprises: receiving a message associated with the physical data model of the second entity; providing (i) the first logical data model and (ii) the message associated with the physical data model of the second entity to a third artificial intelligence model trained to generate mappings between logical data models and physical data models, wherein the third artificial intelligence model is communicatively coupled to a retrieval component configured to retrieve supplemental data structures of logical data models; and receiving, from the third artificial intelligence model, the first mapping for performing the first data operation on the first dataset, wherein the first mapping maps the first attribute of the first supplemental data structure to the physical data model of the second entity.
10. The method of claim 9, wherein the third artificial intelligence model comprises a Large Language Model (LLM), and wherein the LLM is trained, the training of the LLM comprising: obtaining (i) a set of training logical data model descriptions, (ii) a set of training physical data models, and (iii) a set of training mappings, wherein the set of training mappings are based on training supplemental data structures corresponding to a respective training logical data model description of the set of training logical data model descriptions, and wherein the set of training mappings maps a training attribute of a training supplemental data structure to a training physical data model; providing, (i) the set of training logical data model descriptions, (ii) the set of training physical data models, and (iii) the set of training mappings, to the LLM during a training routine, wherein the LLM is communicatively coupled to the retrieval component, and wherein during the training routine the LLM retrieves, from the retrieval component, a set of candidate supplemental data structures respective to a set of training logical data models corresponding to the set of training logical data model descriptions; receiving, from the LLM, during the training routine, a set of candidate mappings, wherein each candidate mapping of the set of candidate mappings maps a second training attribute of a training supplemental data structure to respective training physical data model of the set of training physical data models; and in response to receiving the set of candidate mappings, providing a second message, during the training routine, to the LLM comprising an accuracy value corresponding to each candidate mapping of the set of candidate mappings.
11. The method of claim 2, further comprising: providing the first data operation error message to a fourth artificial intelligence model configured to generate code portions that are associated with data operation errors; in response to providing the first data operation error message to the fourth artificial intelligence model, receiving the executable code corresponding to the second data operation to be performed on the first dataset; and transmitting, based on a second entity identifier, the executable code to an address associated with the second entity.
12. The method of claim 11, wherein the fourth artificial intelligence model comprises a Large Language Model (LLM), and wherein the LLM is trained, the training comprising: obtaining a set of training error messages associated with data operations and a set of historical data operation logs associated with the set of training error messages, wherein each training error message indicates an occurred error associated with a respective data operation, and wherein each historical data operation log of the set of historical data operation logs indicate (i) a computing language in which a respective data operation was written and (ii) a third data operation that was performed to resolve a respective occurred error; providing the set of training error messages and the set of historical data operation logs associated with the set of training error messages to the LLM during a training routine, the LLM being communicatively coupled to a retrieval component configured to retrieve executable code portions that correspond to historically performed data operations; receiving, from the LLM during the training routine, a set of candidate executable code portions, wherein each candidate executable code portion of the set of candidate executable code portions are associated with an error message characteristic of a respective training error message of the set of training error messages; and in response to receiving the set of candidate executable code portions, providing a message, during the training routine, to the LLM comprising an accuracy value corresponding to each candidate executable code portion of the set of candidate executable code portions.
13. The method of claim 2, wherein the first dataset is associated with a physical data model that is different than that of the physical data model of the second entity.
16. The method of claim 2, wherein the first data operation is a data transfer operation, the data transfer operation being a data transfer of the first dataset from a first data repository associated with the first entity to a second data repository.
17. A non-transitory, computer-readable medium comprising instructions recorded thereon that, when executed by one or more processors, cause operations comprising:
receiving a request to perform a first data operation on a first dataset from a first data source of a first entity, wherein the first data operation (i) uses a logical data model to perform the first data operation on the first dataset and (ii) involves a physical data model of a second entity;
in response to receiving the request, identifying, based on a first dataset description of the first dataset, a first logical data model to be used in connection with performing the first data operation on the first dataset;
determining a first supplemental data structure for the first logical data model, wherein the first supplemental data structure is expressed in a standardized language;
in response to performing the first data operation on the first dataset using the first supplemental data structure, receiving a first data operation message associated with the second entity; and
transmitting, to the second entity, executable code corresponding to a second data operation to be performed on the first dataset based on the first data operation message.
20. The non-transitory, computer-readable medium of claim 17, wherein using the first supplemental data structure to perform the first data operation comprises generating a first mapping, based on the first logical data model, for performing the first data operation on the first dataset, wherein the first mapping maps aa first attribute of the first supplemental data structure to the physical data model of the second entity.
18. The non-transitory, computer-readable medium of claim 17, wherein identifying the first logical data model further comprises: providing the first dataset description of the first dataset as input to a first artificial intelligence model trained to identify logical data models to perform data operations on datasets; receiving, from the first artificial intelligence model, a ranked set of logical data models, wherein each ranked logical data model of the ranked set of logical data models are ranked based on a confidence value indicating a likelihood that the first dataset uses a respective ranked logical data model of the ranked set of logical data models; and identifying the first logical data model based on a selection of a respective logical data model that satisfies a threshold confidence value from the ranked set of logical data models.
19. The non-transitory, computer-readable medium of claim 17, wherein determining the first supplemental data structure for the first logical data model further comprises: providing an identifier associated with the first logical data model as input to a second artificial intelligence model configured to determine supplemental data structures for logical data models; and receiving, from the second artificial intelligence model, the first supplemental data structure for the first logical data model, wherein the first supplemental data structure comprises a first transformer lineage of the first logical data model.
1. A system for resolving corrupted datasets transferred to data repositories having differing physical data models using programming language-agnostic data modeling platforms, the system comprising: one or more processors; and
a non-transitory, computer-readable medium comprising instructions recorded thereon that, when executed by the one or more processors, cause operations comprising:
receiving a data transfer request to perform a transfer of a dataset from a first local data repository to a second local data repository, wherein the first local data repository is associated with a first physical data model of a first entity, and wherein the second local data repository is associated with a second physical data model of a second entity that is different from the first physical data model;
in response to receiving the data transfer request, identifying, based on a dataset description of the dataset, a first logical data model to be used in connection with performing the transfer of the dataset from the first local data repository to the second local data repository;
determining a first supplemental data structure for the identified first logical data model, wherein the first supplemental data structure is expressed in a standardized language and comprises a first attribute; generating a first mapping, based on the identified first logical data model and the second physical data model of the second local data repository, for performing the transfer of the dataset from the first local data repository to the second local data repository, wherein the first mapping maps the first attribute of the first supplemental data structure to the second physical data model of the second local data repository;
performing the transfer of the dataset from the first local repository to the second local repository based on the first mapping;
in connection with performing the transfer of the dataset, receiving a data transfer error message from the second entity that is associated with the second local data repository indicating (i) an identified transferred dataset error that occurred during the transfer of the dataset and (ii) the data transfer request; and
in response to receiving the data transfer error message, transmitting executable code to the second entity that is associated with the second local data repository, wherein the executable code corresponds to a data analytic operation to be performed on the transferred dataset to resolve the identified transferred dataset error.
2. A method for resolving corrupted datasets using programming language-agnostic data modeling platforms, the method comprising: receiving a request to perform a first data operation on a first dataset from a first data source of a first entity, wherein the first data operation (i) uses a logical data model to perform the first data operation on the first dataset and (ii) involves a physical data model of a second entity;
in response to receiving the request, identifying, based on a first dataset description of the first dataset, a first logical data model to be used in connection with performing the first data operation on the first dataset;
determining a first supplemental data structure for the identified first logical data model, wherein the first supplemental data structure is expressed in a standardized language and comprises a first attribute; generating a first mapping, based on the identified first logical data model, for performing the first data operation on the first dataset, wherein the first mapping maps the first attribute of the first supplemental data structure to the physical data model of the second entity;
in response to performing the first data operation on the first dataset that is based on the first mapping, receiving a first data operation error message associated with the second entity that indicates an identified error that occurred during a performance of the first data operation on the first dataset, wherein the first data operation is performed; and transmitting, to the second entity, executable code corresponding to a second data operation to be performed on the first dataset to resolve the identified error.
3. The method of claim 2, wherein identifying the first logical data model further comprises: providing the first dataset description of the first dataset as input to a first artificial intelligence model trained to identify logical data models to perform data operations on datasets; receiving, from the first artificial intelligence model, a ranked set of logical data models, wherein each ranked logical data model of the ranked set of logical data models are ranked based on a confidence value indicating a likelihood that the first dataset uses a respective ranked logical data model of the ranked set of logical data models; and identifying the first logical data model based on a selection of a respective logical data model that satisfies a threshold confidence value from the set of ranked logical data models.
4. The method of claim 3, wherein the first artificial intelligence model comprises an Large Language Model (LLM), and wherein the LLM is trained, the training of the LLM comprising: obtaining a set of training datasets and a set of training logical data model descriptions, wherein each training dataset of the set of training datasets corresponds to a training logical data model description of the set of training logical data model descriptions, and wherein each training logical data model description of the set of training logical data model descriptions is associated with a metadata schema; providing the set of training datasets and the set of training logical data model descriptions to the LLM during a training routine, the LLM being communicatively coupled to a retrieval component configured to retrieve (i) similar logical data models historically used in connection with a dataset and (ii) metadata schemas associated with the respective similar logical data models to be provided to the LLM; receiving, from the LLM during the training routine, a set of candidate logical data models and corresponding metadata schemas based on (i) the similar logical data models historically used in connection with a dataset and (ii) the metadata schemas associated with the respective similar logical data models; and in response to receiving the set of candidate logical data models and the corresponding metadata schemas, providing a message, during the training routine, to the LLM comprising an accuracy value corresponding to each candidate logical data model and corresponding metadata schemas.
5. The method of claim 2, further comprising: extracting, from the request to perform the first data operation on the first dataset from the first data source of the first entity, an identifier associated with the first dataset; obtaining, based on the extracted identifier, the first dataset from a data repository storing datasets; and determining the first dataset description of the first dataset based on metadata associated with the first dataset.
6. The method of claim 2, wherein determining the first supplemental data structure for the identified first logical data model further comprises: providing an identifier associated with the first logical data model as input to a second artificial intelligence model configured to determine supplemental data structures for logical data models; and receiving, from the second artificial intelligence model, the first supplemental data structure for the identified first logical data model, wherein the first supplemental data structure is expressed in the standardized language and comprises the first attribute, and wherein the first attribute comprises a first transformer lineage of the first logical data model.
7. The method of claim 6, wherein the second artificial intelligence model comprises a transformer model, and wherein the transformer model is trained, the training of the transformer model comprising: obtaining (i) a set of training logical data model descriptions and (ii) a training set of supplemental data structures expressed in a standardized language each comprising a second attribute, wherein each training logical data model description of the set of training logical data model descriptions is associated with a metadata schema, and wherein the second attribute of each supplemental data structure of the set of supplemental data structures comprises a second transformer lineage of a training logical data model corresponding to a respective training logical data model description of the set of logical data model descriptions; providing the set of training logical data model descriptions and the training set of supplemental data structures as input to the transformer model during a self-supervised training routine; and generating, during the self-supervised training routine, a set of candidate supplemental data structures expressed in a standardized language each comprising a third attribute, wherein the third attribute of each candidate supplemental data structure of the set of candidate supplemental data structures comprises a third transformer lineage of a respective training logical data model of the set of training logical data models corresponding to a respective training logical data model description of the set of logical data model descriptions.
8. The method of claim 2, wherein generating the first mapping further comprises: receiving a message associated with the physical data model of the second entity; providing (i) the identified first logical data model and (ii) the message associated with the physical data model of the second entity to a third artificial intelligence model trained to generate mappings between logical data models and physical data models, wherein the third artificial intelligence model is communicatively coupled to a retrieval component configured to retrieve supplemental data structures of logical data models; and receiving, from the third artificial intelligence model, the first mapping for performing the first data operation on the first dataset, wherein the first mapping maps the first attribute of the first supplemental data structure to the physical data model of the second entity.
9. The method of claim 8, wherein the third artificial intelligence model comprises a Large Language Model (LLM), and wherein the LLM is trained, the training of the LLM comprising: obtaining (i) a set of training logical data model descriptions, (ii) a set training physical data models, and (iii) a set of training mappings, wherein the set of training mappings are based on training supplemental data structures corresponding to a respective training logical data model description of the set of training logical data model descriptions, and wherein the set of training mappings maps a training attribute of a training supplemental data structure to a training physical data model; providing, (i) the set of training logical data model descriptions, (ii) the set of training physical data models, and (iii) the set of training mappings, to the LLM during a training routine, wherein the LLM is communicatively coupled to the retrieval component, and wherein during the training routine the LLM retrieves, from the retrieval component, a set of candidate supplemental data structures respective to a set of training logical data models corresponding to the set of training logical data model descriptions; receiving, from the LLM, during the training routine, a set of candidate mappings, wherein each candidate mapping of the set of candidate mappings maps a second training attribute of a training supplemental data structure to respective training physical data model of the set of training physical data models; and in response to receiving the set of candidate mappings, providing a second message, during the training routine, to the LLM comprising an accuracy value corresponding to each candidate mapping of the set of candidate mappings.
10. The method of claim 2, further comprising: providing the first data operation error message to a fourth artificial intelligence model configured to generate code portions that are associated with data operation errors; in response to providing the first data operation error message to the fourth artificial intelligence model, receiving the executable code corresponding to the second data operation to be performed on the first dataset; and transmitting, based on a second entity identifier, the executable code to an address associated with the second entity.
11. The method of claim 10, wherein the fourth artificial intelligence model comprises a Large Language Model (LLM), and wherein the LLM is trained, the training comprising: obtaining a set of training error messages associated with data operations and a set of historical data operation logs associated with the set of training error messages, wherein each training error message indicates an occurred error associated with a respective data operation, and wherein each historical data operation log of the set of historical data operation logs indicates (i) a computing language that the data operation was written in and (ii) a third data operation that was performed to resolve a respective occurred error; providing the set of training error messages and the set of historical data operation logs associated with the set of training error messages to the LLM during a training routine, the LLM being communicatively coupled to a retrieval component configured to retrieve executable code portions that correspond to historically performed data operations; receiving, from the LLM during the training routine, a set of candidate executable code portions, wherein each candidate executable code portion of the set of candidate executable code portions is associated with an error message characteristic of a respective training error message of the set of training error messages; and in response to receiving the set of candidate executable code portions, providing a message, during the training routine, to the LLM comprising an accuracy value corresponding to each candidate executable code portion of the set of candidate executable code portions.
12. The method of claim 2, wherein the first dataset is associated with a physical data model that is different than that of the physical data model of the second entity.
15. The method of claim 2, wherein the first data operation is a data transfer operation of the first dataset, the data transfer operation being a data transfer from a first data repository associated with the first entity to a second data repository associated with the second entity.
16. A non-transitory, computer-readable medium comprising instructions recorded thereon that, when executed by one or more processors, cause operations comprising: receiving a request to perform a first data operation on a first dataset from a first data source of a first entity, wherein the first data operation (i) uses a logical data model to perform the first data operation on the first dataset and (ii) involves a physical data model of a second entity;
in response to receiving the request, identifying, based on a first dataset description of the first dataset, a first logical data model to be used in connection with performing the first data operation on the first dataset;
determining a first supplemental data structure for the identified first logical data model, wherein the first supplemental data structure is expressed in a standardized language and comprises a first attribute; generating a first mapping, based on the identified first logical data model, for performing the first data operation on the first dataset, wherein the first mapping maps the first attribute of the first supplemental data structure to the physical data model of the second entity;
in response to performing the first data operation on the first dataset that is based on the first mapping, receiving a first data operation error message associated with the second entity that indicates an identified error that occurred during a performance of the first data operation on the first dataset; and transmitting, to the second entity, executable code corresponding to a second data operation to be performed on the first dataset to resolve the identified error.
17. The non-transitory, computer-readable medium of claim 16, wherein identifying the first logical data model further comprises: providing the first dataset description of the first dataset as input to a first artificial intelligence model trained to identify logical data models to perform data operations on datasets; receiving, from the first artificial intelligence model, a ranked set of logical data models, wherein each ranked logical data model of the ranked set of logical data models is ranked based on a confidence value indicating a likelihood that the first dataset uses a respective ranked logical data model of the ranked set of logical data models; and identifying the first logical data model based on a selection of a respective logical data model that satisfies a threshold confidence value from the ranked set of logical data models.
18. The non-transitory, computer-readable medium of claim 16, wherein determining the first supplemental data structure for the identified first logical data model further comprises: providing an identifier associated with the first logical data model as input to a second artificial intelligence model configured to determine supplemental data structures for logical data models; and receiving, from the second artificial intelligence model, the first supplemental data structure for the identified first logical data model, wherein the first supplemental data structure is expressed in the standardized language and comprises the first attribute, and wherein the first attribute comprises a first transformer lineage of the first logical data model.
19. The non-transitory, computer-readable medium of claim 16, wherein generating the first mapping further comprises: receiving a message associated with the physical data model of the second entity; providing (i) the identified first logical data model and (ii) the message associated with the physical data model of the second entity to a third artificial intelligence model trained to generate mappings between logical data models and physical data models, wherein the third artificial intelligence model is communicatively coupled to a retrieval component configured to retrieve supplemental data structures of logical data models; and receiving, from the third artificial intelligence model, the first mapping for performing the first data operation on the first dataset, wherein the first mapping maps the first attribute of the first supplemental data structure to the physical data model of the second entity.
20. The non-transitory, computer-readable medium of claim 16, wherein the instructions further cause operations comprising: providing the first data operation error message to a fourth artificial intelligence model configured to generate code portions that are associated with data operation errors; in response to providing the first data operation error message to the fourth artificial intelligence model, receiving the executable code corresponding to the second data operation to be performed on the first dataset; and transmitting, based on a second entity identifier, the executable code to an address associated with the second entity.
This is a nonstatutory double patenting rejection.
Examiner's Note
The Examiner respectfully requests that Applicants, in preparing responses, fully consider the entirety of the references as potentially teaching all or part of the claimed invention.
It is noted that REFERENCES ARE RELEVANT AS PRIOR ART FOR ALL THEY CONTAIN. "The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain." In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). A reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including non-preferred embodiments (see MPEP § 2123).
The Examiner has cited particular locations in the reference(s) as applied to the claims below for the convenience of the Applicants. Although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claims, typically other passages and figures will apply as well.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5-6, 8, 11, 13-17, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ares (US 2020/0234155).
Regarding claim 1, Ares discloses a system for resolving corrupted datasets transferred to data repositories having differing physical data models using programming language-agnostic data modeling platforms, the system comprising: one or more processors; and a non-transitory, computer-readable medium comprising instructions recorded thereon that, when executed by the one or more processors (Figure 1), cause operations comprising:
receiving a data transfer request to perform a transfer of a dataset from a first data repository to a second data repository, wherein the first data repository is associated with a first physical data model of a first entity, and wherein the second data repository is associated with a second physical data model that is different from the first physical data model ([0070]-[0071], “[0070] Electronic data sources 150 may represent any electronic data storage 150A (e.g., local database, computing devices within an organization, cloud computing systems, third-party data storage systems, and homegrown data repositories). These storages may store customer interaction, system configuration, and interactions and other information related to all computing systems utilized via an organization. For instance, electronic data storage 150A may store data associated with monetary transfers between different branches and/or all teller transactions at a bank. [0071] The electronic data sources 150 may also include various devices configured to transmit data to the analytics server. For instance, the electronic data sources 150 may include ATM machines or other point-of-sale terminals 150B. The ATMS or point-of-sale terminals may include local databases and/or may directly transmit transaction data (e.g., customer information, transaction amount, transaction time) to the analytics server 110. The transmission of transaction data may be done in real-time or in batches on periodic basis. In some configurations, the analytics server 110 may retrieve transaction data at any time from one or more ATMS or point-of-sale terminal” and [0176]);
in response to receiving the data transfer request, identifying, based on a dataset description of the dataset, a first logical data model to be used in connection with performing the transfer of the dataset from the first data repository to the second data repository ([0070]-[0071], [0175]-[0176] and [0219], logical data model is created using various data structures);
determining a first supplemental data structure for the first logical data model, wherein the first supplemental data structure is expressed in a standardized language ([0218]-[0219], “A mental model is a data model (e.g., nodal structure) of a specific problem domain expressed independently of a particular database management product or storage technology but in terms of data structures such as relational tables and columns, object-oriented classes, or XML tags”);
performing the transfer of the dataset from the first data repository to the second data repository using the first supplemental data structure ([0070]-[0071], [0175]-[0176] and [0219]);
in connection with performing the transfer of the dataset, receiving a data transfer error message from a second entity that is associated with the second data repository ([0236], “The analytics server may also use the adapters to validate data. For instance, each data table may define a set of validation rules to determine if the data collected via an adapter is valid. For example, if certain required fields in a table are missing for several items (rows), the adapter may generate a message in the administrative console to inform an administrator of the analytics server that the data collected via a particular adapter is not valid or needs to be reviewed. The analytics server may also generate an automatic message and transmit the collected data (that is purportedly not valid) to the administrator's computer and display a prompt to the administrator requesting a second level review of the collected data”); and
in response to receiving the data transfer error message, transmitting executable code to the second entity that is associated with the second data repository, wherein the executable code corresponds to a data analytic operation to be performed on the dataset to resolve a dataset error ([0236]-[0238]).
Regarding claim 2, Ares discloses a method for resolving corrupted datasets using programming language-agnostic data modeling platforms, the method comprising:
receiving a request to perform a first data operation on a first dataset from a first data source of a first entity ([0070]-[0071], “[0070] Electronic data sources 150 may represent any electronic data storage 150A (e.g., local database, computing devices within an organization, cloud computing systems, third-party data storage systems, and homegrown data repositories). These storages may store customer interaction, system configuration, and interactions and other information related to all computing systems utilized via an organization. For instance, electronic data storage 150A may store data associated with monetary transfers between different branches and/or all teller transactions at a bank. [0071] The electronic data sources 150 may also include various devices configured to transmit data to the analytics server. For instance, the electronic data sources 150 may include ATM machines or other point-of-sale terminals 150B. The ATMS or point-of-sale terminals may include local databases and/or may directly transmit transaction data (e.g., customer information, transaction amount, transaction time) to the analytics server 110. The transmission of transaction data may be done in real-time or in batches on periodic basis. In some configurations, the analytics server 110 may retrieve transaction data at any time from one or more ATMS or point-of-sale terminal” and [0176]);
in response to receiving the request, identifying, based on a first dataset description of the first dataset, a first logical data model to be used in connection with performing the first data operation on the first dataset ([0070]-[0071], [0175]-[0176] and [0219], logical data model is created using various data structures);
determining a first supplemental data structure for the first logical data model, wherein the first supplemental data structure is expressed in a standardized language ([0218]-[0219], “A mental model is a data model (e.g., nodal structure) of a specific problem domain expressed independently of a particular database management product or storage technology but in terms of data structures such as relational tables and columns, object-oriented classes, or XML tags”);
performing the first data operation on the first dataset using the first supplemental data structure ([0070]-[0071], [0175]-[0176] and [0219]);
receiving a first data operation error message that indicates an error that occurred during performance of the first data operation on the first dataset ([0236], “The analytics server may also use the adapters to validate data. For instance, each data table may define a set of validation rules to determine if the data collected via an adapter is valid. For example, if certain required fields in a table are missing for several items (rows), the adapter may generate a message in the administrative console to inform an administrator of the analytics server that the data collected via a particular adapter is not valid or needs to be reviewed. The analytics server may also generate an automatic message and transmit the collected data (that is purportedly not valid) to the administrator's computer and display a prompt to the administrator requesting a second level review of the collected data”); and
transmitting, to a second entity, executable code corresponding to a second data operation to be performed on the first dataset to resolve the error ([0236]-[0238]).
Regarding claim 5, Ares discloses extracting, from the request to perform the first data operation on the first dataset from the first data source of the first entity, an identifier associated with the first dataset ([0013], [0076]-[0077], [0103]); obtaining, based on the identifier, the first dataset from a data repository storing datasets ([0191]-[0192]); and determining the first dataset description of the first dataset based on metadata associated with the first dataset ([0013], [0076]-[0077], metadata).
Regarding claim 6, Ares discloses wherein determining the first supplemental data structure for the first logical data model further comprises: providing an identifier associated with the first logical data model as input to a second artificial intelligence model configured to determine supplemental data structures for logical data models ([0114], [0218]-[0220]); and receiving, from the second artificial intelligence model, the first supplemental data structure for the first logical data model, wherein the first supplemental data structure comprises a first attribute, and wherein the first attribute comprises a first transformer lineage of the first logical data model ([0218]-[0220]).
Regarding claim 8, Ares discloses wherein using the first supplemental data structure to perform the first data operation comprises generating a first mapping, based on the first logical data model, for performing the first data operation on the first dataset, and wherein the first mapping maps a first attribute of the first supplemental data structure to a physical data model of a second entity ([0074]-[0077], map data, [0237]).
Regarding claim 11, Ares discloses providing the first data operation error message to a fourth artificial intelligence model configured to generate code portions that are associated with data operation errors ([0236]-[0238]); in response to providing the first data operation error message to the fourth artificial intelligence model, receiving the executable code corresponding to the second data operation to be performed on the first dataset ([0236]-[0238]); and transmitting, based on a second entity identifier, the executable code to an address associated with the second entity ([0236]-[0238]).
Regarding claim 13, Ares discloses wherein the first dataset is associated with a physical data model that is different than that of the physical data model of the second entity ([0070]-[0071], and [0076]).
Regarding claim 14, Ares discloses wherein the first data operation (i) uses a logical data model to perform the first data operation on the first dataset and (ii) involves a physical data model of a second entity ([0070]-[0071], [0175]-[0176] and [0219]).
Regarding claim 15, Ares discloses wherein the first data operation is a data removal operation, the data removal operation being a data removal of the first dataset from a first data repository associated with the first entity ([0158]-[0159]).
Regarding claim 16, Ares discloses wherein the first data operation is a data transfer operation, the data transfer operation being a data transfer of the first dataset from a first data repository associated with the first entity to a second data repository ([0070]-[0071], [0175]-[0176] and [0219]).
Regarding claim 17, Ares discloses a non-transitory, computer-readable medium comprising instructions recorded thereon that, when executed by one or more processors, cause operations (Figure 1) comprising:
receiving a request to perform a first data operation on a first dataset from a first data source of a first entity ([0070]-[0071], “[0070] Electronic data sources 150 may represent any electronic data storage 150A (e.g., local database, computing devices within an organization, cloud computing systems, third-party data storage systems, and homegrown data repositories). These storages may store customer interaction, system configuration, and interactions and other information related to all computing systems utilized via an organization. For instance, electronic data storage 150A may store data associated with monetary transfers between different branches and/or all teller transactions at a bank. [0071] The electronic data sources 150 may also include various devices configured to transmit data to the analytics server. For instance, the electronic data sources 150 may include ATM machines or other point-of-sale terminals 150B. The ATMS or point-of-sale terminals may include local databases and/or may directly transmit transaction data (e.g., customer information, transaction amount, transaction time) to the analytics server 110. The transmission of transaction data may be done in real-time or in batches on periodic basis. In some configurations, the analytics server 110 may retrieve transaction data at any time from one or more ATMS or point-of-sale terminal” and [0176]), wherein the first data operation (i) uses a logical data model to perform the first data operation on the first dataset and (ii) involves a physical data model of a second entity ([0070]-[0071], [0175]-[0176] and [0219], logical data model is created using various data structures);
in response to receiving the request, identifying, based on a first dataset description of the first dataset, a first logical data model to be used in connection with performing the first data operation on the first dataset ([0070]-[0071], [0175]-[0176] and [0219]);
determining a first supplemental data structure for the first logical data model, wherein the first supplemental data structure is expressed in a standardized language ([0218]-[0219], “A mental model is a data model (e.g., nodal structure) of a specific problem domain expressed independently of a particular database management product or storage technology but in terms of data structures such as relational tables and columns, object-oriented classes, or XML tags”);
in response to performing the first data operation on the first dataset using the first supplemental data structure, receiving a first data operation message associated with the second entity ([0236], “The analytics server may also use the adapters to validate data. For instance, each data table may define a set of validation rules to determine if the data collected via an adapter is valid. For example, if certain required fields in a table are missing for several items (rows), the adapter may generate a message in the administrative console to inform an administrator of the analytics server that the data collected via a particular adapter is not valid or needs to be reviewed. The analytics server may also generate an automatic message and transmit the collected data (that is purportedly not valid) to the administrator's computer and display a prompt to the administrator requesting a second level review of the collected data”); and
transmitting, to the second entity, executable code corresponding to a second data operation to be performed on the first dataset based on the first data operation message ([0236]-[0238]).
Regarding claim 19, Ares discloses wherein determining the first supplemental data structure for the first logical data model further comprises: providing an identifier associated with the first logical data model as input to a second artificial intelligence model configured to determine supplemental data structures for logical data models ([0114], [0218]-[0220]); and receiving, from the second artificial intelligence model, the first supplemental data structure for the first logical data model, wherein the first supplemental data structure comprises a first transformer lineage of the first logical data model ([0114], [0218]-[0220]).
Regarding claim 20, Ares discloses wherein using the first supplemental data structure to perform the first data operation comprises generating a first mapping, based on the first logical data model, for performing the first data operation on the first dataset, wherein the first mapping maps a first attribute of the first supplemental data structure to the physical data model of the second entity ([0074]-[0077], map data, [0237]).
Allowable Subject Matter
Claims 3-4, 7, 9-10, 12 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if the nonstatutory double patenting rejection is overcome and the claims are rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Oberhofer (US 2021/00442358) discloses a method for dynamic data blocking in a database system.
Wong (US 2016/0267082) discloses systems and methods for managing data.
Liu (US 2016/0019289) discloses managing multiple data models over a data storage system.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MERILYN P NGUYEN whose telephone number is 571-272-4026. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kavita Stanley can be reached on (571) 272-8352. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197.
/MERILYN P NGUYEN/ Primary Examiner, Art Unit 2153
March 6, 2026