Prosecution Insights
Last updated: April 19, 2026
Application No. 18/057,874

DYNAMICALLY EXECUTING DATA SOURCE AGNOSTIC DATA PIPELINE CONFIGURATIONS

Final Rejection: §101, §103
Filed: Nov 22, 2022
Examiner: RIGGINS, ARI FAITH COLEMA
Art Unit: 2197
Tech Center: 2100 — Computer Architecture & Software
Assignee: Chime Financial Inc.
OA Round: 2 (Final)
Grant Probability: 0% (At Risk)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 1 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 3y 3m typical timeline; 38 applications currently pending
Total Applications: 39 across all art units (career history)

Statute-Specific Performance

§101: 27.8% (-12.2% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 1 resolved case.

Office Action

§101, §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action is in response to claims filed on 10/23/2025. Claims 1-20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention recites a judicial exception (an abstract idea), is directed to that judicial exception because it has not been integrated into a practical application, and the claims do not recite significantly more than the judicial exception. The Examiner has evaluated the claims under the framework provided in the 2019 Patent Eligibility Guidance published in the Federal Register on 01/07/2019 and has provided the analysis below.

Step 1: Claims 1-9 are directed to a method and fall within the statutory category of process. Claims 10-15 are directed to a non-transitory computer-readable medium and fall within the statutory category of article of manufacture. Claims 16-20 are directed to a system and fall within the statutory category of machine. Therefore, "Are the claims to a process, machine, manufacture or composition of matter?" Yes.

To evaluate the Step 2A inquiry "Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?", we must determine, at Step 2A Prong 1, whether the claim recites a law of nature, a natural phenomenon, or an abstract idea, and further whether the claim recites additional elements that integrate the judicial exception into a practical application.
Step 2A Prong 1: Claims 1, 10, and 16: The limitation of "identify(ing), from a data pipeline job configuration, a set of instructions for a multi-service data pipeline framework, wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework to represent one or more requests for a first data source tagged with a first data source identifier and one or more additional requests for a second data source tagged with a second data source identifier", as drafted, is a process that, but for the recitation of generic computing components, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe a data pipeline job configuration and, based on these observations, can mentally identify a set of instructions for a multi-service data pipeline framework.

Further, the limitations of "utilize(utilizing) the first data source identifier for the first data source to select a first connector for the first data source;" and "utilize(utilizing) the second data source identifier for the second data source to select a second connector for the second data source", as drafted, are processes that, but for the recitation of generic computing components, under their broadest reasonable interpretation, cover performance of the limitations in the mind. For example, a person can observe an identifier and, based on these observations, can mentally select a connector for a data source.
Further, the limitations of "mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector;" and "mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector", as drafted, are processes that, but for the recitation of generic computing components, under their broadest reasonable interpretation, cover performance of the limitations in the mind. For example, a person can mentally map one or more requests to native code commands. This may also be done with pencil and paper. Therefore, yes, claims 1, 10, and 16 recite a judicial exception.

Step 2A Prong 2: Claims 1, 10, and 16: The judicial exception is not integrated into a practical application. In particular, the claims recite the following additional elements: "A computer-implemented method comprising:", "A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause a computing device to:", and "A system comprising: at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor, cause the system to:". These are merely recitations of using a computer as a tool to apply the abstract idea (MPEP § 2106.05(f)), which does not integrate a judicial exception into a practical application.
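For readers unfamiliar with the claim language being analyzed, the selection and mapping limitations at issue (a data source identifier selecting a connector, which maps unified-format requests to native commands) can be sketched roughly as follows. This is an illustrative sketch only; the connector names, request fields, and command shapes are hypothetical and are not drawn from the application's actual implementation.

```python
# Hypothetical sketch of the claimed flow: each request in the job
# configuration is in a unified format and tagged with a data source
# identifier; the identifier selects a connector; the connector maps
# the unified request to a native command for that data source.

# Registry of connectors keyed by data source identifier (illustrative names).
CONNECTORS = {
    "postgres": lambda req: f"SELECT * FROM {req['table']};",           # SQL text
    "dynamodb": lambda req: {"TableName": req["table"], "op": "Scan"},  # API call shape
}

def execute_job(job_config):
    """Resolve each tagged unified request to its connector's native command."""
    results = []
    for request in job_config["requests"]:   # unified request format
        source_id = request["source"]        # data source identifier tag
        connector = CONNECTORS[source_id]    # identifier selects the connector
        native_command = connector(request)  # map unified -> native
        results.append((source_id, native_command))
    return results

job = {"requests": [
    {"source": "postgres", "table": "users"},
    {"source": "dynamodb", "table": "events"},
]}
print(execute_job(job))
```

The point of contention in the rejection is whether this kind of dispatch-and-translate step is a mental process; the sketch simply makes the data flow concrete.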
Further, the claims recite the following additional elements: "and read(ing) or write(writing) data in relation to the first data source based on the one or more requests", "and reading or writing the data in relation to the first data source utilizing the mapped native code commands", "and read(ing) or write(writing) data additional data in relation to the second data source based on the one or more additional requests", and "and reading or writing the additional data in relation to the second data source utilizing the mapped additional native code commands". These are merely recitations of data gathering and data storage, i.e., insignificant extra-solution activity (see MPEP § 2106.05(g)), which does not integrate a judicial exception into a practical application.

Step 2B: Claims 1, 10, and 16: The claims do not include additional elements, alone or in combination, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than using generic computing components as a tool to apply the abstract idea and insignificant extra-solution activity, which do not amount to significantly more than the abstract idea. Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art: "The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network … iv. Storing and retrieving information in memory" [MPEP § 2106.05(d)(II)]. Therefore, "Do the claims recite additional elements that amount to significantly more than the judicial exception?"
No; these additional elements, alone or in combination, do not amount to significantly more than the judicial exception. Having concluded the analysis within the provided framework, claims 1, 10, and 16 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 2, 11, and 17: the claims recite the additional element "wherein the first data source comprises an input data source", which is merely a recitation of technological environment/field of use (see MPEP § 2106.05(h)) and does not integrate a judicial exception into a practical application. Further, the claims recite the additional element "read(ing) the data from the input data source utilizing the native code commands determined from the one or more requests", which is merely a recitation of data gathering and data storage, i.e., insignificant extra-solution activity (see MPEP § 2106.05(g)), which does not integrate a judicial exception into a practical application. Claims 2, 11, and 17 do not recite any further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claims 2, 11, and 17 also fail Step 2A Prong 2 (the claims are directed to the judicial exception because it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art: "The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network … iv. Storing and retrieving information in memory" [MPEP § 2106.05(d)(II)].
Therefore, claims 2, 11, and 17 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 3 and 12: the claims recite the additional element "write(writing) the data from the first data source to a target data source identified from the data pipeline job configuration", which is merely a recitation of data storage, i.e., insignificant extra-solution activity (see MPEP § 2106.05(g)), which does not integrate a judicial exception into a practical application. Claims 3 and 12 do not recite any further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claims 3 and 12 also fail Step 2A Prong 2 (the claims are directed to the judicial exception because it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art: "The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network … iv. Storing and retrieving information in memory" [MPEP § 2106.05(d)(II)]. Therefore, claims 3 and 12 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 4 and 13: the claims recite the additional abstract-idea recitation "modify(ing) the data from the input data source utilizing the native code commands determined from the one or more requests", which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind.
For example, a person can observe and evaluate one or more requests to determine native code commands and, based on this evaluation, can mentally modify data from an input data source. This may also be done with pencil and paper. Further, the claims recite the additional element "wherein the first data source comprises an input data source", which is merely a recitation of technological environment/field of use (see MPEP § 2106.05(h)) and does not integrate a judicial exception into a practical application. Claims 4 and 13 do not recite any further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claims 4 and 13 also fail Step 2A Prong 2 (the claims are directed to the judicial exception because it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Therefore, claims 4 and 13 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 5, 14, and 18: the claims recite the additional element "wherein the first data source comprises a target data source", which is merely a recitation of technological environment/field of use (see MPEP § 2106.05(h)) and does not integrate a judicial exception into a practical application. Further, the claims recite the additional element "writing the data, identified from the data pipeline job configuration, to the target data source using the native code commands determined from the one or more requests", which is merely a recitation of data storage, i.e., insignificant extra-solution activity (see MPEP § 2106.05(g)), which does not integrate a judicial exception into a practical application.
Claims 5, 14, and 18 do not recite any further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claims 5, 14, and 18 also fail Step 2A Prong 2 (the claims are directed to the judicial exception because it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Further, the insignificant extra-solution activity is well-understood, routine, and conventional in the art: "The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. i. Receiving or transmitting data over a network … iv. Storing and retrieving information in memory" [MPEP § 2106.05(d)(II)]. Therefore, claims 5, 14, and 18 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claim 6: the claim recites the additional element "wherein the data pipeline job configuration comprises one or more tags for the first connector of the first data source, scheduling settings, monitoring requests, alerting requests, watermarking requests, access permission settings, or output file identifiers", which is merely a recitation of technological environment/field of use (see MPEP § 2106.05(h)) and does not integrate a judicial exception into a practical application. Claim 6 does not recite any further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claim 6 also fails Step 2A Prong 2 (the claim is directed to the judicial exception because it has not been integrated into a practical application) and fails Step 2B as not amounting to significantly more.
Therefore, claim 6 does not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claim 7: the claim recites the additional abstract-idea recitation "and further comprising identifying a data source request type, wherein the data source request type comprises an input request or an output request", which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a person can observe and evaluate a data source request type and, based on this evaluation, can identify the request type and whether it comprises an input request or an output request. Further, the claim recites the additional element "wherein the first data source identifier indicates a selection or name of the first data source", which is merely a recitation of technological environment/field of use (see MPEP § 2106.05(h)) and does not integrate a judicial exception into a practical application. Claim 7 does not recite any further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claim 7 also fails Step 2A Prong 2 (the claim is directed to the judicial exception because it has not been integrated into a practical application) and fails Step 2B as not amounting to significantly more. Therefore, claim 7 does not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 8, 15, and 19: the claims recite the additional abstract-idea recitation "map(ping) the one or more requests to the native code commands for the first data source through the first connector by converting the one or more requests to a programming language recognized by a computer network of the first data source", which, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind.
For example, a person can mentally convert one or more requests to a programming language recognized by a computer network and, through this conversion, can mentally map requests to native code commands. This may also be done with pencil and paper. Claims 8, 15, and 19 do not recite any further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claims 8, 15, and 19 also fail Step 2A Prong 2 (the claims are directed to the judicial exception because it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Therefore, claims 8, 15, and 19 do not recite patent eligible subject matter under 35 U.S.C. § 101.

With regard to claims 9 and 20: the claims recite the additional element "wherein: the one or more requests comprise one or more graphical user interface selectable options", which is merely a recitation of technological environment/field of use (see MPEP § 2106.05(h)) and does not integrate a judicial exception into a practical application. Claims 9 and 20 do not recite any further additional elements, and for the same reasons as above regarding integration into a practical application and whether the additional elements amount to significantly more, claims 9 and 20 also fail Step 2A Prong 2 (the claims are directed to the judicial exception because it has not been integrated into a practical application) and fail Step 2B as not amounting to significantly more. Therefore, claims 9 and 20 do not recite patent eligible subject matter under 35 U.S.C. § 101.

Therefore, claims 1-20 do not recite patent eligible subject matter under 35 U.S.C. § 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over Raj (US 2024/0160422 A1) in view of Bendelac (US 2022/0269552 A1) further in view of Cheng (US 2008/0016504 A1).

With regard to claim 1, Raj teaches:

A computer-implemented method comprising: identifying, from a data pipeline job configuration, "In some cases, the application computing systems 108 may host one or more services configured facilitate operations requested through one or more API calls, such as data retrieval and/or initiating processing of specified functionality (data pipeline jobs)" [Raj ¶ 25].

a set of instructions for a multi-service data pipeline framework, "Aspects of the disclosure relate to computer systems that provide effective, efficient, scalable, and convenient ways of securely and uniformly managing how internal computer systems exchange information with external computer systems to provide and/or support different products and services offered by an organization (e.g., a financial institution, and the like)" [Raj ¶ 6].

to represent one or more requests for a first data source "In some cases, the application computing systems 108 may host one or more services configured facilitate operations requested through one or more API calls, such as data retrieval and/or initiating processing of specified functionality" [Raj ¶ 25].
… a first data source identifier "At 310, the parsing engine 210 may parse or otherwise analyze one or more source files 205 containing source code to be translated. The parsing engine 210 may identify a language or a dialect of the language and may generate an input file for the transformation engine 230" [Raj ¶ 43]. "At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term" [Raj ¶ 49].

… a second data source "In some cases, the application computing systems 108 may host one or more services configured facilitate operations requested through one or more API calls, such as data retrieval and/or initiating processing of specified functionality. In some cases, the source computing system 122 and/or the destination computing system 123 may be configured to communicate with one or more of the application computing systems 108 such as via direct communications and/or API function calls and the services" [Raj ¶ 25]. "Aspects of the disclosure relate to computer systems that provide effective, efficient, scalable, and convenient ways of securely and uniformly managing how internal computer systems exchange information with external computer systems to provide and/or support different products and services offered by an organization (e.g., a financial institution, and the like)" [Raj ¶ 6; Examiner notes any of the computing systems and services can be considered a second data source].

… a second data source identifier; "At 310, the parsing engine 210 may parse or otherwise analyze one or more source files 205 containing source code to be translated. The parsing engine 210 may identify a language or a dialect of the language and may generate an input file for the transformation engine 230" [Raj ¶ 43].
"At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term" [Raj ¶ 49]. "The transpilation system may provide an automated process to move programmed functionalities between technology platforms having disparate programming languages or different dialects of programming languages" [Raj ¶ 44; Examiner notes Raj describes its management process such that it applies to each data source].

reading or writing data in relation to the first data source based on the one or more requests "In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services" [Raj ¶ 29]. "In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108" [Raj ¶ 27].

by: utilizing the first data source identifier for the first data source to select a first connector for the first data source; "At 330, the transformation engine 230 may process the source AST file 214 based on information received from the NLP 222 of the language learning engine 220.
For example, the language learning engine 220 may provide, based on an indication of the source language or dialect and an indication of the destination language or dialect, identification of words and/or phrases indicative of a particular language or dialect used for programming a particular application and/or for queries within a particular data repository environment (first data source)" [Raj ¶ 43]. "In an illustrative example, a transpilation mapping (connector) from programming language A to programming language B and a transpilation mapping (connector) from programming language A to programming language C may be learned by the transpilation system" [Raj ¶ 44].

and reading or writing the data in relation to the first data source "In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services" [Raj ¶ 29].

and reading or writing additional data in relation to the second data source based on the one or more additional requests "In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services" [Raj ¶ 29]. "In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services.
For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108" [Raj ¶ 27].

by: utilizing the second data source identifier for the second data source to select a second connector for the second data source; "At 330, the transformation engine 230 may process the source AST file 214 based on information received from the NLP 222 of the language learning engine 220. For example, the language learning engine 220 may provide, based on an indication of the source language or dialect and an indication of the destination language or dialect, identification of words and/or phrases indicative of a particular language or dialect used for programming a particular application and/or for queries within a particular data repository environment (second data source)" [Raj ¶ 43]. "In an illustrative example, a transpilation mapping (connector) from programming language A to programming language B and a transpilation mapping (connector) from programming language A to programming language C may be learned by the transpilation system" [Raj ¶ 44].

and reading or writing the additional data in relation to the second data source "In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services" [Raj ¶ 29].
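The transpilation-mapping concept that the rejection reads onto the claimed "connector" (Raj ¶ 44's learned mapping from language A to language B, and from A to C) can be illustrated with a minimal sketch. The term names and mappings below are hypothetical and are not taken from Raj's disclosure.

```python
# Illustrative sketch of a learned per-language-pair mapping, applied
# term by term to identified source terms (the "connector" analogy).

# Hypothetical term mappings from language A to target languages B and C.
MAPPINGS = {
    ("A", "B"): {"PRINT": "console.log", "LEN": "length"},
    ("A", "C"): {"PRINT": "printf", "LEN": "strlen"},
}

def transpile(terms, source_lang, target_lang):
    """Map each identified source term to the target language; pass
    unknown terms through unchanged."""
    mapping = MAPPINGS[(source_lang, target_lang)]
    return [mapping.get(t, t) for t in terms]

print(transpile(["PRINT", "LEN"], "A", "B"))
```

Whether such a per-pair mapping actually corresponds to the claimed per-data-source connector is the crux of the applicant's likely rebuttal.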
Raj fails to teach: wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework … tagged with a first data source identifier and one or more additional requests for a second data source tagged with a second data source identifier; mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector; mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector.

However, Bendelac teaches:

wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework "An example method includes: receiving, from a client application, a request for data for at least one entity, wherein the request includes a first qualified identifier that includes a first system tenant qualifier and a first local identifier, wherein the first system tenant qualifier identifies a first system tenant in a multi-system tenant landscape, the first local identifier identifies an entity instance of a first entity in the first system tenant, and the request is based on a unified data model that represents commonality of respective data models used by multiple system tenants in the multi-system tenant landscape …" [Bendelac ¶ 3].

tagged with a first data source identifier "The request for data can include one or more qualified identifiers that each include a local identifier and a system tenant prefix (or other type of qualifier) corresponding to a particular data system tenant 505" [Bendelac ¶ 46].

and one or more additional requests for a second data source "The request can be for data for more than entity and multiple sub-requests for entity data can be identified in the request. Determining a routing policy for each sub-request can include determining a target system tenant for each sub-request.
The sub-requests can be provided to the respective target system tenants" [Bendelac ¶ 4]. "For instance, the system 200 includes a first data system tenant 202, a second data system tenant 204, and a third data system tenant 206 in a multi-system landscape 208 of a particular organization. An organization may use multiple different systems (and/or system tenants) for managing the organization, with different systems or system tenants providing certain features that may be specialized for different aspects of the organization" [Bendelac ¶ 27].

tagged with a second data source identifier; "The request for data can include one or more qualified identifiers that each include a local identifier and a system tenant prefix (or other type of qualifier) corresponding to a particular data system tenant 505" [Bendelac ¶ 46].

mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector; "Before providing the request to the target system, the request can be translated from the unified model into a target data model of the target system tenant" [Bendelac ¶ 4]. "Each data system or software product may have been engineered by a different team of engineers, and may have different APIs and/or different data models, even for similar entities" [Bendelac ¶ 30].

mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector "Before providing the request to the target system, the request can be translated from the unified model into a target data model of the target system tenant" [Bendelac ¶ 4]. "Each data system tenant 505 can include a data layer 510 that provides access (e.g., using an API 512) to a data repository 514 stored and/or managed by the data system tenant 505" [Bendelac ¶ 44].
Bendelac is considered to be analogous to the claimed invention because it is in the same field of interprogram communication. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raj to incorporate the teachings of Bendelac and include wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework … tagged with a first data source identifier and one or more additional requests for a second data source tagged with a second data source identifier; mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector; mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector. Doing so would allow code commands to be mapped efficiently between different data sources through the use of a unified API. “Rather than having each developer and each client application, such as a client application 303, deal with various complexities of accessing data from multiple systems in a multi-system landscape 304, the unified API layer 302 can provide an abstraction layer to client applications to shield them from complexities of understanding differences between multiple systems or tenants …” [Bendelac ¶ 37]. Raj in view of Bendelac fails to explicitly teach mapping the one or more requests from the unified request format to native code commands for the first data source; and reading or writing the data in relation to the first data source utilizing the mapped native code commands … mapping the one or more additional requests from the unified request format to additional native code commands for the second data source, and reading or writing the additional data in relation to the second data source utilizing the mapped additional native code commands. 
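For orientation only, the connector-selection and unified-to-native mapping concept at issue in the combination of Raj and Bendelac can be sketched as follows. This is a minimal illustration, not code from the application or from any cited reference; every name in it (UnifiedRequest, Connector, dispatch, the "pg" and "es" identifiers) is hypothetical.

```python
# Illustrative sketch only: a minimal "connector" registry that routes
# requests tagged with a data source identifier to a per-source mapper,
# which rewrites a unified request into a native command string.
# All names are hypothetical and appear in no cited reference.
from dataclasses import dataclass

@dataclass
class UnifiedRequest:
    source_id: str   # the data source identifier tag, e.g. "pg" or "es"
    op: str          # "read" or "write"
    entity: str
    key: str

class Connector:
    """Maps unified-format requests to a native command for one data source."""
    def map_request(self, req: UnifiedRequest) -> str:
        raise NotImplementedError

class SqlConnector(Connector):
    def map_request(self, req: UnifiedRequest) -> str:
        return f"SELECT * FROM {req.entity} WHERE id = '{req.key}'"

class RestConnector(Connector):
    def map_request(self, req: UnifiedRequest) -> str:
        return f"GET /{req.entity}/{req.key}"

# Registry keyed by data source identifier; the tag on each request
# selects which connector performs the unified-to-native mapping.
CONNECTORS: dict[str, Connector] = {"pg": SqlConnector(), "es": RestConnector()}

def dispatch(req: UnifiedRequest) -> str:
    return CONNECTORS[req.source_id].map_request(req)

print(dispatch(UnifiedRequest("pg", "read", "users", "10")))
# SELECT * FROM users WHERE id = '10'
print(dispatch(UnifiedRequest("es", "read", "users", "10")))
# GET /users/10
```

The registry keyed by identifier is the analogue of selecting a first or second connector by the tag carried on each request; the two `map_request` implementations stand in for the per-source translation into native commands.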
However, Cheng teaches: mapping the one or more requests from the unified request format to native code commands for the first data source “During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16]. and reading or writing the data in relation to the first data source utilizing the mapped native code commands “In one embodiment, during operation of the resulting client-server application, the native code client application sends or receives data to/from the server … The server application runs a different code such as Java. The client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 35]. mapping the one or more additional requests from the unified request format to additional native code commands for the second data source “During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16]. and reading or writing the additional data in relation to the second data source utilizing the mapped additional native code commands. 
“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16]. Cheng is considered to be analogous to the claimed invention because it is in the same field of interprogram communication. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raj in view of Bendelac to incorporate the teachings of Cheng and include mapping the one or more requests from the unified request format to native code commands for the first data source; and reading or writing the data in relation to the first data source utilizing the mapped native code commands … mapping the one or more additional requests from the unified request format to additional native code commands for the second data source, and reading or writing the additional data in relation to the second data source utilizing the mapped additional native code commands. Doing so would allow each data source to operate using its native commands. “The client application is code native to the mobile device (such as assembly code for ARM, MIP, or 80x86 processor) and is executed by the processor of the mobile device. The formats of the application code objects and related messages are represented in the native language to that device (Java, Brew, or MSC#, for example). The server application runs a different code such as Java” [Cheng ¶ 35]. With regard to claim 2, Raj in view of Bendelac in view of Cheng teaches the computer-implemented method of claim 1, as referenced above. 
Raj further teaches: wherein the first data source comprises an input data source “In one or more arrangements, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 (input data source), the destination computing system 123, the client computing system 120, the user computing devices 110, and/or the other devices/systems in the computing environment 100 may be any type of computing device capable of receiving input via a user interface, and communicating the received input to one or more other computing devices in the computing environment 100” [Raj ¶ 30]. and further comprising reading the data from the input data source “In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27]. Raj in view of Bendelac fails to explicitly teach utilizing the native code commands determined from the one or more requests. However, Cheng teaches utilizing the native code commands determined from the one or more requests. “During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. 
Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16]. With regard to claim 3, Raj in view of Bendelac in view of Cheng teaches the computer-implemented method of claim 1, as referenced above. Raj further teaches: further comprising writing the data from the first data source to a target data source identified from the data pipeline job configuration. “In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27]. “In some cases, the system 200 may be used to perform a method 300 of code migration from a first computing platform (e.g., a source computing platform) to a second computing platform (e.g., a destination computing platform, a target computing platform, or the like)” [Raj ¶ 39]. With regard to claim 4, Raj in view of Bendelac in view of Cheng teaches the computer-implemented method of claim 1, as referenced above. 
Raj further teaches: wherein the first data source comprises an input data source “In one or more arrangements, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 (input data source), the destination computing system 123, the client computing system 120, the user computing devices 110, and/or the other devices/systems in the computing environment 100 may be any type of computing device capable of receiving input via a user interface, and communicating the received input to one or more other computing devices in the computing environment 100” [Raj ¶ 30]. Raj fails to explicitly teach and further comprising modifying the data from the input data source utilizing the native code commands determined from the one or more requests. However, Bendelac teaches further comprising modifying the data from the input data source utilizing the native code commands determined from the one or more requests. “The unified API layer 606 can perform various translations when translating the query 604 to the translated query 612. The system tenant prefix in the qualified identifier 610 is generally not needed (nor understood) by the system tenant 611, so the unified API layer 606 can remove (e.g., strip off) the system tenant prefix and only include the local identifier portion of the qualified identifier 610 in the translated query 612 (as illustrated by a local identifier value '10' 616 included in the translated query 612)” [Bendelac ¶ 67]. Raj in view of Bendelac fails to explicitly teach utilizing the native code commands. However, Cheng teaches utilizing the native code commands “During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. 
Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16]. With regard to claim 5, Raj in view of Bendelac in view of Cheng teaches the computer-implemented method of claim 1, as referenced above. Raj further teaches: wherein the first data source comprises a target data source “In some cases, the system 200 may be used to perform a method 300 of code migration from a first computing platform (e.g., a source computing platform) to a second computing platform (e.g., a destination computing platform, a target computing platform, or the like). For example, a data repository may be migrated from a source computing platform to a destination computing platform based on a change in database vendors for the enterprise organization, an update to an existing data repository product, an update to a new product version or new product that supports a different programming language, and/or the like” [Raj ¶ 39]. “At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term” [Raj ¶ 49]. “The system of claim 1, wherein the instructions cause the transpilation platform to automatically train the ML transpilation model based on documentation describing operations of the source programming language and the target programming language” [Raj Claim 5]. and further comprising writing the data, identified from the data pipeline job configuration, to the target data source using the native code commands determined from the one or more requests. 
“In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27]. “In some cases, the system 200 may be used to perform a method 300 of code migration from a first computing platform (e.g., a source computing platform) to a second computing platform (e.g., a destination computing platform, a target computing platform, or the like)” [Raj ¶ 39]. Raj in view of Bendelac fails to explicitly teach utilizing the native code commands. However, Cheng teaches utilizing the native code commands “During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16]. With regard to claim 8, Raj in view of Bendelac in view of Cheng teaches the computer-implemented method of claim 1, as referenced above. Raj further teaches by converting the one or more requests to a programming language recognized by a computer network of the first data source. 
“At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term” [Raj ¶ 49]. Raj fails to explicitly teach further comprising mapping the one or more requests to the native code commands for the first data source through the first connector. However, Bendelac further teaches further comprising mapping the one or more requests to the native code commands for the first data source through the first connector “Before providing the request to the target system, the request can be translated from the unified model into a target data model of the target system tenant” [Bendelac ¶ 4]. “Each data system or software product may have been engineered by a different team of engineers, and may have different APIs and/or different data models, even for similar entities” [Bendelac ¶ 30]. Raj in view of Bendelac fails to explicitly teach further comprising mapping the one or more requests to the native code commands for the first data source. However, Cheng teaches further comprising mapping the one or more requests to the native code commands for the first data source “During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16]. With regard to claim 9, Raj in view of Bendelac in view of Cheng teaches the computer-implemented method of claim 1, as referenced above. 
Raj further teaches wherein: the one or more requests comprise one or more graphical user interface selectable options. “In one or more arrangements, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122, the destination computing system 123, the client computing system 120, the user computing devices 110, and/or the other devices/systems in the computing environment 100 may be any type of computing device capable of receiving input via a user interface, and communicating the received input to one or more other computing devices in the computing environment 100” [Raj ¶ 30]. “As used throughout this disclosure, computer-executable "software and data" can include one or more: algorithms, applications, application program interfaces (APIs), attachments, big data, daemons, emails, encryptions, databases, datasets, drivers, data structures, file systems or distributed file systems, firmware, graphical user interfaces, images, instructions, machine learning (e.g., supervised, semi-supervised, reinforcement, and unsupervised), middleware, modules, objects, operating systems, processes, protocols, programs, scripts, tools, and utilities” [Raj ¶ 17]. Raj fails to explicitly teach one or more graphical user interface selectable options. However, Bendelac teaches one or more graphical user interface selectable options. “The GUI 554 may comprise a plurality of customizable frames or views having interactive fields, pull-down lists, and buttons operated by the user. The GUI 554 contemplates any suitable graphical user interface, such as a combination of a generic web browser, intelligent engine, and command line interface (CLI) that processes information and efficiently presents the results to the user visually” [Bendelac ¶ 60]. 
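Cheng's repeatedly quoted flow (translate data to a language-neutral format, encapsulate it in a message, and translate it back into the receiver's native representation) can be sketched as follows. This is an illustrative approximation only, using JSON as a stand-in for the neutral format; the function names and message framing are hypothetical and come from neither Cheng nor the application.

```python
# Illustrative sketch only: translate data to a language-neutral format
# (JSON here), encapsulate it in a framed message, and translate it back
# into the receiver's native representation on arrival.
# All names and the framing scheme are hypothetical.
import json

def client_send(payload: dict) -> bytes:
    # Client side: translate native data to the neutral format and
    # encapsulate it in a length-prefixed message envelope.
    neutral = json.dumps(payload)
    return f"MSG|{len(neutral)}|{neutral}".encode("utf-8")

def server_receive(message: bytes) -> dict:
    # Server side: unwrap the envelope, check the frame, and translate
    # the neutral representation back into native data structures.
    _, length, body = message.decode("utf-8").split("|", 2)
    assert len(body) == int(length), "truncated message"
    return json.loads(body)

roundtrip = server_receive(client_send({"entity": "users", "id": 10}))
print(roundtrip)  # {'entity': 'users', 'id': 10}
```

The round trip through a neutral intermediate is what lets the two endpoints run different native code while still exchanging the same data, which is the abstraction the rejection attributes to Cheng.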
With regard to claim 10, Raj teaches: A non-transitory computer-readable medium storing instructions that, when executed by at least one processor, cause a computing device to: “…memory storing computer-readable instructions that, when executed by the at least one processor, cause the transpilation platform to:” [Raj Claim 1]. identify, from a data pipeline job configuration, “In some cases, the application computing systems 108 may host one or more services configured facilitate operations requested through one or more API calls, such as data retrieval and/or initiating processing of specified functionality (data pipeline jobs)” [Raj ¶ 25]. a set of instructions for a multi-service data pipeline framework, “Aspects of the disclosure relate to computer systems that provide effective, efficient, scalable, and convenient ways of securely and uniformly managing how internal computer systems exchange information with external computer systems to provide and/or support different products and services offered by an organization (e.g., a financial institution, and the like)” [Raj ¶ 6]. to represent one or more requests for a first data source “In some cases, the application computing systems 108 may host one or more services configured facilitate operations requested through one or more API calls, such as data retrieval and/or initiating processing of specified functionality” [Raj ¶ 25]. … a first data source identifier “At 310, the parsing engine 210 may parse or otherwise analyze one or more source files 205 containing source code to be translated. The parsing engine 210 may identify a language or a dialect of the language and may generate an input file for the transformation engine 230” [Raj ¶ 43]. 
“At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term” [Raj ¶ 49]. … a second data source “In some cases, the application computing systems 108 may host one or more services configured facilitate operations requested through one or more API calls, such as data retrieval and/or initiating processing of specified functionality. In some cases, the source computing system 122 and/or the destination computing system 123 may be configured to communicate with one or more of the application computing systems 108 such as via direct communications and/or API function calls and the services” [Raj ¶ 25]. “Aspects of the disclosure relate to computer systems that provide effective, efficient, scalable, and convenient ways of securely and uniformly managing how internal computer systems exchange information with external computer systems to provide and/or support different products and services offered by an organization (e.g., a financial institution, and the like)” [Raj ¶ 6 Examiner notes any of the computing systems and services can be considered a second data source]. … a second data source identifier; “At 310, the parsing engine 210 may parse or otherwise analyze one or more source files 205 containing source code to be translated. The parsing engine 210 may identify a language or a dialect of the language and may generate an input file for the transformation engine 230” [Raj ¶ 43]. “At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term” [Raj ¶ 49]. 
“The transpilation system may provide an automated process to move programmed functionalities between technology platforms having disparate programming languages or different dialects of programming languages” [Raj ¶ 44; Examiner notes that Raj describes its management process such that it applies to each data source]. read or write data in relation to the first data source based on the one or more requests “In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services” [Raj ¶ 29]. “In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27]. by: utilizing the first data source identifier for the first data source to select a first connector for the first data source; “At 330, the transformation engine 230 may process the source AST file 214 based on information received from the NLP 222 of the language learning engine 220. For example, the language learning engine 220 may provide, based on an indication of the source language or dialect and an indication of the destination language or dialect, identification of words and/or phrases indicative of a particular language or dialect used for programming a particular application and/or for queries within a particular data repository environment (first data source)” [Raj ¶ 43]. 
“In an illustrative example, a transpilation mapping (connector) from programming language A to programming language B and a transpilation mapping (connector) from programming language A to programming language C may be learned by the transpilation system” [Raj ¶ 44]. and reading or writing the data in relation to the first data source “In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services” [Raj ¶ 29]. and read or write additional data in relation to the second data source based on the one or more additional requests “In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services” [Raj ¶ 29]. “In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27]. by: utilizing the second data source identifier for the second data source to select a second connector for the second data source; “At 330, the transformation engine 230 may process the source AST file 214 based on information received from the NLP 222 of the language learning engine 220. 
For example, the language learning engine 220 may provide, based on an indication of the source language or dialect and an indication of the destination language or dialect, identification of words and/or phrases indicative of a particular language or dialect used for programming a particular application and/or for queries within a particular data repository environment (second data source)” [Raj ¶ 43]. “In an illustrative example, a transpilation mapping (connector) from programming language A to programming language B and a transpilation mapping (connector) from programming language A to programming language C may be learned by the transpilation system” [Raj ¶ 44]. and reading or writing the additional data in relation to the second data source “In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services” [Raj ¶ 29]. Raj fails to teach wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework … tagged with a first data source identifier and one or more additional requests for a second data source tagged with a second data source identifier; mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector; mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector. 
However, Bendelac teaches: wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework “An example method includes: receiving, from a client application, a request for data for at least one entity, wherein the request includes a first qualified identifier that includes a first system tenant qualifier and a first local identifier, wherein the first system tenant qualifier identifies a first system tenant in a multi-system tenant landscape, the first local identifier identifies an entity instance of a first entity in the first system tenant, and the request is based on a unified data model that represents commonality of respective data models used by multiple system tenants in the multi-system tenant landscape …” [Bendelac ¶ 3]. tagged with a first data source identifier “The request for data can include one or more qualified identifiers that each include a local identifier and a system tenant prefix (or other type of qualifier) corresponding to a particular data system tenant 505” [Bendelac ¶ 46]. and one or more additional requests for a second data source “The request can be for data for more than entity and multiple sub-requests for entity data can be identified in the request. Determining a routing policy for each sub-request can include determining a target system tenant for each sub-request. The sub-requests can be provided to the respective target system tenants” [Bendelac ¶ 4]. “For instance, the system 200 includes a first data system tenant 202, a second data system tenant 204, and a third data system tenant 206 in a multi-system landscape 208 of a particular organization. An organization may use multiple different systems (and/or system tenants) for managing the organization, with different systems or system tenants providing certain features that may be specialized for different aspects of the organization” [Bendelac ¶ 27]. 
tagged with a second data source identifier; “The request for data can include one or more qualified identifiers that each include a local identifier and a system tenant prefix (or other type of qualifier) corresponding to a particular data system tenant 505” [Bendelac ¶ 46]. mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector; “Before providing the request to the target system, the request can be translated from the unified model into a target data model of the target system tenant” [Bendelac ¶ 4]. “Each data system or software product may have been engineered by a different team of engineers, and may have different APIs and/or different data models, even for similar entities” [Bendelac ¶ 30]. mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector “Before providing the request to the target system, the request can be translated from the unified model into a target data model of the target system tenant” [Bendelac ¶ 4]. “Each data system tenant 505 can include a data layer 510 that provides access (e.g., using an API 512) to a data repository 514 stored and/or managed by the data system tenant 505” [Bendelac ¶ 44]. Bendelac is considered to be analogous to the claimed invention because it is in the same field of interprogram communication. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raj to incorporate the teachings of Bendelac and include wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework … tagged with a first data source identifier and one or more additional requests for a second data source tagged with a second data source identifier; mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector; mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector. Doing so would allow code commands to be mapped efficiently between different data sources through the use of a unified API. “Rather than having each developer and each client application, such as a client application 303, deal with various complexities of accessing data from multiple systems in a multi-system landscape 304, the unified API layer 302 can provide an abstraction layer to client applications to shield them from complexities of understanding differences between multiple systems or tenants …” [Bendelac ¶ 37]. Raj in view of Bendelac fails to explicitly teach mapping the one or more requests from the unified request format to native code commands for the first data source; and reading or writing the data in relation to the first data source utilizing the mapped native code commands … mapping the one or more additional requests from the unified request format to additional native code commands for the second data source, and reading or writing the additional data in relation to the second data source utilizing the mapped additional native code commands. 
However, Cheng teaches:

mapping the one or more requests from the unified request format to native code commands for the first data source

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

and reading or writing the data in relation to the first data source utilizing the mapped native code commands

“In one embodiment, during operation of the resulting client-server application, the native code client application sends or receives data to/from the server … The server application runs a different code such as Java. The client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 35].

mapping the one or more additional requests from the unified request format to additional native code commands for the second data source

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

and reading or writing the additional data in relation to the second data source utilizing the mapped additional native code commands.

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

Cheng is considered analogous to the claimed invention because it is in the same field of interprogram communication. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raj in view of Bendelac to incorporate the teachings of Cheng and include mapping the one or more requests from the unified request format to native code commands for the first data source; and reading or writing the data in relation to the first data source utilizing the mapped native code commands … mapping the one or more additional requests from the unified request format to additional native code commands for the second data source, and reading or writing the additional data in relation to the second data source utilizing the mapped additional native code commands.

Doing so would allow different data sources to operate using their native commands. “The client application is code native to the mobile device (such as assembly code for ARM, MIP, or 80x86 processor) and is executed by the processor of the mobile device. The formats of the application code objects and related messages are represented in the native language to that device (Java, Brew, or MSC#, for example). The server application runs a different code such as Java” [Cheng ¶ 35].

With regard to claim 11, Raj in view of Bendelac in view of Cheng teaches the non-transitory computer-readable medium of claim 10, as referenced above.
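The Cheng passages quoted above describe a concrete pattern: the client translates native data into a language-neutral format, encapsulates it in a message, and the receiver translates it back into its own native representation. A minimal sketch of that pattern follows; the function names, the message envelope, and the use of JSON as the neutral format are illustrative assumptions, not Cheng's disclosure.

```python
import json

def client_encode(native_record: dict) -> bytes:
    """Translate native data to a neutral format and encapsulate it in a message."""
    neutral = json.dumps(native_record)          # language-neutral payload
    envelope = {"type": "data", "payload": neutral}
    return json.dumps(envelope).encode("utf-8")  # message sent to the server

def server_decode(message: bytes) -> dict:
    """Unwrap the message and translate the payload into the server's native form."""
    envelope = json.loads(message.decode("utf-8"))
    return json.loads(envelope["payload"])       # server-native object

# Round trip: the record survives translation through the neutral format.
record = {"id": 10, "name": "example"}
assert server_decode(client_encode(record)) == record
```

The point of the neutral intermediate is that neither side needs to understand the other's native representation, only the shared envelope.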
Raj further teaches:

wherein the first data source comprises an input data source

“In one or more arrangements, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 (input data source), the destination computing system 123, the client computing system 120, the user computing devices 110, and/or the other devices/systems in the computing environment 100 may be any type of computing device capable of receiving input via a user interface, and communicating the received input to one or more other computing devices in the computing environment 100” [Raj ¶ 30].

and further comprising instructions that, when executed by the at least one processor, cause the computing device to

“…memory storing computer-readable instructions that, when executed by the at least one processor, cause the transpilation platform to:” [Raj Claim 1].

read the data from the input data source

“In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27].

Raj in view of Bendelac fails to explicitly teach utilizing the native code commands determined from the one or more requests. However, Cheng teaches utilizing the native code commands determined from the one or more requests.

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

With regard to claim 12, Raj in view of Bendelac in view of Cheng teaches the non-transitory computer-readable medium of claim 10, as referenced above.

Raj further teaches:

further comprising instructions that, when executed by the at least one processor, cause the computing device to

“…memory storing computer-readable instructions that, when executed by the at least one processor, cause the transpilation platform to:” [Raj Claim 1].

write the data from the first data source to a target data source identified from the data pipeline job configuration.

“In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27]. “In some cases, the system 200 may be used to perform a method 300 of code migration from a first computing platform (e.g., a source computing platform) to a second computing platform (e.g., a destination computing platform, a target computing platform, or the like)” [Raj ¶ 39].
With regard to claim 13, Raj in view of Bendelac in view of Cheng teaches the non-transitory computer-readable medium of claim 10, as referenced above.

Raj further teaches:

wherein the first data source comprises an input data source

“In one or more arrangements, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 (input data source), the destination computing system 123, the client computing system 120, the user computing devices 110, and/or the other devices/systems in the computing environment 100 may be any type of computing device capable of receiving input via a user interface, and communicating the received input to one or more other computing devices in the computing environment 100” [Raj ¶ 30].

Raj fails to explicitly teach and further comprising instructions that, when executed by the at least one processor, cause the computing device to modify the data from the input data source utilizing the native code commands determined from the one or more requests. However, Bendelac teaches and further comprising instructions that, when executed by the at least one processor, cause the computing device to modify the data from the input data source utilizing the native code commands determined from the one or more requests.

“The unified API layer 606 can perform various translations when translating the query 604 to the translated query 612. The system tenant prefix in the qualified identifier 610 is generally not needed (nor understood) by the system tenant 611, so the unified API layer 606 can remove (e.g., strip off) the system tenant prefix and only include the local identifier portion of the qualified identifier 610 in the translated query 612 (as illustrated by a local identifier value '10' 616 included in the translated query 612)” [Bendelac ¶ 67].

Raj in view of Bendelac fails to explicitly teach utilizing the native code commands.
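Bendelac ¶ 67, quoted above, walks through a specific translation step: the unified API layer strips the system tenant prefix from the qualified identifier and forwards only the local identifier (the value '10' in Bendelac's example) to the target tenant, which does not understand the prefix. A hedged sketch of that step, where the colon-delimited identifier syntax and all function names are assumptions for illustration only:

```python
def split_qualified_id(qualified_id: str) -> tuple[str, str]:
    """Separate a qualified identifier into its tenant prefix and local identifier."""
    tenant_prefix, local_id = qualified_id.split(":", 1)
    return tenant_prefix, local_id

def translate_query(qualified_id: str) -> dict:
    """Strip the tenant prefix; the translated query carries only the local id."""
    tenant, local_id = split_qualified_id(qualified_id)
    # The prefix is used solely to route the query; the target tenant
    # receives only the local identifier portion (cf. Bendelac ¶ 67).
    return {"route_to": tenant, "query": {"id": local_id}}

assert translate_query("tenantA:10") == {"route_to": "tenantA", "query": {"id": "10"}}
```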
However, Cheng teaches utilizing the native code commands

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

With regard to claim 14, Raj in view of Bendelac in view of Cheng teaches the non-transitory computer-readable medium of claim 10, as referenced above.

Raj further teaches:

wherein the first data source comprises a target data source

“In some cases, the system 200 may be used to perform a method 300 of code migration from a first computing platform (e.g., a source computing platform) to a second computing platform (e.g., a destination computing platform, a target computing platform, or the like). For example, a data repository may be migrated from a source computing platform to a destination computing platform based on a change in database vendors for the enterprise organization, an update to an existing data repository product, an update to a new product version or new product that supports a different programming language, and/or the like” [Raj ¶ 39]. “At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term” [Raj ¶ 49].
“The system of claim 1, wherein the instructions cause the transpilation platform to automatically train the ML transpilation model based on documentation describing operations of the source programming language and the target programming language” [Raj Claim 5].

and further comprising instructions that, when executed by the at least one processor, cause the computing device to write the data, identified from the data pipeline job configuration, to the target data source using the native code commands determined from the one or more requests.

“In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27]. “In some cases, the system 200 may be used to perform a method 300 of code migration from a first computing platform (e.g., a source computing platform) to a second computing platform (e.g., a destination computing platform, a target computing platform, or the like)” [Raj ¶ 39].

Raj in view of Bendelac fails to explicitly teach utilizing the native code commands. However, Cheng teaches utilizing the native code commands

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

With regard to claim 15, Raj in view of Bendelac in view of Cheng teaches the non-transitory computer-readable medium of claim 10, as referenced above.

Raj further teaches by converting the one or more requests to a programming language recognized by a computer network of the first data source.

“At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term” [Raj ¶ 49].

Raj fails to explicitly teach further comprising instructions that, when executed by the at least one processor, cause the computing device to map the one or more requests to the native code commands for the first data source through the first connector. However, Bendelac further teaches further comprising instructions that, when executed by the at least one processor, cause the computing device to map the one or more requests to the native code commands for the first data source through the first connector

“Before providing the request to the target system, the request can be translated from the unified model into a target data model of the target system tenant” [Bendelac ¶ 4]. “Each data system or software product may have been engineered by a different team of engineers, and may have different APIs and/or different data models, even for similar entities” [Bendelac ¶ 30].

Raj in view of Bendelac fails to explicitly teach map the one or more requests to the native code commands for the first data source.
However, Cheng teaches map the one or more requests to the native code commands for the first data source

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

With regard to claim 16, Raj teaches:

A system comprising: at least one processor; and at least one non-transitory computer-readable storage medium storing instructions that, when executed by the at least one processor, cause the system to:

“…memory storing computer-readable instructions that, when executed by the at least one processor, cause the transpilation platform to:” [Raj Claim 1].

identify, from a data pipeline job configuration,

“In some cases, the application computing systems 108 may host one or more services configured facilitate operations requested through one or more API calls, such as data retrieval and/or initiating processing of specified functionality (data pipeline jobs)” [Raj ¶ 25].

a set of instructions for a multi-service data pipeline framework,

“Aspects of the disclosure relate to computer systems that provide effective, efficient, scalable, and convenient ways of securely and uniformly managing how internal computer systems exchange information with external computer systems to provide and/or support different products and services offered by an organization (e.g., a financial institution, and the like)” [Raj ¶ 6].

to represent one or more requests for a first data source

“In some cases, the application computing systems 108 may host one or more services configured facilitate operations requested through one or more API calls, such as data retrieval and/or initiating processing of specified functionality” [Raj ¶ 25].

… a first data source identifier

“At 310, the parsing engine 210 may parse or otherwise analyze one or more source files 205 containing source code to be translated. The parsing engine 210 may identify a language or a dialect of the language and may generate an input file for the transformation engine 230” [Raj ¶ 43]. “At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term” [Raj ¶ 49].

… a second data source

“In some cases, the application computing systems 108 may host one or more services configured facilitate operations requested through one or more API calls, such as data retrieval and/or initiating processing of specified functionality. In some cases, the source computing system 122 and/or the destination computing system 123 may be configured to communicate with one or more of the application computing systems 108 such as via direct communications and/or API function calls and the services” [Raj ¶ 25]. “Aspects of the disclosure relate to computer systems that provide effective, efficient, scalable, and convenient ways of securely and uniformly managing how internal computer systems exchange information with external computer systems to provide and/or support different products and services offered by an organization (e.g., a financial institution, and the like)” [Raj ¶ 6; Examiner notes any of the computing systems and services can be considered a second data source].

… a second data source identifier;

“At 310, the parsing engine 210 may parse or otherwise analyze one or more source files 205 containing source code to be translated. The parsing engine 210 may identify a language or a dialect of the language and may generate an input file for the transformation engine 230” [Raj ¶ 43]. “At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term” [Raj ¶ 49]. “The transpilation system may provide an automated process to move programmed functionalities between technology platforms having disparate programming languages or different dialects of programming languages” [Raj ¶ 44; Examiner notes Raj describes its management process such that it applies to each data source].

read or write data in relation to the first data source based on the one or more requests

“In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services” [Raj ¶ 29]. “In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27].

by: utilizing the first data source identifier for the first data source to select a first connector for the first data source;

“At 330, the transformation engine 230 may process the source AST file 214 based on information received from the NLP 222 of the language learning engine 220. For example, the language learning engine 220 may provide, based on an indication of the source language or dialect and an indication of the destination language or dialect, identification of words and/or phrases indicative of a particular language or dialect used for programming a particular application and/or for queries within a particular data repository environment (first data source)” [Raj ¶ 43]. “In an illustrative example, a transpilation mapping (connector) from programming language A to programming language B and a transpilation mapping (connector) from programming language A to programming language C may be learned by the transpilation system” [Raj ¶ 44].

and reading or writing the data in relation to the first data source …

“In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services” [Raj ¶ 29].

and read or write additional data in relation to the second data source based on the one or more additional requests

“In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services” [Raj ¶ 29]. “In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27].

by: utilizing the second data source identifier for the second data source to select a second connector for the second data source;

“At 330, the transformation engine 230 may process the source AST file 214 based on information received from the NLP 222 of the language learning engine 220. For example, the language learning engine 220 may provide, based on an indication of the source language or dialect and an indication of the destination language or dialect, identification of words and/or phrases indicative of a particular language or dialect used for programming a particular application and/or for queries within a particular data repository environment (second data source)” [Raj ¶ 43]. “In an illustrative example, a transpilation mapping (connector) from programming language A to programming language B and a transpilation mapping (connector) from programming language A to programming language C may be learned by the transpilation system” [Raj ¶ 44].

and reading or writing the additional data in relation to the second data source

“In some cases, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 and/or the destination computing system 123 may write data or read data to the database(s) 116 via the services” [Raj ¶ 29].
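The claim 16 limitations mapped above center on one mechanism: the data source identifier tagged on each request selects a connector, and that connector maps the unified-format request into the native commands of its data source. A minimal sketch of that dispatch pattern follows; the registry, the connector classes, and the request fields are hypothetical stand-ins for the claimed first and second connectors, not anything disclosed by the cited references.

```python
class PostgresConnector:
    """Hypothetical connector that maps a unified request to a SQL command."""
    def to_native(self, request: dict) -> str:
        return f"SELECT * FROM {request['entity']} WHERE id = {request['id']}"

class RestConnector:
    """Hypothetical connector that maps a unified request to an HTTP command."""
    def to_native(self, request: dict) -> str:
        return f"GET /{request['entity']}/{request['id']}"

# Registry keyed by data source identifier; each identifier selects a connector.
CONNECTORS = {"pg": PostgresConnector(), "api": RestConnector()}

def dispatch(unified_request: dict) -> str:
    """Select the connector via the tagged identifier, then map to native commands."""
    connector = CONNECTORS[unified_request["source_id"]]
    return connector.to_native(unified_request)

assert dispatch({"source_id": "pg", "entity": "users", "id": 7}) == "SELECT * FROM users WHERE id = 7"
assert dispatch({"source_id": "api", "entity": "users", "id": 7}) == "GET /users/7"
```

The same unified request shape fans out to different native command languages purely based on which identifier it carries, which is the data-source-agnostic behavior the claims recite.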
Raj fails to teach wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework … tagged with a first data source identifier and one or more additional requests for a second data source tagged with a second data source identifier; mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector; mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector.

However, Bendelac teaches:

wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework

“An example method includes: receiving, from a client application, a request for data for at least one entity, wherein the request includes a first qualified identifier that includes a first system tenant qualifier and a first local identifier, wherein the first system tenant qualifier identifies a first system tenant in a multi-system tenant landscape, the first local identifier identifies an entity instance of a first entity in the first system tenant, and the request is based on a unified data model that represents commonality of respective data models used by multiple system tenants in the multi-system tenant landscape …” [Bendelac ¶ 3].

tagged with a first data source identifier

“The request for data can include one or more qualified identifiers that each include a local identifier and a system tenant prefix (or other type of qualifier) corresponding to a particular data system tenant 505” [Bendelac ¶ 46].

and one or more additional requests for a second data source

“The request can be for data for more than entity and multiple sub-requests for entity data can be identified in the request. Determining a routing policy for each sub-request can include determining a target system tenant for each sub-request. The sub-requests can be provided to the respective target system tenants” [Bendelac ¶ 4]. “For instance, the system 200 includes a first data system tenant 202, a second data system tenant 204, and a third data system tenant 206 in a multi-system landscape 208 of a particular organization. An organization may use multiple different systems (and/or system tenants) for managing the organization, with different systems or system tenants providing certain features that may be specialized for different aspects of the organization” [Bendelac ¶ 27].

tagged with a second data source identifier;

“The request for data can include one or more qualified identifiers that each include a local identifier and a system tenant prefix (or other type of qualifier) corresponding to a particular data system tenant 505” [Bendelac ¶ 46].

mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector;

“Before providing the request to the target system, the request can be translated from the unified model into a target data model of the target system tenant” [Bendelac ¶ 4]. “Each data system or software product may have been engineered by a different team of engineers, and may have different APIs and/or different data models, even for similar entities” [Bendelac ¶ 30].

mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector

“Before providing the request to the target system, the request can be translated from the unified model into a target data model of the target system tenant” [Bendelac ¶ 4]. “Each data system tenant 505 can include a data layer 510 that provides access (e.g., using an API 512) to a data repository 514 stored and/or managed by the data system tenant 505” [Bendelac ¶ 44].
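Bendelac ¶ 4, quoted above, describes splitting a single multi-entity request into sub-requests, with a routing policy determining a target system tenant for each. A hedged sketch of that fan-out step follows, assuming (as in the earlier qualified-identifier example) that a colon-delimited tenant prefix determines the target; the function name and data shapes are illustrative assumptions.

```python
def route_subrequests(request: list[str]) -> dict[str, list[str]]:
    """Group each qualified identifier in the request under its target tenant."""
    routed: dict[str, list[str]] = {}
    for qualified_id in request:
        # The system tenant prefix decides which tenant serves this sub-request;
        # only the local identifier is carried forward to that tenant.
        tenant, local_id = qualified_id.split(":", 1)
        routed.setdefault(tenant, []).append(local_id)
    return routed

# One request for three entities fans out into sub-requests for two tenants.
assert route_subrequests(["crm:10", "erp:7", "crm:42"]) == {
    "crm": ["10", "42"],
    "erp": ["7"],
}
```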
Bendelac is considered analogous to the claimed invention because it is in the same field of interprogram communication. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raj to incorporate the teachings of Bendelac and include wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework … tagged with a first data source identifier and one or more additional requests for a second data source tagged with a second data source identifier; mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector; mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector.

Doing so would allow for efficient mapping of code commands between different data sources through the use of a unified API. “Rather than having each developer and each client application, such as a client application 303, deal with various complexities of accessing data from multiple systems in a multi-system landscape 304, the unified API layer 302 can provide an abstraction layer to client applications to shield them from complexities of understanding differences between multiple systems or tenants …” [Bendelac ¶ 37].

Raj in view of Bendelac fails to explicitly teach mapping the one or more requests from the unified request format to native code commands for the first data source; and reading or writing the data in relation to the first data source utilizing the mapped native code commands … mapping the one or more additional requests from the unified request format to additional native code commands for the second data source, and reading or writing the additional data in relation to the second data source utilizing the mapped additional native code commands.
However, Cheng teaches:

mapping the one or more requests from the unified request format to native code commands for the first data source

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

and reading or writing the data in relation to the first data source utilizing the mapped native code commands

“In one embodiment, during operation of the resulting client-server application, the native code client application sends or receives data to/from the server … The server application runs a different code such as Java. The client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 35].

mapping the one or more additional requests from the unified request format to additional native code commands for the second data source

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

and reading or writing the additional data in relation to the second data source utilizing the mapped additional native code commands

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

Cheng is considered analogous to the claimed invention because it is in the same field of interprogram communication. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raj in view of Bendelac to incorporate the teachings of Cheng and include mapping the one or more requests from the unified request format to native code commands for the first data source; and reading or writing the data in relation to the first data source utilizing the mapped native code commands … mapping the one or more additional requests from the unified request format to additional native code commands for the second data source, and reading or writing the additional data in relation to the second data source utilizing the mapped additional native code commands. Doing so would allow different data sources to operate using their native commands.

“The client application is code native to the mobile device (such as assembly code for ARM, MIP, or 80x86 processor) and is executed by the processor of the mobile device. The formats of the application code objects and related messages are represented in the native language to that device (Java, Brew, or MSC#, for example). The server application runs a different code such as Java” [Cheng ¶ 35].

With regard to claim 17, Raj in view of Bendelac in view of Cheng teaches the system of claim 16, as referenced above.
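The mechanism Cheng ¶ 16 describes — translate native data to a language-neutral format, encapsulate it in a message, and let the receiver translate back to its own native representation — is essentially ordinary wire serialization. A minimal sketch follows; using JSON as the neutral format and a "MSG:" envelope is purely an illustrative assumption, not Cheng's actual encoding.

```python
import json

# Editorial sketch of Cheng's translate / encapsulate / retranslate flow.
# JSON and the "MSG:" envelope are assumptions for illustration only.

def client_send(native_record: dict) -> bytes:
    neutral = json.dumps(native_record)        # translate to neutral format
    return ("MSG:" + neutral).encode("utf-8")  # encapsulate into a message

def server_receive(message: bytes) -> dict:
    text = message.decode("utf-8")
    assert text.startswith("MSG:")
    neutral = text[len("MSG:"):]               # strip the envelope
    return json.loads(neutral)                 # translate to receiver's natives

record = {"account": 42, "balance": 100.0}
assert server_receive(client_send(record)) == record  # lossless round trip
```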
Raj further teaches: wherein the first data source comprises an input data source

“In one or more arrangements, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122 (input data source), the destination computing system 123, the client computing system 120, the user computing devices 110, and/or the other devices/systems in the computing environment 100 may be any type of computing device capable of receiving input via a user interface, and communicating the received input to one or more other computing devices in the computing environment 100” [Raj ¶ 30].

and further comprising instructions that, when executed by the at least one processor, cause the system to:

“…memory storing computer-readable instructions that, when executed by the at least one processor, cause the transpilation platform to:” [Raj Claim 1].

read the data from the input data source

“In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27].

Raj in view of Bendelac fails to explicitly teach utilizing the native code commands determined from the one or more requests. However, Cheng teaches utilizing the native code commands determined from the one or more requests.

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

With regard to claim 18, Raj in view of Bendelac in view of Cheng teaches the system of claim 16, as referenced above.

Raj further teaches: wherein the first data source comprises a target data source

“In some cases, the system 200 may be used to perform a method 300 of code migration from a first computing platform (e.g., a source computing platform) to a second computing platform (e.g., a destination computing platform, a target computing platform, or the like). For example, a data repository may be migrated from a source computing platform to a destination computing platform based on a change in database vendors for the enterprise organization, an update to an existing data repository product, an update to a new product version or new product that supports a different programming language, and/or the like” [Raj ¶ 39].

“At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term” [Raj ¶ 49].

“The system of claim 1, wherein the instructions cause the transpilation platform to automatically train the ML transpilation model based on documentation describing operations of the source programming language and the target programming language” [Raj Claim 5].

and further comprising instructions that, when executed by the at least one processor, cause the system to write the data, identified from the data pipeline job configuration, to the target data source using the native code commands determined from the one or more requests.

“In some cases, the source computing system 122, the destination computing system 123 and/or the client computing system 120 may integrate API calls to request data, initiate functionality, or otherwise communicate with the one or more application computing systems 108, such as via the services. For example, the services may be configured to facilitate data communications (e.g., data gathering functions, data writing functions, and the like) between the source computing system 122, the destination computing system 123 and/or the client computing system 120 and the one or more application computing systems 108” [Raj ¶ 27].

“In some cases, the system 200 may be used to perform a method 300 of code migration from a first computing platform (e.g., a source computing platform) to a second computing platform (e.g., a destination computing platform, a target computing platform, or the like)” [Raj ¶ 39].

Raj in view of Bendelac fails to explicitly teach utilizing the native code commands. However, Cheng teaches utilizing the native code commands

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

With regard to claim 19, Raj in view of Bendelac in view of Cheng teaches the system of claim 16, as referenced above.
Raj further teaches by converting the one or more requests to a programming language recognized by a computer network of the first data source.

“At 340, the language learning engine 220 may apply the source AST 214 to the ML model to identify a source language, identify terms and operators within the AST (e.g., key terms in the vocabulary and grammar) of the identified language and output mappings in the specified target language for each identified term” [Raj ¶ 49].

Raj fails to explicitly teach further comprising instructions that, when executed by the at least one processor, cause the system to map the one or more requests to the native code commands for the first data source through the first connector. However, Bendelac further teaches further comprising instructions that, when executed by the at least one processor, cause the system to map the one or more requests to the native code commands for the first data source through the first connector

“Before providing the request to the target system, the request can be translated from the unified model into a target data model of the target system tenant” [Bendelac ¶ 4].

“Each data system or software product may have been engineered by a different team of engineers, and may have different APIs and/or different data models, even for similar entities” [Bendelac ¶ 30].

Raj in view of Bendelac fails to explicitly teach map the one or more requests to the native code commands for the first data source. However, Cheng teaches map the one or more requests to the native code commands for the first data source

“During operation of the resulting client-server application, the native code client application sends or receives data to/from the server, wherein the client application is code native to the mobile device and the server application is a different code (e.g. Java), and wherein the client application translates the data to a language neutral format, then encapsulates the data into a message, then sends message to server which then translates the data to its native language” [Cheng ¶ 16].

With regard to claim 20, Raj in view of Bendelac in view of Cheng teaches the system of claim 16, as referenced above.

Raj further teaches wherein: the one or more requests comprise one or more graphical user interface selectable options.

“In one or more arrangements, the hybrid feedback driven transpilation system 104, the application computing systems 108, the source computing system 122, the destination computing system 123, the client computing system 120, the user computing devices 110, and/or the other devices/systems in the computing environment 100 may be any type of computing device capable of receiving input via a user interface, and communicating the received input to one or more other computing devices in the computing environment 100” [Raj ¶ 30].

“As used throughout this disclosure, computer-executable "software and data" can include one or more: algorithms, applications, application program interfaces (APIs), attachments, big data, daemons, emails, encryptions, databases, datasets, drivers, data structures, file systems or distributed file systems, firmware, graphical user interfaces, images, instructions, machine learning (e.g., supervised, semi-supervised, reinforcement, and unsupervised), middleware, modules, objects, operating systems, processes, protocols, programs, scripts, tools, and utilities” [Raj ¶ 17].

Raj fails to explicitly teach one or more graphical user interface selectable options. However, Bendelac teaches one or more graphical user interface selectable options.

“The GUI 554 may comprise a plurality of customizable frames or views having interactive fields, pull-down lists, and buttons operated by the user. The GUI 554 contemplates any suitable graphical user interface, such as a combination of a generic web browser, intelligent engine, and command line interface (CLI) that processes information and efficiently presents the results to the user visually” [Bendelac ¶ 60].

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Raj (US 2024/0160422 A1) in view of Bendelac (US 2022/0269552 A1) in view of Cheng (US 2008/0016504 A1) in view of Werner (US 2024/0061944 A1).

With regard to claim 6, Raj in view of Bendelac in view of Cheng teaches the computer-implemented method of claim 1, as referenced above. Raj in view of Bendelac in view of Cheng fails to explicitly teach wherein the data pipeline job configuration comprises one or more tags for the first connector of the first data source, scheduling settings, monitoring requests, alerting requests, watermarking requests, access permission settings, or output file identifiers. However, Werner teaches wherein the data pipeline job configuration comprises one or more tags for the first connector of the first data source, scheduling settings, monitoring requests, alerting requests, watermarking requests, access permission settings, or output file identifiers.

“In one embodiment, the content analysis and blocking module 214 may compare the protected content tags 223 (e.g., tags extracted from the protected content 224) and the current task tags 238b (e.g., tags extracted from the current projects of the first user) to determine if there is a conflict (e.g., overlapping similarity) that may warrant blocking the protected content 224 from displaying on the developer device 204” [Werner ¶ 45].

Werner is considered to be analogous to the claimed invention because it is in the same field of digital task scheduling strategies.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raj in view of Bendelac in view of Cheng to incorporate the teachings of Werner and include that the data pipeline job configuration comprises one or more tags for the first connector of the first data source, scheduling settings, monitoring requests, alerting requests, watermarking requests, access permission settings, or output file identifiers. Doing so would allow for monitoring and added security for protected content within the application.

“If the content contains protected source code, the computing device may receive a warning prior to the content being displayed and if the content is displayed on the computing device, the content viewing activity may be tracked and stored in a user account associated with the computing device” [Werner ¶ 13].

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Raj (US 2024/0160422 A1) in view of Bendelac (US 2022/0269552 A1) in view of Cheng (US 2008/0016504 A1) in view of Taine (US 2018/0144775 A1).

With regard to claim 7, Raj in view of Bendelac in view of Cheng teaches the computer-implemented method of claim 1, as referenced above. Raj fails to teach wherein the first data source identifier for a data source indicates a selection or name of the first data source. However, Bendelac teaches wherein the first data source identifier for a data source indicates a selection or name of the first data source

“The query 604 includes a qualified identifier 610. In general, a qualified identifier can be a global identifier that is globally unique within a given entity type that is used with a unified model 610 used by the client application 602 when interfacing with the unified API layer 606. A qualified identifier includes a local identifier (e.g., "10" in the qualified identifier 610) that indicates a local identifier provided by a particular system tenant. A system tenant prefix (e.g., "Sys1" in the qualified identifier 610) indicates which system tenant provided the local identifier” [Bendelac ¶ 64].

Raj in view of Bendelac in view of Cheng fails to explicitly teach and further comprising identifying a data source request type, wherein the data source request type comprises an input request or an output request. However, Taine teaches wherein the identifier for a data source indicates a selection or name of the data source and further comprising identifying a data source request type, wherein the data source request type comprises an input request or an output request.

“At block 452, the system may receive an input. The input may be, for example, the type of input received at block 402 in FIG. 4A. The input may include a request to perform an action or generate an output (output request), which may be identified (for example) by a request-ID or request type. Accordingly, the system may determine (e.g., based on the request-ID or request type) a type of request received” [Taine ¶ 98].

Taine is considered to be analogous to the claimed invention because it is in the same field of digital task scheduling strategies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Raj in view of Bendelac in view of Cheng to incorporate the teachings of Taine and include further comprising identifying a data source request type, wherein the data source request type comprises an input request or an output request. Doing so would allow for improved request evaluation and support for requests of different types.

“Returning to block 404, if the decision at this block is "NO" (i.e., the input was not a media effect application), then processing may proceed to block 420 and it may be determined if the input was a request for information or action relating to the media effect index” [Taine ¶ 95].
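Bendelac's qualified identifier, as quoted above, pairs a system tenant prefix (e.g., "Sys1") with a tenant-local identifier (e.g., "10") to obtain a globally unique key. A minimal sketch, under the assumption that the two parts are joined with a ":" separator (the class and function names here are illustrative, not Bendelac's):

```python
from typing import NamedTuple

# Editorial sketch of Bendelac's qualified identifier (¶ 64): a system tenant
# prefix plus a tenant-local identifier. The ":" separator is an assumption.

class QualifiedId(NamedTuple):
    tenant: str    # which system tenant provided the local identifier
    local_id: str  # identifier unique only within that tenant

def parse_qualified(qid: str) -> QualifiedId:
    tenant, local_id = qid.split(":", 1)
    return QualifiedId(tenant, local_id)

q = parse_qualified("Sys1:10")
print(q.tenant, q.local_id)  # Sys1 10
```

The composite key is what lets a unified API layer route a request to the correct system tenant while each tenant keeps issuing only locally unique identifiers.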
Response to Arguments

Applicant's arguments filed 10/23/2025 have been fully considered but they are not persuasive. Applicant argues in substance:

I. The currently amended independent claims recite an improvement to computer capabilities or to a technological field (e.g., network communications with data sources) such that the currently amended independent claims are patent-eligible in light of the practical application analysis and the improvements consideration outlined in the Reminders Memo. In particular, the currently amended independent claims more particularly recite a practical application that dynamically executes data source agnostic data pipeline job configurations that can easily, flexibly, and efficiently interact with a variety of data sources having different native code commands while utilizing a unified request format. Therefore, the currently amended independent claims, as a whole, integrate an alleged judicial exception into a practical application that recites an improvement to computer capabilities or to a technological field as discussed in the MPEP Guidance and the Reminders Memo. For at least these reasons, currently amended independent claims 1, 10, and 16, and their dependent claims are directed to patent-eligible subject matter, and Applicant respectfully requests withdrawal of the rejections under 35 U.S.C. § 101.

a) Examiner respectfully disagrees. The independent claims include abstract idea recitations of ‘selecting’ and ‘mapping’. The additional elements of the claims fail to integrate these abstract ideas into a practical application. As detailed in the rejection above, these additional elements of the independent claims amount to no more than using generic computing components as a tool to apply the abstract idea and insignificant extra solution activity. When considering individual limitations and the claims as a whole, the recited judicial exceptions of the independent claims are not integrated into a practical application.
Further, the claim language does not reflect the referenced improvements to “computer capabilities” or “network communications with data sources”. It is unclear what specific improvement is being argued in Applicant's remarks. Although the claims utilize computer resources and involve data transmission, the claims do not demonstrate an improvement to either field. Thus, claims 1, 10, and 16, and their dependent claims are not directed to patent-eligible subject matter under 35 U.S.C. § 101.

II. Despite Raj describing transpiling of a source language into a target language, Raj fails to teach or suggest at least "identifying, from a data pipeline job configuration, a set of instructions for a multi-service data pipeline framework, wherein the set of instructions utilizes a unified request format corresponding to the multi-service data pipeline framework to represent one or more requests for a first data source tagged with a first data source identifier and one or more additional requests for a second data source tagged with a second data source identifier," "reading or writing data in relation to the first data source based on the one or more requests by... mapping the one or more requests from the unified request format to native code commands for the first data source through the first connector," and "reading or writing additional data in relation to the second data source based on the one or more additional requests by...mapping the one or more additional requests from the unified request format to additional native code commands for the second data source through the second connector," as recited by currently amended independent claims 1, 10, and 16.

a) Examiner respectfully disagrees.
Raj teaches: identifying, from a data pipeline job configuration, [Raj ¶ 25] a set of instructions for a multi-service data pipeline framework, [Raj ¶ 6] to represent one or more requests for a first data source [Raj ¶ 25] reading or writing data in relation to the first data source based on the one or more requests [Raj ¶ 29] reading or writing additional data in relation to the second data source based on the one or more additional requests [Raj ¶ 27].

Applicant’s further arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Examiner respectfully requests, in response to this Office action, support be shown for language added to any original claims on amendment and any new claims.
That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist Examiner in prosecuting the application. When responding to this Office Action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c). Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARI F RIGGINS whose telephone number is (571)272-2772. The examiner can normally be reached Monday-Friday 7:00AM-4:30PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bradley Teets can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.F.R./Examiner, Art Unit 2197 /BRADLEY A TEETS/Supervisory Patent Examiner, Art Unit 2197

Prosecution Timeline

Nov 22, 2022
Application Filed
Jun 23, 2025
Non-Final Rejection — §101, §103
Sep 09, 2025
Interview Requested
Oct 06, 2025
Applicant Interview (Telephonic)
Oct 06, 2025
Examiner Interview Summary
Oct 23, 2025
Response Filed
Jan 22, 2026
Final Rejection — §101, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
