Prosecution Insights
Last updated: April 19, 2026
Application No. 18/470,724

DATA PIPELINE ORCHESTRATION FOR DATA-DRIVEN ENGINEERING

Final Rejection (§103)
Filed: Sep 20, 2023
Examiner: CHEEMA, UMAR
Art Unit: 2458
Tech Center: 2400 — Computer Networks
Assignee: SAP SE
OA Round: 4 (Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 5y 4m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 66%, above average (154 granted / 235 resolved; +7.5% vs TC avg)
Interview Lift: +8.4% for resolved cases with interview vs without (moderate lift)
Avg Prosecution: 5y 4m typical timeline (44 currently pending)
Total Applications: 279 across all art units (career history)

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§102: 14.4% (-25.6% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 235 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This office action is in response to the amendment/reconsideration filed 6/30/2025; the amendment/reconsideration has been considered. Claims 1-3, 6-10, 13-17 and 19-20 are pending for examination.

Response to Arguments

Applicant's arguments are moot in light of the new ground of rejections set forth below.

Claim Rejections - 35 USC § 103

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4.
Considering objective evidence present in the application indicating obviousness or nonobviousness. 6. Claims 1-3, 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Rahill-Marier et al (US 2022/0236965, hereafter Rahill), in view of Alfaras et al (US 2024/0281419), and further in view of Lowson (US 2008/0104039). As to claim 1, Rahill discloses a system associated with data pipeline orchestration, comprising: a data pipeline data store containing, for each of a plurality of data pipelines, a series of data pipeline steps associated with a data pipeline use case (figure 1, “Products, Applications, Use Cases Data Store”; [0168], “FIG. 7H shows another tab 760 of the user interface 742 in which that various applications 762 associated with the selected use case are listed, and by which the user may select to deploy one or more of the applications (which applications would then be populated with data as defined by the connected data sources and which applications would operate according to the configuration of functions and actions provided by the user). The user can deploy by saving the generated use case workflow/application”; [0138], “The system can include a catalog of use cases and/or applications that allow the user to traverse the system platform (of numerous workflows, data sources, data analyses, software tools and functions, dashboards, interactive software and user interfaces, and/or the like), to rapidly achieve specific outcomes and deliver user-facing workflows”. 
Here, the workflow/applications indicates a series of data pipeline steps), wherein the series of data pipeline steps include: automatically downloading raw data from an internal enterprise data source; automatically performing extract, transform, load tasks; and automatically storing information in a cloud-based data warehouse (it is to be noted that since the claim does not limit the relationship between the recited raw data, the data that the ETL tasks perform on, the data involved in the performed data cleanup, and the information stored in a cloud-based warehouse, Examiner assumes any relationship between the downloaded raw data, the tasks, and the information. See figure 1, “Products, Applications, Use Cases Data Store”; [0168], “FIG. 7H shows another tab 760 of the user interface 742 in which that various applications 762 associated with the selected use case are listed, and by which the user may select to deploy one or more of the applications (which applications would then be populated with data as defined by the connected data sources and which applications would operate according to the configuration of functions and actions provided by the user. The user can deploy by saving the generated use case workflow/application”; Figure 10A, step 1014, “Generate interactive graphic user interfaces for populated applications”; [0177], “at block 1014, the system can generate various interactive graphical user interfaces for the populated use cases/applications (e.g., as shown in FIG. 9B)”; [0174], “deployed application 912 for supply chain management is shown in FIG. 9B”; see Fig.
9B, wherein automatically downloading raw data from an internal enterprise data source is implied in order to gather the original supply chain data to be displayed (see [0068], “The physical system data store 106 may store and provide to the network 110, and/or to the system server 104, various data items/objects that may be related to logical computations, measuring devices, physical subsystems, technical subsystems, logical subsystems, physical systems, technical systems, and logical systems. For example, such data items may include a statistic related to a product or item that is the subject of a supply chain”), and wherein automatically performing extract, transform, load tasks is also implied by the display of Fig. 9B. See [0068], “the physical system data store 106 may additionally or alternatively include data or information derivable from or based on physical objects or events”; [0050], “cloud storage”), automatically performing data cleanup ([0114], “For example, old dataset items may be deleted from the system to conserve data storage space”); and a data pipeline orchestration server, coupled to the data pipeline data store, including: a computer processor, and a computer memory storage device, coupled to the computer processor, that contains instructions that when executed by the computer processor enable the data pipeline orchestration server to: (i) receive, from a data engineering operator, a selection of a data pipeline use case in the data pipeline data store (figure 10C, “Receive input selecting a group of use cases, a use case, and/or an application” wherein the user that selects a use case to continue the configuration of the use case and/or applications as stated in step 1036 can be considered a data engineering operator; see also figure 6A, “Engineering Change Note”; [0127], “development branch”); (ii) receive first configuration information for the selected data pipeline use case (abstract, “receiving a first user input indicating an association
between a first data source and a first data object type; based on the compatibilities and the indicated association, automatically populating each of the one or more applications that is compatible with the first data object type with data from the first data source, wherein populating includes generating interactive graphical user interfaces”; see figure 11, “Optionally further analyze based on user inputs…Determine compatibilities between data object types and use cases… Determine suggestions of applications that are compatible with data sources and/or data object types”; [0162], “in response to a user selecting to configure a use case and/or application (e.g., by selection of a use case and/or application…, the user can connect or associate data sources with various data object types”; [0164], “the user has selected a particular data source 722 to connect or associate with the data object type of the use case”). (iii) receive second configuration information, different than the first configuration information, for the selected data pipeline use case ([0164], “the user has selected a particular data source 722 to connect or associate with the data object type of the use case… the user may…optionally review and/or modify connections or associations among particular properties of the data object types and data sources”), (iv) store representations of both the first configuration information and the second configuration information in connection with the selected data pipeline use case ([0164], “the user has selected a particular data source 722 to connect or associate with the data object type of the use case… the user may…optionally review and/or modify connections or associations among particular properties of the data object types and data sources… upon connection of a data source to a data object type of a use case, data objects from the data source may be automatically populated in applications of the use case such that a user may then interact with the
applications”, indicating that representations of both the first configuration information and the second/modified configuration information are stored), and (v) arrange for execution of the selected pipeline in accordance with one of the first configuration information and the second configuration information ([0164], “the user has selected a particular data source 722 to connect or associate with the data object type of the use case… the user may…optionally review and/or modify connections or associations among particular properties of the data object types and data sources… upon connection of a data source to a data object type of a use case, data objects from the data source may be automatically populated in applications of the use case such that a user may then interact with the applications”). wherein a data pipeline use case associated with one data engineering team of an enterprise is shared with another data engineering team of the enterprise via a platform and cloud-based service for software development and version control ([0171], “Deploying” a use case and/or application can cause the system to make that use case/application available to other users of the system (e.g., as pre-defined and/or based on input from the user initiating the deploying). In various implementations, when “deploying” a use case and/or application, the user may select to effectively “save” a copy of the now configured and hooked up use case and/or application. The use case and/or application can then optionally be further modified and/or shared with other users.)”, wherein a user can be considered a solo developer/engineering team, and wherein sharing with other users a version and then a further modified version is a type of version control using the platform and cloud-based service, see [0257], “cloud computing environment”. 
It is to be noted that the claim does not require a specific type of version control, nor does the claim require that said another data engineering team share back with said one data engineering team). Rahill does not expressly disclose that the data pipelines are Cross Industry Standard Process for Data Mining (“CRISP-DM”), or that the extract, transform, load tasks are Extract, Transform, Load (“ETL”) tasks, or that the first and second configuration information define configuration parameters for all of: downloading the raw data from the internal enterprise data source, performing ETL tasks, performing data cleanup, storing information in the cloud-based data warehouse. Alfaras discloses Extract, Transform, Load (“ETL”) tasks, and configuration information defines configuration parameters for all of: downloading raw data from an internal enterprise data source, performing ETL tasks, performing data cleanup, storing information in a cloud-based data warehouse ([0048], “Data pipelines typically involve several stages, including data extraction, transformation, loading (ETL), or ingestion (ELT), depending on the specific requirements of the use case and the technologies involved. In the extraction stage, data is retrieved from the source systems using various methods, such as batch processing, change data capture (CDC), or real-time streaming. Once extracted, the data may undergo transformations to clean, enrich, or standardize it, ensuring consistency and quality before it is loaded into the target system.”; [0049], “The loading or ingestion stage may involve delivering the transformed data to its destination, where it can be stored, processed, and analyzed. This stage often may involve considerations such as data partitioning, schema evolution, and data governance to ensure that the data is structured and organized appropriately for downstream consumption.
Depending on the requirements of the use case, data pipelines may support batch processing, streaming processing, or a combination of both to accommodate different latency and throughput requirements”; [0141], “WhereScape simplifies the creation of ETL (Extract, Transform, Load) workflows by automating the generation of SQL code for data loading tasks. It also provides capabilities for metadata management, version control, and scheduling, enabling organizations to streamline their data pipeline workflows. By integrating Python scripts with WhereScape, organizations can leverage the strengths of both technologies to build robust, scalable, and automated data pipelines. Python handles data transformation and integration tasks, while WhereScape orchestrates the loading of data into target destinations, such as data warehouses, data marts, or analytical databases.”; [0188], “Once the data pipeline design is complete, WhereScape RED generates the necessary code, scripts, and configurations to implement the pipeline automatically. This includes generating SQL code for data extraction, transformation, and loading (ETL) processes, as well as orchestrating workflow tasks, scheduling jobs, and managing dependencies. WhereScape RED abstracts away the complexities of coding and scripting, allowing users to focus on business logic and requirements rather than technical implementation details.” Here, the generated code includes configuration parameters for all the workflow tasks, wherein the workflow tasks include ETL tasks and other tasks such as downloading/obtaining data, cleanup, storing, as disclosed above in [0048]-[0049]). Before the filing date of the invention, it would have been obvious for an ordinary skilled in the art to combine Rahill and Alfaras. The suggestion/motivation of the combination would have been to automate the workflow tasks (Alfaras, [0188]). 
Lowson discloses a concept of data pipelines being Cross Industry Standard Process for Data Mining (“CRISP-DM”) ([0029], “deploy open standards, such as an XML-enabled relational database management system (RDBMS) to store the cleansed data, an online analytical processing (OLAP) database and OLAP engine to enable multidimensional data analysis, and data transformation software to extract, transform, and load data between systems”; [0108], “industry standard methodologies for data mining and data analysis (such as CRISP-DM, the Cross Industry Standard Process model for Data Mining”). Before the filing date of the invention, it would have been obvious for an ordinary skilled in the art to combine Rahill-Alfaras and Lowson. The suggestion/motivation of the combination would have been to utilize industry standard methodologies (Lowson, [0108]).

As to claim 2, Rahill-Alfaras-Lowson discloses the system of claim 1, wherein at least one of the series of data pipeline steps further comprises all of: data cleanup; data processing; deployment of a structure; and data uploading (Rahill, [0167], last 4 lines, “the end user to have the option to take in the deployed workflows/application… end user can cancel an order…”, wherein cancelling an order is a type of data cleanup and also data processing. See Fig. 9B, each display component/structure indicates deployment of a structure, and the displayed data are uploaded).
As to claim 3, Rahill-Alfaras-Lowson discloses the system of claim 1, wherein the first configuration information includes information associated with all of: (i) credentials, (ii) data sources, and (iii) configuration of further calculations (Rahill, abstract, “receiving a first user input indicating an association between a first data source and a first data object type; based on the compatibilities and the indicated association, automatically populating each of the one or more applications that is compatible with the first data object type with data from the first data source, wherein populating includes generating interactive graphical user interfaces”; see figure 11, “Optionally further analyze based on user inputs…Determine compatibilities between data object types and use cases… Determine suggestions of applications that are compatible with data sources and/or data object types”; [0162], “in response to a user selecting to configure a use case and/or application (e.g., by selection of a use case and/or application…, the user can connect or associate data sources with various data object types”; [0164], “the user has selected a particular data source 722 to connect or associate with the data object type of the use case”. Here, the data sources and/or data object types read on the claimed data sources, and determining compatibilities to determine suggestions of applications reads on the claimed configuration of further calculations. See [0136], “The system can include various permissioning functionalities. For example, the system can implement access control lists and/or other permissioning functionality that can enable highly granular permissioning of data assets (e.g., files, data items, datasets, portions of datasets, transformations, and/or the like).
The permissioning may include, for example, specific permissions for read/write/modify, and/or the like, which may be applicable to specific users, groups of users, roles, and/or the like”; and [0171], “Permissioning can also be applied to the deployed use case and/or application such that only users with the right permission can access the use case and/or application, and/or certain data within the use case and/or application”, wherein said permissioning based on a specific user indicates a credential for the specific user such as a user name/identifier).

As to claim 6, Rahill-Alfaras-Lowson discloses the system of claim 1, wherein the data pipeline use case is deployed to all of: (i) a development system (Rahill, [0127], “development branch”), (ii) a test system (Rahill, Fig. 6C, “Root Cause Analysis… rapid hypothesis testing and flexible analysis tooling…”), and (iii) a production system (Rahill, figure 7E, “Supply Chain… PLANT…”; Fig. 7H, “Supply chain control tower”, “Production control tower”).

As to claim 7, Rahill-Alfaras-Lowson discloses the system of claim 1, wherein execution of the selected pipeline is further performed in accordance with data pipeline scheduler information (Rahill, [0127], “Build branches provide isolation of re-computation of graph data across different users and across different execution schedules of a data pipeline.”).

7. Claims 8-10 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Rahill-Alfaras-Lowson, as applied to claim 1 above, and further in view of Li et al (US 2016/0246838).

As to claim 8, see similar rejection to claim 1. For “Python code”, see citation in rejection to claim 1, e.g., [0141], “WhereScape simplifies the creation of ETL (Extract, Transform, Load) workflows by automating the generation of SQL code for data loading tasks. It also provides capabilities for metadata management, version control, and scheduling, enabling organizations to streamline their data pipeline workflows.
By integrating Python scripts with WhereScape, organizations can leverage the strengths of both technologies to build robust, scalable, and automated data pipelines. Python handles data transformation and integration tasks, while WhereScape orchestrates the loading of data into target destinations, such as data warehouses, data marts, or analytical databases.” However, Rahill-Alfaras-Lowson does not expressly disclose “Jenkins” data. Li discloses “Jenkins” data ([0047]). Before the filing date of the invention, it would have been obvious for an ordinary skilled in the art to combine Rahill-Alfaras-Lowson and Li. The suggestion/motivation of the combination would have been to utilize CI/CD tools (Li, [0047]).

As to claim 9, see citation in rejection to claim 2. As to claim 10, see citation in rejection to claim 3. As to claim 13, see citation in rejection to claim 6. As to claim 14, see citation in rejection to claim 7.

8. Claims 15-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rahill-Alfaras-Lowson, as applied to claim 1 above, and further in view of Yuan (US 2018/0307692).

As to claim 15, see similar rejection to claim 1. However, Rahill-Alfaras-Lowson does not expressly disclose Open Data (OData). Yuan discloses a concept for configuration to use OData version 2 or OData version 4 ([0081]-[0082]). Before the filing date of the invention, it would have been obvious for an ordinary skilled in the art to combine Rahill-Alfaras-Lowson and Yuan. The suggestion/motivation of the combination would have been to utilize known open data protocols (Yuan, [0081]-[0082]).

As to claim 16, see citation in rejection to claim 2. As to claim 17, see citation in rejection to claim 3. As to claim 19, see citation in rejection to claim 6. As to claim 20, see citation in rejection to claim 7.

Conclusion

9. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUA FAN whose telephone number is (571)270-5311. The examiner can normally be reached on 9-6. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Umar Cheema can be reached at 571-270-3037. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. 
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HUA FAN/ Primary Examiner, Art Unit 2458

Prosecution Timeline

Sep 20, 2023: Application Filed
Apr 21, 2024: Non-Final Rejection (§103)
Jul 19, 2024: Response Filed
Oct 05, 2024: Final Rejection (§103)
Jan 10, 2025: Request for Continued Examination
Jan 22, 2025: Response after Non-Final Action
Mar 25, 2025: Non-Final Rejection (§103)
Jun 30, 2025: Response Filed
Oct 02, 2025: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598113: APPLYING MANAGEMENT CONSTRAINTS DURING NETWORK SLICE DESIGN (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585234: METHOD FOR ASSOCIATING ACTIONS FOR INTERNET OF THINGS, ELECTRONIC DEVICE AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12574801: OPEN RADIO ACCESS NETWORK CLOUD INTELLIGENT CONTROLLER (granted Mar 10, 2026; 2y 5m to grant)
Patent 12568521: SCHEDULING TRANSMISSION METHOD AND APPARATUS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12501491: RACH BASED ON FMCW CHANNEL SOUNDING (granted Dec 16, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 66%
With Interview: 74% (+8.4%)
Median Time to Grant: 5y 4m
PTA Risk: High
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
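The headline figures above follow directly from the examiner's career counts. A minimal Python sketch of that arithmetic (illustrative only; the simple additive interview adjustment and variable names are assumptions, not the tool's actual model):

```python
# Illustrative arithmetic: reproduce the headline projections from the
# examiner's career counts shown in the Examiner Intelligence section.
granted, resolved = 154, 235      # career counts: 154 granted / 235 resolved
interview_lift_pts = 8.4          # percentage points, from the card above

allow_rate_pct = granted / resolved * 100                     # career allow rate
grant_probability = round(allow_rate_pct)                     # -> 66
with_interview = round(allow_rate_pct + interview_lift_pts)   # -> 74

print(f"{grant_probability}% base, {with_interview}% with interview")
```

Note the additive treatment is only a sketch of how 66% and 74% relate; the tool may weight interviews differently.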
