DETAILED ACTION
Applicant’s Amendment filed on January 2, 2026, has been reviewed.
Claims 1, 8, 9 and 15 have been amended.
Claims 1-20 have been examined.
Continued Examination under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 2, 2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5, 8, 12, 15 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ezrielev et al. (US 2025/0004615 A1), hereinafter referred to as Ezrielev, in view of Rodgers et al. (US 2007/0177581 A1), hereinafter referred to as Rodgers.
With respect to claim 1, Ezrielev teaches A system comprising:
a plurality of application computing systems comprising a first application computing system providing a first application and a second application computing system comprising a second application (application 240 is connected to application 242 via API 230; these representations indicate that data from either application may be provided to the other application via API 230, para. 0072);
a pipeline management tool platform, comprising:
a processor (processor, para. 0026); and
memory storing computer-readable instructions that, when executed by the processor (the non-transitory media and a processor, and perform the computer-implemented method when the computer instructions are executed by the processor, para. 0026), cause the pipeline management tool platform to:
receive, from the first application, a plurality of data messages (application 240 is connected to application 242 via API 230; these representations indicate that data from either application may be provided to the other application via API 230, para. 0072);
route the plurality of data messages to the second application via a data pipeline (flowing of data through the data pipeline and the digital twin allow the system to compare performance of the data pipeline to a simulated performance, para. 0043);
monitor, in real-time, operation of the data pipeline (the data pipeline may continuously monitor operation of the data pipeline to identify misalignments, para. 0075; the system continues to monitor operation of the data pipeline via generation of a second live performance report and a second simulated performance report, para. 0063);
Ezrielev does not explicitly teach
limiting, based on an identified data pipeline utilization approaching a threshold, communications on the data pipeline to slow an approach towards the threshold;
route, based on a data pipeline utilization rate meeting a threshold, at least a portion of the plurality of data messages to a data repository; and
push a data message of the at least the portion of the plurality of data messages to the data pipeline when the data pipeline utilization rate falls below the threshold.
However, Rodgers teaches
limiting, based on an identified data pipeline utilization approaching a threshold, communications on the data pipeline to slow an approach towards the threshold (monitoring whether the available data buffer capacity has reached a certain utilization level, the process continues at step 420, where the data pipeline station generates a feedback hold signal; the one or more station hold signals are used to regulate transmission of data packets upstream from the data pipeline station by inhibiting the transmission of additional data packets into the data pipeline station, para. 0018);
route, based on a data pipeline utilization rate meeting a threshold, at least a portion of the plurality of data messages to a data repository (when data transmission is halted by the second switching circuitry 120, any data that is held up is stored in the data buffer 118, para. 0013; fig. 1); and
push a data message of the at least the portion of the plurality of data messages to the data pipeline when the data pipeline utilization rate falls below the threshold (monitoring whether the available data buffer capacity has reached a certain utilization level, the process continues at step 420, where the data pipeline station generates a feedback hold signal; the one or more station hold signals are used to regulate transmission of data packets upstream from the data pipeline station by inhibiting the transmission of additional data packets into the data pipeline station, para. 0018; otherwise the process reverts to step 412, where the data packet is processed by the data pipeline station and the one or more station hold signals are terminated such that data transmission from upstream data pipeline stations resumes, para. 0018) in order to facilitate transmission through the data pipeline as taught by Rodgers (para. 0013).
Therefore, based on Ezrielev in view of Rodgers, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Rodgers to the system of Ezrielev in order to facilitate transmission through the data pipeline as taught by Rodgers (para. 0013).
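By way of illustration only, the threshold-based routing and push-back recited in claim 1 may be sketched as follows. All names, values, and structures below are hypothetical and are not drawn from Ezrielev, Rodgers, or the claims themselves; the sketch merely depicts one way the recited behavior could operate.

```python
from collections import deque

class PipelineManager:
    """Hypothetical sketch of threshold-based routing between a data
    pipeline and a data repository (illustrative only)."""

    def __init__(self, threshold, capacity):
        self.threshold = threshold   # utilization rate threshold (e.g., 0.5)
        self.capacity = capacity     # maximum in-flight messages on the pipeline
        self.pipeline = deque()      # messages currently on the data pipeline
        self.repository = deque()    # data repository for deferred messages

    def utilization(self):
        # Current utilization rate of the data pipeline
        return len(self.pipeline) / self.capacity

    def route(self, message):
        # Route to the repository when utilization meets the threshold;
        # otherwise place the message on the pipeline.
        if self.utilization() >= self.threshold:
            self.repository.append(message)
        else:
            self.pipeline.append(message)

    def deliver(self):
        # Simulate the second application consuming one message, then
        # push deferred messages back once utilization falls below threshold.
        delivered = self.pipeline.popleft() if self.pipeline else None
        while self.repository and self.utilization() < self.threshold:
            self.pipeline.append(self.repository.popleft())
        return delivered
```

In this sketch, messages arriving while utilization meets the threshold are held in the repository rather than dropped, and are pushed back onto the pipeline only as consumption brings utilization below the threshold.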
With respect to claim 5, Ezrielev teaches The system of claim 1, wherein the instructions further cause the pipeline management tool platform to:
cause presentation of a user interface screen at a user device, wherein the user interface screen presents a representation of data pipeline utilization information (the system continues to monitor operation of the data pipeline via generation of a second live performance report and a second simulated performance report, para. 0063; graphical user interfaces may be used to (i) enhance stakeholder understanding of operation of the data pipeline, (ii) share information regarding interests in the data pipeline, (iii) make collective decisions, and/or otherwise facilitate management of the data pipelines, para. 0067); and
receive, based on the data pipeline utilization information, rules corresponding to message priorities assigned based on message owner information (performing processes to sort, organize, format, and/or otherwise prepare the data for future processing in the data pipeline, and/or providing the data to other data processing systems in the data pipeline, para. 0030).
With respect to claim 8, Ezrielev teaches A method comprising:
receiving, from a first application, a plurality of data messages (application 240 is connected to application 242 via API 230; these representations indicate that data from either application may be provided to the other application via API 230, para. 0072);
routing the plurality of data messages to a second application via a data pipeline (flowing of data through the data pipeline and the digital twin allow the system to compare performance of the data pipeline to a simulated performance, para. 0043);
monitoring, in real-time, operation of the data pipeline (the data pipeline may continuously monitor operation of the data pipeline to identify misalignments, para. 0075; the system continues to monitor operation of the data pipeline via generation of a second live performance report and a second simulated performance report, para. 0063);
Ezrielev does not explicitly teach
monitoring operation of the data pipeline to identify a communication rate and a memory utilization parameter;
routing, based on a data pipeline utilization rate meeting a configurable threshold, at least a portion of the plurality of data messages to a data repository; and
pushing a data message of the at least the portion of the plurality of data messages to the data pipeline when the data pipeline utilization rate falls below the configurable threshold.
However, Rodgers teaches
monitoring operation of the data pipeline to identify a communication rate and a memory utilization parameter (monitoring processing conditions at the various processing points or processing stations along the data processing pipeline; depending on a processing station's requirements, the processing rate or throughput may vary or change along the data processing pipeline to maximize data flow rate, para. 0011; and monitoring whether the available data buffer capacity has reached a certain utilization level, para. 0018);
routing, based on a data pipeline utilization rate meeting a configurable threshold, at least a portion of the plurality of data messages to a data repository (when data transmission is halted by the second switching circuitry 120, any data that is held up is stored in the data buffer 118, para. 0013; fig. 1); and
pushing a data message of the at least the portion of the plurality of data messages to the data pipeline when the data pipeline utilization rate falls below the configurable threshold (monitoring whether the available data buffer capacity has reached a certain utilization level, the process continues at step 420, where the data pipeline station generates a feedback hold signal; the one or more station hold signals are used to regulate transmission of data packets upstream from the data pipeline station by inhibiting the transmission of additional data packets into the data pipeline station, para. 0018; otherwise the process reverts to step 412, where the data packet is processed by the data pipeline station and the one or more station hold signals are terminated such that data transmission from upstream data pipeline stations resumes, para. 0018) in order to facilitate transmission through the data pipeline as taught by Rodgers (para. 0013).
Therefore, based on Ezrielev in view of Rodgers, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Rodgers to the method of Ezrielev in order to facilitate transmission through the data pipeline as taught by Rodgers (para. 0013).
With respect to claim 12, Ezrielev teaches The method of claim 8, further comprising:
causing presentation of a user interface screen at a user device, wherein the user interface screen presents a representation of data pipeline utilization information (the system continues to monitor operation of the data pipeline via generation of a second live performance report and a second simulated performance report, para. 0063; graphical user interfaces may be used to (i) enhance stakeholder understanding of operation of the data pipeline, (ii) share information regarding interests in the data pipeline, (iii) make collective decisions, and/or otherwise facilitate management of the data pipelines, para. 0067); and
receiving, based on the data pipeline utilization information, rules corresponding to message priorities assigned based on message owner information (performing processes to sort, organize, format, and/or otherwise prepare the data for future processing in the data pipeline, and/or providing the data to other data processing systems in the data pipeline, para. 0030).
With respect to claim 15, Ezrielev teaches Non-transitory computer readable media storing instructions that (the non-transitory media and a processor, and perform the computer-implemented method when the computer instructions are executed by the processor, para. 0026), when executed by a processor (processor, para. 0026), cause a computing platform to:
receive, from a first application, a plurality of data messages (application 240 is connected to application 242 via API 230; these representations indicate that data from either application may be provided to the other application via API 230, para. 0072);
route the plurality of data messages to a second application via a data pipeline (flowing of data through the data pipeline and the digital twin allow the system to compare performance of the data pipeline to a simulated performance, para. 0043);
monitor, in real-time, operation of the data pipeline (the data pipeline may continuously monitor operation of the data pipeline to identify misalignments, para. 0075; the system continues to monitor operation of the data pipeline via generation of a second live performance report and a second simulated performance report, para. 0063);
Ezrielev does not explicitly teach
route, based on a data pipeline utilization rate meeting a threshold, at least a portion of the plurality of data messages to a data repository; and
push a data message of the at least the portion of the plurality of data messages to the data pipeline when the data pipeline utilization rate falls below the threshold.
However, Rodgers teaches
route, based on a data pipeline utilization rate meeting a threshold, at least a portion of the plurality of data messages to a data repository (monitoring whether the available data buffer capacity has reached a certain utilization level, the process continues at step 420, where the data pipeline station generates a feedback hold signal; the one or more station hold signals are used to regulate transmission of data packets upstream from the data pipeline station by inhibiting the transmission of additional data packets into the data pipeline station, para. 0018; when data transmission is halted by the second switching circuitry 120, any data that is held up is stored in the data buffer 118, para. 0013; fig. 1); and
push a data message of the at least the portion of the plurality of data messages to the data pipeline when the data pipeline utilization rate falls below the threshold (monitoring whether the available data buffer capacity has reached a certain utilization level, the process continues at step 420, where the data pipeline station generates a feedback hold signal; the one or more station hold signals are used to regulate transmission of data packets upstream from the data pipeline station by inhibiting the transmission of additional data packets into the data pipeline station, para. 0018; otherwise the process reverts to step 412, where the data packet is processed by the data pipeline station and the one or more station hold signals are terminated such that data transmission from upstream data pipeline stations resumes, para. 0018) in order to facilitate transmission through the data pipeline as taught by Rodgers (para. 0013).
Therefore, based on Ezrielev in view of Rodgers, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Rodgers to the media of Ezrielev in order to facilitate transmission through the data pipeline as taught by Rodgers (para. 0013).
With respect to claim 19, Ezrielev teaches The non-transitory computer readable media of claim 15, wherein the instructions cause the computing platform to:
cause presentation of a user interface screen at a user device, wherein the user interface screen presents a representation of data pipeline utilization information (the system continues to monitor operation of the data pipeline via generation of a second live performance report and a second simulated performance report, para. 0063; graphical user interfaces may be used to (i) enhance stakeholder understanding of operation of the data pipeline, (ii) share information regarding interests in the data pipeline, (iii) make collective decisions, and/or otherwise facilitate management of the data pipelines, para. 0067); and
receive, based on the data pipeline utilization information, rules corresponding to message priorities assigned based on message owner information (performing processes to sort, organize, format, and/or otherwise prepare the data for future processing in the data pipeline, and/or providing the data to other data processing systems in the data pipeline, para. 0030).
Claims 2, 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ezrielev et al. (US 2025/0004615 A1), hereinafter referred to as Ezrielev, in view of Rodgers et al. (US 2007/0177581 A1), hereinafter referred to as Rodgers, and further in view of Athalye (US 2025/0045135 A1).
With respect to claim 2, Ezrielev in view of Rodgers teaches The system of claim 1 as described above,
Ezrielev in view of Rodgers does not explicitly teach wherein the instructions further cause the pipeline management tool platform to:
identify a communication pattern of data received from the first application; and
adjust the threshold based on the communication pattern.
However, Athalye teaches wherein the instructions further cause the pipeline management tool platform to:
identify a communication pattern of data received from the first application (the failure threshold is computed as a function of reliability, number of calls made, and units of time measured, to configure a circuit breaker. In addition, the system can change this circuit breaker configuration based on differences in traffic pattern, thereby allowing thresholds to be set differently for different patterns of traffic, which can be implemented in an automatic manner using a pipeline, para. 0029); and
adjust the threshold based on the communication pattern (the failure threshold is computed as a function of reliability, number of calls made, and units of time measured, to configure a circuit breaker. In addition, the system can change this circuit breaker configuration based on differences in traffic pattern, thereby allowing thresholds to be set differently for different patterns of traffic, which can be implemented in an automatic manner using a pipeline, para. 0029) in order to allow thresholds to be set differently for different patterns of traffic as taught by Athalye (para. 0029).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Athalye, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Athalye to the system of Ezrielev in view of Rodgers in order to allow thresholds to be set differently for different patterns of traffic as taught by Athalye (para. 0029).
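By way of illustration only, the pattern-based threshold adjustment recited in claims 2, 9 and 16 may be sketched as follows. The function name, burst heuristic, and numeric factors are hypothetical and do not come from Athalye or the claims; the sketch merely depicts one way a threshold could be set differently for different patterns of traffic.

```python
def adjusted_threshold(base_threshold, recent_rates, burst_factor=2.0):
    """Hypothetical sketch: tighten the utilization threshold when the
    identified communication pattern is bursty, so throttling begins
    earlier (illustrative only)."""
    mean_rate = sum(recent_rates) / len(recent_rates)
    peak_rate = max(recent_rates)
    # Bursty pattern: peak rate far above the mean -> lower the threshold
    if peak_rate > burst_factor * mean_rate:
        return base_threshold * 0.75
    return base_threshold
```

Under this sketch, steady traffic leaves the configured threshold unchanged, while a bursty pattern lowers it so the pipeline begins deferring messages sooner.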
With respect to claim 9, Ezrielev in view of Rodgers teaches The method of claim 8 as described above,
Ezrielev in view of Rodgers does not explicitly teach further comprising:
identifying a communication pattern of data received from the first application; and
adjusting the threshold based on the communication pattern.
However, Athalye teaches:
identifying a communication pattern of data received from the first application (the failure threshold is computed as a function of reliability, number of calls made, and units of time measured, to configure a circuit breaker. In addition, the system can change this circuit breaker configuration based on differences in traffic pattern, thereby allowing thresholds to be set differently for different patterns of traffic, which can be implemented in an automatic manner using a pipeline, para. 0029); and
adjusting the threshold based on the communication pattern (the failure threshold is computed as a function of reliability, number of calls made, and units of time measured, to configure a circuit breaker. In addition, the system can change this circuit breaker configuration based on differences in traffic pattern, thereby allowing thresholds to be set differently for different patterns of traffic, which can be implemented in an automatic manner using a pipeline, para. 0029) in order to allow thresholds to be set differently for different patterns of traffic as taught by Athalye (para. 0029).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Athalye, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Athalye to the method of Ezrielev in view of Rodgers in order to allow thresholds to be set differently for different patterns of traffic as taught by Athalye (para. 0029).
With respect to claim 16, Ezrielev in view of Rodgers teaches The non-transitory computer readable media of claim 15 as described above,
Ezrielev in view of Rodgers does not explicitly teach wherein the instructions cause the computing platform to:
identify a communication pattern of data received from the first application; and
adjust the threshold based on the communication pattern.
However, Athalye teaches wherein the instructions cause the computing platform to:
identify a communication pattern of data received from the first application (the failure threshold is computed as a function of reliability, number of calls made, and units of time measured, to configure a circuit breaker. In addition, the system can change this circuit breaker configuration based on differences in traffic pattern, thereby allowing thresholds to be set differently for different patterns of traffic, which can be implemented in an automatic manner using a pipeline, para. 0029); and
adjust the threshold based on the communication pattern (the failure threshold is computed as a function of reliability, number of calls made, and units of time measured, to configure a circuit breaker. In addition, the system can change this circuit breaker configuration based on differences in traffic pattern, thereby allowing thresholds to be set differently for different patterns of traffic, which can be implemented in an automatic manner using a pipeline, para. 0029) in order to allow thresholds to be set differently for different patterns of traffic as taught by Athalye (para. 0029).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Athalye, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Athalye to the media of Ezrielev in view of Rodgers in order to allow thresholds to be set differently for different patterns of traffic as taught by Athalye (para. 0029).
Claims 3-4, 10-11 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ezrielev et al. (US 2025/0004615 A1), hereinafter referred to as Ezrielev, in view of Rodgers et al. (US 2007/0177581 A1), hereinafter referred to as Rodgers, and further in view of Chattopadhyay (US 2025/0224940 A1).
With respect to claim 3, Ezrielev in view of Rodgers teaches The system of claim 1 as described above,
Ezrielev in view of Rodgers does not explicitly teach wherein the data repository comprises a database.
However, Chattopadhyay teaches wherein the data repository comprises a database (a “pipeline” or “data pipeline” may refer to data processing or a data flow or portion thereof wherein data from various data sources (e.g., a database or flat file) is moved to a data repository, para. 0017) in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Chattopadhyay, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Chattopadhyay to the system of Ezrielev in view of Rodgers in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
With respect to claim 4, Ezrielev in view of Rodgers teaches The system of claim 1 as described above,
Ezrielev in view of Rodgers does not explicitly teach wherein the data repository comprises a file.
However, Chattopadhyay teaches wherein the data repository comprises a file (a “pipeline” or “data pipeline” may refer to data processing or a data flow or portion thereof wherein data from various data sources (e.g., a database or flat file) is moved to a data repository, para. 0017) in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Chattopadhyay, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Chattopadhyay to the system of Ezrielev in view of Rodgers in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
With respect to claim 10, Ezrielev in view of Rodgers teaches The method of claim 8 as described above,
Ezrielev in view of Rodgers does not explicitly teach wherein the data repository comprises a database.
However, Chattopadhyay teaches wherein the data repository comprises a database (a “pipeline” or “data pipeline” may refer to data processing or a data flow or portion thereof wherein data from various data sources (e.g., a database or flat file) is moved to a data repository, para. 0017) in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Chattopadhyay, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Chattopadhyay to the method of Ezrielev in view of Rodgers in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
With respect to claim 11, Ezrielev in view of Rodgers teaches The method of claim 8 as described above,
Ezrielev in view of Rodgers does not explicitly teach wherein the data repository comprises a file.
However, Chattopadhyay teaches wherein the data repository comprises a file (a “pipeline” or “data pipeline” may refer to data processing or a data flow or portion thereof wherein data from various data sources (e.g., a database or flat file) is moved to a data repository, para. 0017) in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Chattopadhyay, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Chattopadhyay to the method of Ezrielev in view of Rodgers in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
With respect to claim 17, Ezrielev in view of Rodgers teaches The non-transitory computer readable media of claim 15 as described above,
Ezrielev in view of Rodgers does not explicitly teach wherein the data repository comprises a database.
However, Chattopadhyay teaches wherein the data repository comprises a database (a “pipeline” or “data pipeline” may refer to data processing or a data flow or portion thereof wherein data from various data sources (e.g., a database or flat file) is moved to a data repository, para. 0017) in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Chattopadhyay, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Chattopadhyay to the media of Ezrielev in view of Rodgers in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
With respect to claim 18, Ezrielev in view of Rodgers teaches The non-transitory computer readable media of claim 15 as described above,
Ezrielev in view of Rodgers does not explicitly teach wherein the data repository comprises one or both of a file and a database.
However, Chattopadhyay teaches wherein the data repository comprises one or both of a file and a database (a “pipeline” or “data pipeline” may refer to data processing or a data flow or portion thereof wherein data from various data sources (e.g., a database or flat file) is moved to a data repository, para. 0017) in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Chattopadhyay, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Chattopadhyay to the media of Ezrielev in view of Rodgers in order to improve performance by optimizing a data flow as taught by Chattopadhyay (para. 0030).
Claims 6-7, 13-14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ezrielev et al. (US 2025/0004615 A1), hereinafter referred to as Ezrielev, in view of Rodgers et al. (US 2007/0177581 A1), hereinafter referred to as Rodgers, and further in view of Bladow (US 2023/0142107 A1).
With respect to claim 6, Ezrielev in view of Rodgers teaches The system of claim 5 as described above,
Ezrielev in view of Rodgers does not explicitly teach wherein the instructions cause the pipeline management tool platform to:
assign, based on message owner information, a priority to each message received from the first application; and
order, based on the priority, each message sent via the data pipeline.
However, Bladow teaches wherein the instructions cause the pipeline management tool platform to:
assign, based on message owner information, a priority to each message received from the first application (the data pipeline management system 110 assigns data pipelines 112-122 of the same priority to the same environment 102-106; the pipeline execution module 184 executes a set of data pipelines 112-114 with a high priority in environment 102, a set of data pipelines 116 with a medium priority in environment 104, and a set of data pipelines 118-122 with a low priority in environment 106, para. 0050); and
order, based on the priority, each message sent via the data pipeline (the data pipeline management system 110 assigns data pipelines 112-122 of the same priority to the same environment 102-106; the pipeline execution module 184 executes a set of data pipelines 112-114 with a high priority in environment 102, a set of data pipelines 116 with a medium priority in environment 104, and a set of data pipelines 118-122 with a low priority in environment 106, para. 0050) in order to improve the performance of data management functionality as taught by Bladow (para. 0049).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Bladow, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Bladow to the system of Ezrielev in view of Rodgers in order to improve the performance of data management functionality as taught by Bladow (para. 0049).
With respect to claim 7, Ezrielev in view of Rodgers, and further in view of Bladow teaches The system of claim 6 as described above,
Further, Bladow teaches wherein the instructions cause the pipeline management tool platform to push, based on the priority, high priority messages from the data repository before low priority messages (the data pipeline management system 110 assigns data pipelines 112-122 of the same priority to the same environment 102-106; the pipeline execution module 184 executes a set of data pipelines 112-114 with a high priority in environment 102, a set of data pipelines 116 with a medium priority in environment 104, and a set of data pipelines 118-122 with a low priority in environment 106, para. 0050) in order to improve the performance of data management functionality as taught by Bladow (para. 0049).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Bladow, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of Bladow to the system of Ezrielev in view of Rodgers in order to improve the performance of data management functionality as taught by Bladow (para. 0049).
With respect to claim 13, Ezrielev in view of Rodgers teaches The method of claim 12 as described above.
Ezrielev in view of Rodgers does not explicitly teach further comprising:
assigning, based on message owner information, a priority to each message received from the first application; and
ordering, based on the priority, each message sent via the data pipeline.
However, Bladow teaches further comprising:
assigning, based on message owner information, a priority to each message received from the first application (the data pipeline management system 110 assigns data pipelines 112-122 of the same priority to the same environment 102-106, and the pipeline execution module 184 executes a set of data pipelines 112-114 with a high priority in environment 102, a set of data pipelines 116 with a medium priority in environment 104, and a set of data pipelines 118-122 with a low priority in environment 106, para. 0050); and
ordering, based on the priority, each message sent via the data pipeline (the data pipeline management system 110 assigns data pipelines 112-122 of the same priority to the same environment 102-106, and the pipeline execution module 184 executes a set of data pipelines 112-114 with a high priority in environment 102, a set of data pipelines 116 with a medium priority in environment 104, and a set of data pipelines 118-122 with a low priority in environment 106, para. 0050) in order to improve the performance of data management functionality as taught by Bladow (para. 0049).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Bladow, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Bladow to the method of Ezrielev in view of Rodgers in order to improve the performance of data management functionality as taught by Bladow (para. 0049).
With respect to claim 14, Ezrielev in view of Rodgers, and further in view of Bladow teaches The method of claim 13 as described above.
Further, Bladow teaches further comprising pushing, based on the priority, high priority messages from the data repository before low priority messages (the data pipeline management system 110 assigns data pipelines 112-122 of the same priority to the same environment 102-106, and the pipeline execution module 184 executes a set of data pipelines 112-114 with a high priority in environment 102, a set of data pipelines 116 with a medium priority in environment 104, and a set of data pipelines 118-122 with a low priority in environment 106, para. 0050) in order to improve the performance of data management functionality as taught by Bladow (para. 0049).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Bladow, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Bladow to the method of Ezrielev in view of Rodgers in order to improve the performance of data management functionality as taught by Bladow (para. 0049).
With respect to claim 20, Ezrielev in view of Rodgers teaches The non-transitory computer readable media of claim 15 as described above.
Ezrielev in view of Rodgers does not explicitly teach wherein the instructions cause the computing platform to:
assign, based on message owner information, a priority to each message received from the first application;
order, based on the priority, each message sent via the data pipeline; and
push, based on the priority, high priority messages from the data repository before low priority messages.
However, Bladow teaches wherein the instructions cause the computing platform to:
assign, based on message owner information, a priority to each message received from the first application (the data pipeline management system 110 assigns data pipelines 112-122 of the same priority to the same environment 102-106, and the pipeline execution module 184 executes a set of data pipelines 112-114 with a high priority in environment 102, a set of data pipelines 116 with a medium priority in environment 104, and a set of data pipelines 118-122 with a low priority in environment 106, para. 0050);
order, based on the priority, each message sent via the data pipeline (the data pipeline management system 110 assigns data pipelines 112-122 of the same priority to the same environment 102-106, and the pipeline execution module 184 executes a set of data pipelines 112-114 with a high priority in environment 102, a set of data pipelines 116 with a medium priority in environment 104, and a set of data pipelines 118-122 with a low priority in environment 106, para. 0050); and
push, based on the priority, high priority messages from the data repository before low priority messages (the data pipeline management system 110 assigns data pipelines 112-122 of the same priority to the same environment 102-106, and the pipeline execution module 184 executes a set of data pipelines 112-114 with a high priority in environment 102, a set of data pipelines 116 with a medium priority in environment 104, and a set of data pipelines 118-122 with a low priority in environment 106, para. 0050) in order to improve the performance of data management functionality as taught by Bladow (para. 0049).
Therefore, based on Ezrielev in view of Rodgers, and further in view of Bladow, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Bladow to the media of Ezrielev in view of Rodgers in order to improve the performance of data management functionality as taught by Bladow (para. 0049).
Response to Arguments
Applicant's arguments with respect to claims 1-20 have been considered but are moot because the arguments do not apply to any of the references being used in the current rejection.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAO HONG NGUYEN whose telephone number is (571)272-2666. The examiner can normally be reached on Monday-Friday 8AM-4:30PM EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joon H. Hwang can be reached on (571)272-4036. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/H.H.N/Examiner, Art Unit 2447
February 12, 2026
/JOON H HWANG/Supervisory Patent Examiner, Art Unit 2447