Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/10/2026 has been entered.
Detailed Action
This is a non-final Office Action in response to arguments and amendments filed on 02/10/2026. Claims 1, 9 and 17 are currently amended. Claims 1-20 are pending and examined below.
Response to Arguments
Applicant’s arguments, see pgs. 13-25, filed 02/10/2026, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. § 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Lavasani et al. (US Pub. 2019/0392002).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Jerzak (US Pub. 2015/0169786) in view of Lavasani et al. (US Pub. 2019/0392002).
Regarding claim 1, Jerzak and Lavasani teach
A method comprising: generating a first data flow diagram according to input channel description information, a structured query language (SQL) statement, and output channel description information, wherein the first data flow diagram comprises a plurality of logical nodes defining at least a first data processing stream, a first sink logical node, wherein the input channel description information defines an input channel, and wherein the first data flow diagram is a temporary logic-level data flow diagram obtained after one-tier compiling based on the SQL statement; (Fig. 1; Par. [0015-6, 34-38] event stream processor contains a data flow diagram that goes from event streams that run through input adapters (i.e. first nodes/input channel description information; #106) into one or more ESP Engines (i.e. intermediate nodes; #102) based on the query plan written by a custom external adapter used for dividing traffic (i.e. intermediate node of temporary logic-level data flow diagram after one-tier compiling) sent to the respective ESP, which uses the query plan to produce an output (#114) to the desired destination (i.e. sink; #116A-D))
classifying the logical nodes in the first data flow diagram to obtain a plurality of logical node groups; (Fig. 2; Par. [0017-9] the parsed ESP query of Fig. 1 (#109) is grouped according to operator window (i.e. logical node groups))
selecting, from a preset operator library, one common operator corresponding to each logical node group of the logical node groups, wherein the one common operator implements the multiple different types of functions in the corresponding logical node group; (Fig. 2; Par. [0017-20, 24] operator utilization (i.e. preset operator) is used as a criteria for measuring bottlenecks, and the partitioning module (#111) that calculates operator utilization can include processing logic that includes hardware or software logic (i.e. multiple different types of functions))
generating a second data flow diagram according to the common operator, wherein each common operator in the second data flow diagram implements functions of the multiple logical nodes in the corresponding logical node group; (Fig. 1; Par. [0015-6] the optimally partitioned query (i.e. according to the common operator) is sent to the query plan generator (i.e. second, executable code-level data flow diagram based on the first diagram) before being sent to and executed by the ESP (#102))
and controlling, according to the second data flow diagram, a worker node of a stream computing system to execute a stream computing task. (Fig. 3; Par. [0020-2] a second DAG (i.e. second data flow diagram) is generated which partitions bottlenecked operator (#206) into two operators (#302, #304) which are used for reducing the bottleneck via splitter operator (#306) and merging operator (#308) (i.e. controlling worker nodes))
Jerzak does not explicitly teach
wherein each logical node group comprises multiple logical nodes which implement multiple different types of functions
However, from the same field, Lavasani teaches
wherein each logical node group comprises multiple logical nodes which implement multiple different types of functions (Fig. 2; Par. [0035, 49, 52-3, 59] a substage plan (#204) is generated based on the execution plan and is used for accelerating the subgraph (i.e. logical node with multiple types of functions) associated with a node of an application program in FPGA hardware)
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to combine the substage planning of Lavasani into the streaming system of Jerzak. The motivation for this combination would have been to accelerate big data operations as explained in Lavasani (Abs., Par. [0004]).
Regarding claim 2, Jerzak and Lavasani teach claim 1 as shown above, and Jerzak further teaches
The method of claim 1, wherein the first data processing stream comprises a first source logical node, at least one first intermediate logical nodes, a first sink logical node, wherein the plurality of logical nodes further define a second data processing stream comprising a second source logical node, at least one second intermediate logical nodes, and a second sink logical node, wherein the input channel further comprises a second logical channel configured to input a second input data stream to the second source logical node, and wherein the at least one of the at least one second intermediate logical nodes generates input data to both a first intermediate logical node of the at least one first intermediate logical nodes and a second intermediate logical node of the at least one second intermediate logical nodes. (Fig. 1; Figs. 3-4; Par. [0015-6] event stream processor contains a data flow diagram that goes from event streams (i.e. one or more source nodes) that run through input adapters into the ESP Engine (i.e. intermediate logical nodes #102), which uses query planning to produce an output (i.e. sink logical nodes #114), and second DAG (i.e. data flow diagram with directed edges) contains a feed (i.e. sources #202), a number of intermediate operators (#204, 208, 302-8, 400A-404B, 408), and outputs a result (i.e. sink operator))
Regarding claim 3, Jerzak and Lavasani teach claim 2 as shown above, and Jerzak further teaches
The method of claim 2, wherein the input channel comprises a first logical channel configured to input a first input data stream to the first source logical node, wherein the SQL statement comprises a plurality of SQL rules, and wherein the stream computing method further comprises: generating the first source logical node and the second source logical node in the first data flow diagram according to the input channel description information; (Fig. 1; Par. [0015, 18] paragraphs contain SQL statement which include source information from event streams (i.e. source logical nodes; #104A-E))
generating the at least one first intermediate logical nodes and the at least one second intermediate logical nodes in the first data flow diagram according to select substatements in the SQL rules, wherein the at least one first intermediate logical nodes indicate a first computational logic for processing the first input data stream, and wherein the at least one second intermediate logical nodes indicate a second computational logic for processing the second input data stream; (Fig. 1; Fig. 2 Par. [0015, 18, 34-38] paragraphs contain SQL statement which include intermediate steps like calculating averages (i.e. intermediate logical node) which correspond to one SQL rule (e.g. CREATE SCHEMA averageSchema), and can be run by each ESP engine (i.e. first and second intermediate logical nodes using different computations))
generating the first sink logical node and the second sink logical node in the first data flow diagram according to the output channel description information; (Fig. 1; Fig. 2 Par. [0015, 18, 34-38] paragraphs contain SQL statement which include output (i.e. sink) steps which correspond to one SQL rule (e.g. CREATE OUTPUT WINDOW))
and generating first directed edges according to an input substatement in each SQL rule of the SQL rules, an output substatement in each SQL rule of the SQL rules, or both the input substatement and the output substatement, wherein the first directed edges connect the first source logical node to the first sink logical node using the at least one first intermediate logical nodes, and wherein the first directed edges further connect the second source logical node to the second sink logical node using the at least one second intermediate logical nodes. (Fig. 1; Fig. 2 Par. [0015, 18, 34-38] paragraphs contain SQL statement which include output (i.e. sink) steps which correspond to one SQL rule (e.g. CREATE OUTPUT WINDOW) and are based on queries for generating a DAG (i.e. directed edges) like in Figs. 2-4, including any number of sources, sinks, and intermediate steps)
Regarding claim 4, Jerzak and Lavasani teach claim 3 as shown above, and Jerzak further teaches
The method of claim 3, wherein the second data flow diagram comprises a source operator, an intermediate operator, and a sink operator coupled using second directed edges, wherein the preset operator library comprises a common source operator, a common intermediate operator, and a common sink operator, and wherein the stream computing method further comprises: compiling the common source operator to obtain the source operator in the second data flow diagram; (Fig. 3; Par. [0021-2] Fig. 3 is an optimized (i.e. second compiled) DAG of Fig. 2 DAG containing new splitter (i.e. source operator #306), new partitioned operators (#302-4), and new merge operator (i.e. common sink #308))
selecting, from the preset operator library, at least one common intermediate operator for each logical node group of the logical node groups; (Par. [0027] the splitter has three strategies (i.e. preset operations; e.g. round robin, hash, custom) for partitioning the data stream to a partitioned operator (i.e. intermediate operator))
compiling the at least one common intermediate operator to obtain the intermediate operator in the second data flow diagram; (Fig. 3; Par. [0027-30] each strategy (i.e. intermediate operator) is used as illustrated in DAG of Fig. 3)
compiling, the common sink operator to obtain the sink operator in the second data flow diagram; (Fig. 3; Par. [0032] the merge operator (i.e. sink operator #308) performs a logical union (i.e. compiling) and is illustrated in Fig. 3 DAG)
and generating the second directed edges according to the first directed edges. (Fig. 3; Par. [0021-2] Fig. 3 is an optimized DAG (i.e. generated) of Fig. 2 DAG (i.e. according to the first directed edges))
Regarding claim 5, Jerzak and Lavasani teach claim 4 as shown above, and Jerzak further teaches
The method of claim 4, further comprising: receiving, from a client, modification information for modifying an SQL rule of the SQL rules; (Fig. 3; Par. [0030-1] the custom strategy receives user input (i.e. client modification of the SQL rules; e.g. information about range in range-based partitioning scheme))
and adding, modifying, or deleting, according to the modification information, an intermediate operator in the second data flow diagram. (Fig. 3; Par. [0021-2] Fig. 3 is an optimized (i.e. modified) DAG (i.e. generated) of Fig. 2 DAG)
Regarding claim 6, Jerzak and Lavasani teach claim 4 as shown above, and Jerzak further teaches
The method of claim 4, further comprising: receiving, from a client, modification information for modifying the input channel description information; (Fig. 3; Par. [0030-1] the custom strategy receives user input (i.e. client modification of the channel description; e.g. numbers of ranges and partitions))
and adding, modifying, or deleting the source operator in the second data flow diagram according to the modification information. (Fig. 3; Par. [0021-2] Fig. 3 is an optimized (i.e. modified) DAG (i.e. generated) of Fig. 2 DAG)
Regarding claim 7, Jerzak and Lavasani teach claim 4 as shown above, and Jerzak further teaches
The method of claim 4, further comprising: receiving, from a client, modification information for modifying the output channel description information; (Fig. 3; Par. [0032] merge operator (i.e. output channel) can be augmented (i.e. via user input) to sort output)
and adding, modifying, or deleting the sink operator in the second data flow diagram according to the modification information. (Fig. 3; Par. [0021-2] Fig. 3 is an optimized (i.e. modified) DAG (i.e. generated) of Fig. 2 DAG)
Regarding claim 8, Jerzak and Lavasani teach claim 1 as shown above, and Jerzak further teaches
The method of claim 1, wherein controlling the worker node to execute the stream computing task comprises: scheduling each operator in the second data flow diagram to at least one worker node of a plurality of working nodes in the stream computing system; (Fig. 3; Par. [0020-2, 34] each operator in second DAG (i.e. second data flow diagram) is generated, and operators can be grouped into modules (i.e. worker nodes) which process the stream data)
generating, according to an output data stream of each operator in the second data flow diagram, subscription publication information corresponding to the operator, wherein the subscription publication information indicates a manner of sending a respective output data stream corresponding to the operator; (Fig. 1; Par. [0015-6, 48] the output of the ESP engine (#112) after executing optimized query plan (i.e. according to output data stream of operators) contains output (#114) instructions including storing data for later use or sending data to various destinations (#116A-D))
configuring, for each operator in the second data flow diagram, the subscription publication information for the operator; (Fig. 3; Par. [0021-2] each operator in the second DAG (Fig. 3) has the output (i.e. subscription publication information) of the operator determined according to the optimized query criteria)
generating, for each operator in the second data flow diagram and according to an input data stream of the operator, input stream definition information corresponding to the operator, wherein the input stream definition information indicates a manner of receiving a respective input data stream corresponding to the operator; (Par. [0020, 27-30] after determining where the bottleneck is and isn't (i.e. input stream definition info) in the stream processing (i.e. according to the input stream of the operator), splitter (i.e. manner of receiving) is used to reduce load on high load operators and low load operators receive data as normal (i.e. another manner of receiving))
and configuring, for each operator in the second data flow diagram, the input stream definition information for the operator. (Par. [0020, 27-30] after determining where the bottleneck is and isn't (i.e. input stream definition info for the operators) in the stream processing (i.e. according to the input stream of the operator), splitter is used to reduce load on high load operators and low load operators receive data as normal)
Regarding claim 9, claim language is slightly different than claim 1, but is rejected under a similar rationale. Jerzak further teaches
a memory configured to store instructions (Fig. 8 #804-6);
and a processor coupled to the memory (Fig. 8 #802)
Regarding claim 10, claim language is slightly different than claim 2, but is rejected under a similar rationale.
Regarding claim 11, claim language is slightly different than claim 3, but is rejected under a similar rationale.
Regarding claim 12, claim language is slightly different than claim 4, but is rejected under a similar rationale.
Regarding claim 13, claim language is slightly different than claim 5, but is rejected under a similar rationale.
Regarding claim 14, claim language is slightly different than claim 6, but is rejected under a similar rationale.
Regarding claim 15, claim language is slightly different than claim 7, but is rejected under a similar rationale.
Regarding claim 16, claim language is slightly different than claim 8, but is rejected under a similar rationale.
Regarding claim 17, claim language is slightly different than claim 9, but is rejected under a similar rationale.
Regarding claim 18, claim language is slightly different than claim 2, but is rejected under a similar rationale.
Regarding claim 19, claim language is slightly different than claim 3, but is rejected under a similar rationale.
Regarding claim 20, claim language is slightly different than claim 4, but is rejected under a similar rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to J MITCHELL CURRAN whose telephone number is (469)295-9081. The examiner can normally be reached M-F 8:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi can be reached at (571) 272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J MITCHELL CURRAN/Examiner, Art Unit 2169
/SHERIEF BADAWI/Supervisory Patent Examiner, Art Unit 2169