Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 2 – 21 are pending.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 2 – 7, 9 – 10, 14 – 15 and 17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 – 3, 6, 8 – 9, 13 and 15 – 16 of U.S. Patent No. 12,141,183. Although the claims at issue are not identical, they are not patentably distinct from each other because of the mapping below.
Current Application 18/942,065
U.S. Patent No. 12,141,183
2. (New) A method comprising: receiving, by a data intake and query system, a query identifying a set of data to be processed and a manner of processing the set of data; defining, by the data intake and query system, a query processing scheme, the query processing scheme including: first instructions to dynamically allocate a first subset of a set of processors to process the set of data based at least in part on the query identifying the manner of processing the set of data, wherein the set of data is associated with a plurality of dataset sources, and second instructions to dynamically allocate a second subset of the set of processors to output results of processing the set of data; and executing the query based at least in part on the query processing scheme.
1. A method comprising: receiving, by a data intake and query system, a query identifying a set of data to be processed and a manner of processing the set of data; defining, by the data intake and query system, a query processing scheme for obtaining and processing the set of data, the query processing scheme including: first instructions to dynamically allocate a first subset of a set of processors to interact with one or more first dataset sources and obtain, from the one or more first dataset sources, a first subset of the set of data based at least in part on the query identifying the set of data to be processed; second instructions to dynamically allocate a second subset of the set of processors to interact with one or more second dataset sources and obtain, from the one or more second dataset sources, a second subset of the set of data based at least in part on the query identifying the set of data to be processed; and third instructions to dynamically allocate a third subset of the set of processors to process the set of data based at least in part on the query identifying the manner of processing the set of data, the set of data obtained at least in part by the first subset of the set of processors and the second subset of the set of processors; and executing the query based at least in part on the query processing scheme.
3. (New) The method of claim 2, wherein the set of data is obtained at least in part by a third subset of the set of processors.
1. A method comprising: receiving, by a data intake and query system, a query identifying a set of data to be processed and a manner of processing the set of data; defining, by the data intake and query system, a query processing scheme for obtaining and processing the set of data, the query processing scheme including: first instructions to dynamically allocate a first subset of a set of processors to interact with one or more first dataset sources and obtain, from the one or more first dataset sources, a first subset of the set of data based at least in part on the query identifying the set of data to be processed; second instructions to dynamically allocate a second subset of the set of processors to interact with one or more second dataset sources and obtain, from the one or more second dataset sources, a second subset of the set of data based at least in part on the query identifying the set of data to be processed; and third instructions to dynamically allocate a third subset of the set of processors to process the set of data based at least in part on the query identifying the manner of processing the set of data, the set of data obtained at least in part by the first subset of the set of processors and the second subset of the set of processors; and executing the query based at least in part on the query processing scheme.
4. (New) The method of claim 2, wherein the query processing scheme further includes: third instructions to dynamically allocate a third subset of the set of processors to obtain the set of data from the plurality of dataset sources.
1. A method comprising: receiving, by a data intake and query system, a query identifying a set of data to be processed and a manner of processing the set of data; defining, by the data intake and query system, a query processing scheme for obtaining and processing the set of data, the query processing scheme including: first instructions to dynamically allocate a first subset of a set of processors to interact with one or more first dataset sources and obtain, from the one or more first dataset sources, a first subset of the set of data based at least in part on the query identifying the set of data to be processed; second instructions to dynamically allocate a second subset of the set of processors to interact with one or more second dataset sources and obtain, from the one or more second dataset sources, a second subset of the set of data based at least in part on the query identifying the set of data to be processed; and third instructions to dynamically allocate a third subset of the set of processors to process the set of data based at least in part on the query identifying the manner of processing the set of data, the set of data obtained at least in part by the first subset of the set of processors and the second subset of the set of processors; and executing the query based at least in part on the query processing scheme.
5. (New) The method of claim 2, wherein the second instructions comprise instructions to dynamically allocate the second subset of the set of processors to output the results of processing the set of data to one or more processors of a dataset destination.
16. The method of claim 1, wherein executing the query comprises receiving results from the set of processors, performing at least one operation on the results, wherein the at least one operation comprises at least one of collating the results or processing the results, and communicating output of the at least one operation to a computing device.
6. (New) The method of claim 2, wherein the query further identifies a dataset destination for storage of the results of processing the set of data.
6. The method of claim 1, wherein a fourth subset of the set of processors are dynamically allocated to communicate data to at least one dataset destination.
7. (New) The method of claim 2, wherein the second instructions comprise instructions to dynamically allocate the second subset of the set of processors to output the results of processing the set of data to one or more first processors of a first dataset destination, wherein the query processing scheme further includes: third instructions to dynamically allocate a third subset of the set of processors to output the results of processing the set of data to one or more second processors of a second dataset destination.
3. The method of claim 1, wherein the query processing scheme further includes instructions to dynamically allocate the set of processors to multiple layers of processors to execute the query, wherein the first subset of the set of processors and the second subset of the set of processors are dynamically allocated to a first layer of processors of the multiple layers of processors based at least in part on the first instructions and the second instructions, and wherein the third subset of the set of processors are dynamically allocated to a second layer of processors of the multiple layers of processors based at least in part on the third instructions.
9. (New) The method of claim 2, further comprising: determining one or more of a processing capability or a number of processors associated with a dataset destination, wherein defining the query processing scheme comprises defining the query processing scheme based at least in part on the one or more of the processing capability or the number of processors associated with the dataset destination, wherein the second instructions comprise instructions to dynamically allocate the second subset of the set of processors to output the results of processing the set of data to one or more processors of the dataset destination.
8. The method of claim 1, wherein defining the query processing scheme comprises determining a processing capability of a dataset source of the one or more first dataset sources or the one or more second dataset sources and generating a subquery for the dataset source based at least in part on determining the processing capability, the subquery identifying at least a subset of the set of data to be processed and a manner of processing the at least a subset of the set of data.
10. (New) The method of claim 2, wherein the second instructions comprise instructions to dynamically allocate the second subset of the set of processors to output the results of processing the set of data to one or more of an ingested data buffer, an external data source, or a query acceleration data store.
13. The method of claim 1, wherein the one or more first dataset sources and the one or more second dataset sources comprise at least one of a plurality of indexers or an ingested data buffer and at least one processor of the set of processors is dynamically allocated to obtain at least a subset of the set of data from the at least one of the plurality of indexers or the ingested data buffer based at least in part on one or more of the first instructions or the second instructions.
14. (New) The method of claim 2, wherein the set of processors comprises multiple layers of processors.
2. The method of claim 1, wherein the query processing scheme further includes instructions to dynamically allocate the set of processors to multiple layers of processors to execute the query.
15. (New) The method of claim 2, wherein executing the query comprises: generating instructions for execution by at least a portion of the set of processors based at least in part on the query processing scheme; and communicating the instructions to the at least a portion of the set of processors.
9. The method of claim 1, wherein defining the query processing scheme comprises generating instructions for execution by the set of processors, and wherein executing the query comprises communicating the instructions to the set of processors.
17. (New) The method of claim 2, wherein defining the query processing scheme comprises: generating directed acyclic graph instructions.
15. The method of claim 1, wherein defining the query processing scheme comprises generating directed acyclic graph instructions to execute the query on the set of processors, and wherein executing the query comprises communicating the directed acyclic graph instructions to the set of processors.
Claim 18 is rejected using similar rationale to the mapping of claim 2 above.
Claim 19 is rejected using similar rationale to the mapping of claim 4 above.
Claim 20 is rejected using similar rationale to the mapping of claim 2 above.
Claim 21 is rejected using similar rationale to the mapping of claim 5 above.
Claims 2 – 7, 9 – 10, 14 – 15 and 17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, 9, 10, 24 and 26 of U.S. Patent No. 11,281,706. Although the claims at issue are not identical, they are not patentably distinct from each other because of a mapping similar to that of U.S. Patent No. 12,141,183 above, and as previously applied during prosecution of U.S. Patent No. 12,141,183.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2 – 21 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2016/0085810 to Alexandre de Castro Alves et al. (hereinafter referred to as Alves) in view of U.S. Patent Application Publication No. 2017/0026441 to Christopher Moudy et al. (hereinafter referred to as Moudy).
As to claim 2, Alves discloses a method comprising:
receiving, by a data intake and query system, a query identifying a set of data to be processed and a manner of processing the set of data (execute a query on an event stream processing server, see Alves: Para. 0049);
defining, by the data intake and query system, a query processing scheme (event processing application comprising query execution logic that is rule driven, see Alves: Para. 0049 – 0052), the query processing scheme including:
dynamically allocating a first subset of a set of processors to process the set of data based at least in part on the query (processing application distributed across disparate processing nodes, see Alves: Para. 0064, and dynamically grow and shrink resources based on increases/decreases in demand, see Alves: Para. 0097), wherein the set of data is associated with a plurality of dataset sources (events being processed are received from one or more event sources, see Alves: Para. 0047 – 0052); and
executing the query based at least in part on the query processing scheme (processing the query to generate output event streams, see Alves: Para. 0049 - 0052).
However, Alves does not explicitly disclose first instructions to dynamically allocate a first subset of a set of processors to process the set of data based at least in part on the query identifying the manner of processing the set of data; and second instructions to dynamically allocate a second subset of the set of processors to output results of processing the set of data.
Moudy teaches first instructions to dynamically allocate a first subset of a set of processors to process the set of data based at least in part on the query identifying the manner of processing the set of data (tasks/sub-tasks are dynamically allocated to sets of processors based on resource needs, tasks may be assigned so that a first set of tasks are assigned to a first set of processors and a second set of tasks, that a first set does not depend from, are assigned to a second set of processors, see Moudy: Para. 0093 - 0098, and tasks may include receiving of messages and processing of received messages, see Moudy: Para. 0103 - 0164), and
second instructions to dynamically allocate a second subset of the set of processors to output results of processing the set of data (tasks/sub-tasks are dynamically allocated to sets of processors based on resource needs, tasks may be assigned so that a first set of tasks are assigned to a first set of processors and a second set of tasks, that a first set does not depend from, are assigned to a second set of processors, see Moudy: Para. 0093 - 0098, and tasks may include receiving of messages and processing of received messages, see Moudy: Para. 0103 - 0164).
Alves and Moudy are analogous art, as both disclose allocation of resources for executing multi-part query plans/logic.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the query execution logic for event input streams of Alves with the allocation of processors for executing tasks based on resources needed for the tasks/sub-tasks of Moudy in order to process sets of data in real-time by using a distributed network to generate and process partitioned streams.
As to claim 3, Alves modified by Moudy discloses the method of claim 2, wherein the set of data is obtained at least in part by a third subset of the set of processors (event processing application comprising query execution logic that is rule driven including processing input streams from event sources, see Alves: Para. 0049 and 0052, and processing applications distributed across disparate processing nodes, see Alves: Para. 0064, and storing events in one or more databases of a distributed system, see Alves: Para. 0274 – 0275, such as cloud database services, see Alves: Para. 0283 – 0284, and tasks/sub-tasks are dynamically allocated to sets of processors based on resource needs, tasks may be assigned so that a first set of tasks are assigned to a first set of processors and a second set of tasks, that a first set does not depend from, are assigned to a second set of processors, see Moudy: Para. 0093 – 0098, and tasks may include receiving of messages and processing of received messages, see Moudy: Para. 0103 – 0105).
As to claim 4, Alves modified by Moudy discloses the method of claim 2, wherein the query processing scheme further includes: third instructions to dynamically allocate a third subset of the set of processors to obtain the set of data from the plurality of dataset sources (tasks/sub-tasks are dynamically allocated to sets of processors based on resource needs, tasks may be assigned so that a first set of tasks are assigned to a first set of processors and a second set of tasks, that a first set does not depend from, are assigned to a second set of processors, see Moudy: Para. 0093 – 0098, and tasks may be assigned to task processers for two input streams processed in parallel, see Moudy: Para. 0110 – 0128 and 0140 – 0154).
As to claim 5, Alves modified by Moudy discloses the method of claim 2, wherein the second instructions comprise instructions to dynamically allocate the second subset of the set of processors to output the results of processing the set of data to one or more processors of a dataset destination (event processing application comprising query execution logic that is rule driven including processing input streams from event sources, see Alves: Para. 0049 and 0052, and processing applications distributed across disparate processing nodes, see Alves: Para. 0064, and storing events in one or more databases of a distributed system, see Alves: Para. 0274 – 0275, such as cloud database services, see Alves: Para. 0283 – 0284, and tasks/sub-tasks are dynamically allocated to sets of processors based on resource needs, tasks may be assigned so that a first set of tasks are assigned to a first set of processors and a second set of tasks, that a first set does not depend from, are assigned to a second set of processors, see Moudy: Para. 0093 – 0098, and assign task processors for task-output results, see Moudy: Para. 0155 – 0158).
As to claim 6, Alves modified by Moudy discloses the method of claim 2, wherein the query further identifies a dataset destination for storage of the results of processing the set of data (storing results of processing in storage that may be retrieved for other tasks, see Moudy: Para. 0106 – 0109).
As to claim 7, Alves modified by Moudy discloses the method of claim 2, wherein the second instructions comprise instructions to dynamically allocate the second subset of the set of processors to output the results of processing the set of data to one or more first processors of a first dataset destination (the configuration may include sending communications to various processors that identify source and/or destination addresses that correspond to other task processors or task-specific identifiers; these communications may include, for example, a processing instruction that indicates that data is to be requested from a particular address and/or that data is to be transmitted to a particular address, see Moudy: Para. 0155 – 0158), wherein the query processing scheme further includes:
third instructions to dynamically allocate a third subset of the set of processors to output the results of processing the set of data to one or more second processors of a second dataset destination (tasks/sub-tasks are dynamically allocated to sets of processors based on resource needs, tasks may be assigned so that a first set of tasks are assigned to a first set of processors and a second set of tasks, that a first set does not depend from, are assigned to a second set of processors, see Moudy: Para. 0093 – 0098, and assign task processors for task-output results, see Moudy: Para. 0155 – 0158).
As to claim 8, Alves modified by Moudy discloses the method of claim 2, wherein the second instructions comprise instructions to dynamically allocate at least two processors of the second subset of the set of processors to concurrently communicate a subset of the results of processing the set of data to a single processor associated with a dataset destination (tasks/sub-tasks are dynamically allocated to sets of processors based on resource needs, tasks may be assigned so that a first set of tasks are assigned to a first set of processors and a second set of tasks, that a first set does not depend from, are assigned to a second set of processors, see Moudy: Para. 0093 – 0098, and tasks may be assigned to task processers for two input streams processed in parallel, see Moudy: Para. 0110 – 0128 and 0140 – 0154).
As to claim 9, Alves modified by Moudy discloses the method of claim 2, further comprising:
determining one or more of a processing capability or a number of processors associated with a dataset destination, wherein defining the query processing scheme comprises defining the query processing scheme based at least in part on the one or more of the processing capability or the number of processors associated with the dataset destination, wherein the second instructions comprise instructions to dynamically allocate the second subset of the set of processors to output the results of processing the set of data to one or more processors of the dataset destination (The determination can be made based on (for example) the data sets, other content in the communication, metadata of the communication, load-balancing factors (e.g., a current latency, queue length, backlog, etc. of one or more partitions), configurations of sets of task processors associated with each partition controller (e.g., a number of processors, processing capability, etc.), processing workflows associated with the data sets, and/or workflow priorities. In some instances, a tag or characteristic and a partition-routing protocol determines which partition controller is to receive the data set, see Moudy: Para. 0104 – 0105).
As to claim 10, Alves modified by Moudy discloses the method of claim 2, wherein the second instructions comprise instructions to dynamically allocate the second subset of the set of processors to output the results of processing the set of data to one or more of an ingested data buffer, an external data source, or a query acceleration data store (event processing application comprising query execution logic that is rule driven including processing input streams from event sources, see Alves: Para. 0049 and 0052, and processing applications distributed across disparate processing nodes, see Alves: Para. 0064, and indexing the right thread for execution, see Alves: Para. 0165, indexing layer, see Para. 0086, and tasks/sub-tasks are dynamically allocated to sets of processors based on resource needs, tasks may be assigned so that a first set of tasks are assigned to a first set of processors and a second set of tasks, that a first set does not depend from, are assigned to a second set of processors, see Moudy: Para. 0093 – 0098, and tasks may include receiving of messages and processing of received messages, see Moudy: Para. 0103 - 0105).
As to claim 11, Alves modified by Moudy discloses the method of claim 2, wherein the second instructions comprise instructions to dynamically allocate the second subset of the set of processors to output the results of processing the set of data to one or more processors of a dataset destination (the configuration may include sending communications to various processors that identify source and/or destination addresses that correspond to other task processors or task-specific identifiers; these communications may include, for example, a processing instruction that indicates that data is to be requested from a particular address and/or that data is to be transmitted to a particular address, see Moudy: Para. 0155 – 0158), the method further comprising:
monitoring the dataset destination (Managing the task processors can include adding or removing one or more task processors from a set of task processors, which can include requesting that a virtual machine or virtual server be added or removed from a virtual system. Managing the task processors can include generating and transmitting an instruction to one or more processors that identifies a source device from which data will be provided; task processing to be performed; which processing results are to be stored (e.g., and where) and/or a destination device to which processing results are to be provided. In some instances, managing the task processors includes configuring a new inter-task stream or modifying a configuration of an existing inter-task stream. For example, if initially a set of tasks were to be performed by individual processors, and the tasks were later divided across processors, an inter-task stream that includes pertinent results may be established between the processors, see Moudy: Para. 0165).
As to claim 12, Alves modified by Moudy discloses the method of claim 2, wherein the second instructions comprise instructions to dynamically allocate the second subset of the set of processors to output the results of processing the set of data to one or more processors of a dataset destination (the configuration may include sending communications to various processors that identify source and/or destination addresses that correspond to other task processors or task-specific identifiers; these communications may include, for example, a processing instruction that indicates that data is to be requested from a particular address and/or that data is to be transmitted to a particular address, see Moudy: Para. 0155 – 0158), the method further comprising:
monitoring the dataset destination (Managing the task processors can include adding or removing one or more task processors from a set of task processors, which can include requesting that a virtual machine or virtual server be added or removed from a virtual system. Managing the task processors can include generating and transmitting an instruction to one or more processors that identifies a source device from which data will be provided; task processing to be performed; which processing results are to be stored (e.g., and where) and/or a destination device to which processing results are to be provided. In some instances, managing the task processors includes configuring a new inter-task stream or modifying a configuration of an existing inter-task stream. For example, if initially a set of tasks were to be performed by individual processors, and the tasks were later divided across processors, an inter-task stream that includes pertinent results may be established between the processors, see Moudy: Para. 0165); and
updating the second subset of the set of processors based on monitoring the dataset destination (Each processor may have or may be assigned (e.g., as part of the configuration) an address. Thus, a task management engine may use these addresses to monitor task processing and to assign new task iterations to particular processors. An address for the task management engine may be further established (or a task-specific identifier may be sent to each task-associated processor), such that other devices can send task-input results or data sets and/or can request task-output results. The configuration may include sending communications to various processors that identify source and/or destination addresses that correspond to other task processors or task-specific identifiers. These communications may include, for example, a processing instruction that indicates that data is to be requested from a particular address and/or that data is to be transmitted to a particular address, see Moudy: Para. 0156, and Managing the task processors may include a repeated performance analysis. For example, a processing latency or per-processor statistic (e.g., average processor or memory usage) could be monitored to detect an above-threshold value or above-threshold derivative value (or below-threshold value or below-threshold derivative value). The threshold comparison can influence whether new servers (or other processors) are recruited to or released from a system, see Moudy: Para. 0166).
As to claim 13, Alves modified by Moudy discloses the method of claim 2, wherein defining the query processing scheme comprises:
defining the query processing scheme based on query requirements associated with the query, resources of the data intake and query system, and a dataset destination for output of the results of processing the set of data (assigning task processors for processing created workflows that specify how data sets are to be processed, and communicating from source addresses and outputting to destination address, see Moudy: Para. 0140 – 0158).
As to claim 14, Alves modified by Moudy discloses the method of claim 2, wherein the set of processors comprises multiple layers of processors (processing applications distributed across disparate processing nodes, see Alves: Para. 0064, and dynamically growing and shrinking resources based on increases/decreases in demand, see Alves: Para. 0097).
As to claim 15, Alves modified by Moudy discloses the method of claim 2, wherein executing the query comprises:
generating instructions for execution by at least a portion of the set of processors based at least in part on the query processing scheme (creating workflows for task processors for the input streams, see Moudy: Para. 0140 – 0158); and
communicating the instructions to the at least a portion of the set of processors (assigning task processors for processing the workflows and communicating from source addresses and outputting to destination address, see Moudy: Para. 0140 – 0158).
As to claim 16, Alves modified by Moudy discloses the method of claim 2, wherein executing the query comprises:
executing a first phase of the query using the first subset of the set of processors based at least in part on the query processing scheme (distribution of the continuous query processing model and determining order for downstream queries in the distribution flow, see Alves: Para. 0128 – 0131 and 0142 – 0143; the order for processing queries of the continuous query for upstream and downstream nodes is designated); and
executing a second phase of the query using the second subset of the set of processors based at least in part on the query processing scheme (distribution of the continuous query processing model and determining order for downstream queries in the distribution flow, see Alves: Para. 0128 – 0131 and 0142 – 0143; the order for processing queries of the continuous query for upstream and downstream nodes is designated).
As to claim 17, Alves modified by Moudy discloses the method of claim 2, wherein defining the query processing scheme comprises: generating directed acyclic graph instructions (representing/executing the event processing as a directed acyclic graph, see Alves: Para. 0098).
Claim 18 is rejected using similar rationale to the rejection of claim 2 above.
Claim 19 is rejected using similar rationale to the rejection of claim 4 above.
Claim 20 is rejected using similar rationale to the rejection of claim 2 above.
Claim 21 is rejected using similar rationale to the rejection of claim 5 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK E HERSHLEY whose telephone number is (571)270-7774. The examiner can normally be reached M-F: 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng, can be reached at (571) 270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK E HERSHLEY/Primary Examiner, Art Unit 2164