Prosecution Insights
Last updated: April 19, 2026
Application No. 18/419,282

SEARCH TABLE AND SEARCH JOB FOR SEARCH QUERY FOR WHICH MATCHING EVENTS ARE TO BE CONTINUALLY PROVIDED

Final Rejection §103
Filed: Jan 22, 2024
Examiner: ELIAS, EARL L
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micro Focus LLC
OA Round: 2 (Final)
Grant Probability: 57% (Moderate)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 80%

Examiner Intelligence

Career Allow Rate: 57% (56 granted / 99 resolved; +1.6% vs TC avg)
Interview Lift: +23.5% for resolved cases with interview (strong lift)
Avg Prosecution: 3y 5m typical timeline (19 currently pending)
Total Applications: 118 across all art units (career history)

Statute-Specific Performance

§101: 28.7% (-11.3% vs TC avg)
§103: 52.9% (+12.9% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 99 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This Office Action has been issued in response to Applicant's Communication of application S/N 18/419,282 filed on January 13, 2026. Claims 1-20 are currently pending with the application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 3, 14-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bacthavachalu et al. (U.S. Patent No. US 8150889 B1), hereinafter Bacthavachalu, in view of Yagawa et al. (U.S. Patent No. US 6792425 B2), hereinafter Yagawa, and further in view of Edgar et al. (U.S. Publication No. US 20130132357 A1), hereinafter Edgar.
As to claim 1: Bacthavachalu discloses: A non-transitory computer-readable data storage medium storing program code executable by a processor to perform processing comprising: instantiating a search table for a search query [Column 10 Lines 9-11 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table. Note: The table of parent node (search table) reads on the claims.] for which matching events of a plurality of events stored in an events table [Column 10 Lines 2-3 teach each node in this example processes a table from a local data store, here a "Logs" table. Column 10 Lines 5-6 teach the results of the instance query for each node, here a count ("CNT"), is stored in the temp table for that node. Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes. Note: Query results stored in log tables for child nodes that are copied (match) into the parent node table reads on the claims.], are to be continually provided as new events are continually loaded into the events table [Column 5 Lines 48-51 teach such a system can provide continuous loading with functions such as data retention and querying that typically are not available for large data systems.] the matching events satisfying the search query [Column 10 Lines 5-6 teach the results of the instance query for each node, here a count ("CNT"), is stored in the temp table for that node.], the search table to store the matching events [Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.] 
generating a search job for the search query [Column 9 Lines 57-60 teach when the user submits an instance query, a series of jobs will be created in the system as discussed above that are sent to specific nodes that will execute the instance query on the local data stores.] the search job to be continually run to retrieve the matching events stored in the events table that are not already stored in the search table and to insert the retrieved matching events in the search table [Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.] and continually running the search job, such that the matching events are continually provided from the search table and not from the events table [Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. 
Column 10 Lines 5-13 teach the results of the instance query for each node, here a count ("CNT"), is stored in the temp table for that node. The results in the temporary tables then can be transferred to the assigned parent node as designated by the scheduling service. When the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes. Note: Continuously processing queries and providing results from the parent node temp table and not the child node temporary tables reads on the claims.]

Bacthavachalu discloses all of the limitations as set forth in claim 1 but does not appear to expressly disclose a plurality of events generated by a plurality of different computing devices, received from the different computing devices and loaded into the events table, the matching events satisfying the search query, and once the search table has been instantiated, generating a search job for the search query.

Yagawa discloses: a plurality of events generated by a plurality of different computing devices [Column 12 Lines 57-60 teach in the multi-database server 1, the MDB result table integration processing 13 integrates respective external DB result tables 52a and 52b.] received from the different computing devices and loaded into the events table, the matching events satisfying the search query [Column 12 Lines 57-60 teach in the multi-database server 1, the MDB result table integration processing 13 integrates respective external DB result tables 52a and 52b.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Bacthavachalu, by incorporating integrating respective external DB result tables, as taught by Yagawa (see Column 12 Lines 57-60), because both applications are directed to query processing; incorporating integrating respective external DB result tables provides the advantageous effect that the availability of the system is increased (see Yagawa Column 16 Lines 56-57).

Bacthavachalu and Yagawa disclose all of the limitations as set forth in claim 1 and some of claim 4 but do not appear to expressly disclose once the search table has been instantiated, generating a search job for the search query.

Edgar discloses: once the search table has been instantiated, generating a search job for the search query [Paragraph 0023 teaches related-search CTRs and follow-up CTRs may be computed and stored on a backend server or database and accessed via jobs or script queries. The underlying data for the related-search CTRs and follow-up CTRs, in one embodiment, is taken from historical user log or search data. Paragraph 0050 teaches a server to find refined search queries for the user's search query. To do so, the server may access internally stored refined search tables that are generated after mining a data center storing historical log and search data from users of a different search engine. Note: Accessing a search table after it has already been generated using job or query scripts reads on the claims.]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Bacthavachalu and Yagawa, by incorporating accessing a search table after it has already been generated using job or query scripts, as taught by Edgar (see Paragraphs 0023 and 0050), because the three applications are directed to query processing; incorporating accessing a search table after it has already been generated using job or query scripts allows the user to easily run the refined search on the different search engines (see Edgar Abstract).

Claim 15 recites similar limitations as in claim 1. Therefore claim 15 is rejected for the same reasons as set forth above. See claim 1 for analysis.

As to claim 2: Bacthavachalu discloses: The non-transitory computer-readable data storage medium of claim 1, wherein the processing further comprises: detecting addition of an entry in a search query table, the entry corresponding to the search query, wherein the search table for the search query is instantiated in response to detecting the addition of the entry corresponding to the search query in the search query table [Column 10 Lines 9-11 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table.]

As to claim 3: Bacthavachalu discloses: The non-transitory computer-readable data storage medium of claim 1, wherein a client computing device that generated the search query displays a user interface for the search query populated with the matching events from the search table [Column 4 Lines 13-18 teaches the framework in this embodiment also provides a dashboard interface 208 that allows users or non-developers to manually select or otherwise specify or enter queries or instructions for utilizing the system.
In one example, the dashboard is a simple graphical user interface (GUI).], wherein, as the search job is continually run to retrieve new matching events from the events table and to insert the new matching events in the search table, the client computing device continually retrieves the new matching events from the search table and not from the events table [Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. Column 10 Lines 5-13 teach the results of the instance query for each node, here a count ("CNT"), is stored in the temp table for that node. The results in the temporary tables then can be transferred to the assigned parent node as designated by the scheduling service. When the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.], and continually updates the user interface by automatically displaying the new matching events in the user interface [Column 7 Lines 7-11 teach each node includes a query library 232 in order to execute the jobs, such that the client component 212 is able to receive information for the job, determine how to execute the job, then execute the job and pass the results on to the appropriate node. Column 7 Lines 35-40 teach when loading data through a dashboard, for example, a user can be presented with an interface page 300 such as the example illustrated in FIG. 3.
The interface can have tabs or other options used for loading 302 and querying 304 the data. In this example, a user can create and submit a Web services call to be submitted to load the data. Column 8 Lines 23-26 teaches the dashboard can provide status information such as what percentage of the data is loaded and whether any nodes have failed, for example. Note: Providing a continuous status as to the various table loads associated with queries in a dashboard on a client reads on the claims.]

As to claim 14: Bacthavachalu discloses: The non-transitory computer-readable data storage medium of claim 1, wherein the search query is one of a plurality of different search queries, such that for each different search query a different search table is instantiated and a different search job is generated, and wherein the different search jobs are continually run in parallel with one another [Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.]
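For orientation, the mechanism recited in claims 1, 3, and 14 as discussed above — a per-query search table populated by a continually-run search job, with clients reading matches from the search table rather than the events table — can be sketched as follows. This is an illustrative sketch only: the table names, the example "severity" predicate, and the sqlite3 setup are hypothetical and are not drawn from the application or the cited references.

```python
import sqlite3

# Hypothetical schema: an events table that is continually loaded, and a
# per-query search table caching the matches for one search query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, severity TEXT, msg TEXT)")
con.execute("CREATE TABLE search_error (id INTEGER PRIMARY KEY, severity TEXT, msg TEXT)")

def run_search_job(con):
    """One iteration of the search job: copy matching events that are not
    already in the search table, so clients read the search table only."""
    con.execute("""
        INSERT INTO search_error
        SELECT * FROM events
        WHERE severity = 'error'
          AND id NOT IN (SELECT id FROM search_error)
    """)

# New events are continually loaded into the events table...
con.executemany("INSERT INTO events VALUES (?, ?, ?)",
                [(1, "error", "disk full"), (2, "info", "ok"), (3, "error", "timeout")])
run_search_job(con)  # ...and the search job picks up only the matching ones.
matches = con.execute("SELECT id FROM search_error ORDER BY id").fetchall()
print(matches)  # [(1,), (3,)]
```

Because the job only copies rows not already cached, re-running it after new loads appends only the new matches, which is the behavior claims 1 and 3 describe for continually providing matching events from the search table.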
As to claim 16: Bacthavachalu discloses: The computing system of claim 15, wherein the database computing device is a different device than the first computing device [Column 2 Lines 35-38 teaches "data store" refers to any device or combination of devices capable of storing, accessing, and retrieving data, which may include any combination and number of data servers, databases.]

As to claim 17: Bacthavachalu discloses: The computing system of claim 15, wherein the first computing device is the database computing device [Column 2 Lines 35-38 teaches "data store" refers to any device or combination of devices capable of storing, accessing, and retrieving data, which may include any combination and number of data servers, databases.]

As to claim 20: Bacthavachalu discloses: A method comprising: instantiating a search table for a search query [Column 10 Lines 9-11 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table. Note: The table of parent node (search table) reads on the claims.] for which matching events of a plurality of events stored in an events table [Column 10 Lines 2-3 teach each node in this example processes a table from a local data store, here a "Logs" table. Column 10 Lines 5-6 teach the results of the instance query for each node, here a count ("CNT"), is stored in the temp table for that node. Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes. Note: Query results stored in log tables for child nodes that are copied (match) into the parent node table reads on the claims.]
are to be continually provided as new events are continually loaded into the events table [Column 5 Lines 48-51 teach such a system can provide continuous loading with functions such as data retention and querying that typically are not available for large data systems.], the matching events satisfying the search query [Column 10 Lines 5-6 teach the results of the instance query for each node, here a count ("CNT"), is stored in the temp table for that node.], the search table to store the matching events [Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.]; generating a search job for the search query [Column 9 Lines 57-60 teach when the user submits an instance query, a series of jobs will be created in the system as discussed above that are sent to specific nodes that will execute the instance query on the local data stores.], the search job to be continually run to retrieve the matching events stored in the events table that are not already stored in the search table and to insert the retrieved matching events in the search table [Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. 
Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.]; continually running the search job, such that the matching events are continually provided from the search table and not from the events table [Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. Column 10 Lines 5-13 teach the results of the instance query for each node, here a count ("CNT"), is stored in the temp table for that node. The results in the temporary tables then can be transferred to the assigned parent node as designated by the scheduling service. When the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes. 
Note: Continuously processing queries and providing results from the parent node temp table and not the child node temporary tables reads on the claims.]; and continually updating a user interface for the search query as the matching events are provided from the search table, the new matching events are automatically displayed in the user interface [Column 7 Lines 7-11 teach each node includes a query library 232 in order to execute the jobs, such that the client component 212 is able to receive information for the job, determine how to execute the job, then execute the job and pass the results on to the appropriate node. Column 7 Lines 35-40 teach when loading data through a dashboard, for example, a user can be presented with an interface page 300 such as the example illustrated in FIG. 3. The interface can have tabs or other options used for loading 302 and querying 304 the data. In this example, a user can create and submit a Web services call to be submitted to load the data. Column 8 Lines 23-26 teaches the dashboard can provide status information such as what percentage of the data is loaded and whether any nodes have failed, for example. Note: Providing a continuous status as to the various table loads associated with queries in a dashboard on a client reads on the claims.] wherein as the search job is continually run to retrieve new matching events from the events table and to insert the new matching events in the search table [Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. 
Column 10 Lines 5-13 teach the results of the instance query for each node, here a count ("CNT"), is stored in the temp table for that node. The results in the temporary tables then can be transferred to the assigned parent node as designated by the scheduling service. When the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes. Note: Continuously processing queries and providing results from the parent node temp table and not the child node temporary tables reads on the claims.]

Claims 4, 5, 6, 10, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bacthavachalu et al. (U.S. Patent No. US 8150889 B1), hereinafter Bacthavachalu, in view of Yagawa et al. (U.S. Patent No. US 6792425 B2), hereinafter Yagawa, in view of Edgar et al. (U.S. Publication No. US 20130132357 A1), hereinafter Edgar, and further in view of Patel et al. (U.S. Publication No. US 20210089532 A1), hereinafter Patel.

As to claim 4: Bacthavachalu discloses: The non-transitory computer-readable data storage medium of claim 1, wherein continually running the search job comprises, a first time the search job is run and inserting the retrieved matching events in the search table [Column 6 Lines 25-30 teaches a first job can be designated to execute on a first node and a second job on a second node, and the scheduling service can monitor the job on each node. If one of the nodes fails, the scheduler can restart the job(s) for that node and/or move the job(s) to another node.]

Bacthavachalu, Yagawa, and Edgar disclose all of the limitations as set forth in claim 1 and some of claim 4 but do not appear to expressly disclose running the search job to retrieve the matching events stored in the events table that are newer than a specified newness threshold.
Patel discloses: running the search job to retrieve the matching events stored in the events table that are newer than a specified newness threshold [Paragraph 0028 teaches the query feature store 214 has an extensible design to add more query engines, extract other pieces of information from the query 204 log, add new parsers for custom query formats, and add newer query workload features as they emerge. Paragraph 0035 teaches an example of such a tag may be a recurring job name, such as a periodic job that appears with a similar name each time. For such jobs, the query engine may load all query annotations 238 corresponding to that recurring job name in a single lookup. Note: Including newer queries from a query log (event table) is interpreted to require a comparison between queries, which establishes a threshold to determine which of the queries is newer, and reads on the claims. The examiner further notes Bacthavachalu teaches the claimed event table as referenced to the cited temporary tables in child nodes.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Bacthavachalu, Yagawa, and Edgar, by incorporating the inclusion of newer queries from a query log (event table), which requires a comparison between queries establishing a threshold to determine which query is newer, as taught by Patel (see Paragraphs 0028 and 0035), because the four applications are directed to query processing; this incorporation provides a technical solution involving database management (see Patel Paragraph 0014).

As to claim 5: Bacthavachalu, Yagawa, Edgar, and Patel disclose all of the limitations as set forth in claims 1 and 4.
Bacthavachalu also discloses: The non-transitory computer-readable data storage medium of claim 4, wherein continually running the search job further comprises, each of a plurality of times the search job is run other than the first time: retrieving the matching events stored in the events table that are newer than a newest matching event already stored in the search table; and inserting the retrieved matching events in the search table [Column 6 Lines 25-30 teaches a first job can be designated to execute on a first node and a second job on a second node, and the scheduling service can monitor the job on each node. If one of the nodes fails, the scheduler can restart the job(s) for that node and/or move the job(s) to another node. Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.]

As to claim 6: Bacthavachalu, Yagawa, Edgar, and Patel disclose all of the limitations as set forth in claims 1, 4, and 5.
Bacthavachalu also discloses: The non-transitory computer-readable data storage medium of claim 5, wherein inserting the retrieved matching events in the search table comprises: dividing the retrieved matching events into a plurality of chunks; and inserting the chunks of the retrieved matching events in parallel [Column 6 Lines 5-8 teach the framework can issue a received user query in parallel to each node with data for the query, and can ask each node to return the data or information corresponding to the query.]

As to claim 10: Bacthavachalu discloses: The non-transitory computer-readable data storage medium of claim 1, wherein the new events are continually loaded into the events table in batches at a loading time interval, and each event stored in the events table has a loading time indicating when the event was loaded into the events table, and wherein continually running the search job comprises: inserting the retrieved matching events in the search table that are not already in the search table [Column 6 Lines 25-30 teaches a first job can be designated to execute on a first node and a second job on a second node, and the scheduling service can monitor the job on each node. If one of the nodes fails, the scheduler can restart the job(s) for that node and/or move the job(s) to another node. Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416.
Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.]

Bacthavachalu, Yagawa, and Edgar disclose all of the limitations as set forth in claim 1 and some of claim 10 but do not appear to expressly disclose setting a maximum loading time to the loading time of a newest matching event already stored in the search table; retrieving each matching event stored in the events table the loading time of which is more recent than the maximum loading time minus the loading interval.

Patel discloses: setting a maximum loading time to the loading time of a newest matching event already stored in the search table; retrieving each matching event stored in the events table the loading time of which is more recent than the maximum loading time minus the loading interval [Paragraph 0028 teaches the query feature store 214 has an extensible design to add more query engines, extract other pieces of information from the query 204 log, add new parsers for custom query formats, and add newer query workload features as they emerge. Paragraph 0035 teaches an example of such a tag may be a recurring job name, such as a periodic job that appears with a similar name each time. For such jobs, the query engine may load all query annotations 238 corresponding to that recurring job name in a single lookup. Note: Utilizing the threshold for which queries are loaded in the query to associate a query with a particular session interval is interpreted to include a comparison between the time stamp and the interval to determine a difference (minus the loading interval), and reads on the claims.
The examiner further notes Bacthavachalu teaches the claimed event table as referenced to the cited temporary tables in child nodes.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to combine the teaching of the cited references and modify the invention as taught by Bacthavachalu, Yagawa, and Edgar, by incorporating the use of a threshold for which queries are loaded, associating a query with a particular session interval through a comparison between the time stamp and the interval to determine a difference (minus the loading interval), as taught by Patel (see Paragraphs 0028 and 0035), because the four applications are directed to query processing; this incorporation provides a technical solution involving database management (see Patel Paragraph 0014).

As to claim 11: Bacthavachalu discloses: The non-transitory computer-readable data storage medium of claim 1, wherein the search table is instantiated such that the search table is partitioned, and new partitions are added to the search table as the search job is continually run, in correspondence with the loading time, and wherein a plurality of partitions of the search table correspond to consecutive time periods, each partition storing the matching events for which the loading times are within a corresponding time period [Column 10 Lines 9-11 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table. Column 10 Lines 2-3 teach each node in this example processes a table from a local data store, here a "Logs" table.
Column 10 Lines 5-6 teach the results of the instance query for each node, here a count ("CNT"), is stored in the temp table for that node. Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes. Note: Creating (instantiating) a table when a query is received as part of a search job by copying queries loaded into query logs as logged entries at particular times (loading times), wherein logged entries reasonably include consecutive times, reads on the claims.]

Bacthavachalu, Yagawa, and Edgar disclose all of the limitations as set forth in claim 1 and some of claim 10, but do not appear to expressly disclose wherein the new events are continually loaded into the events table in batches at a loading time interval, and each event stored in the events table has a loading time indicating when the event was loaded into the events table.

Patel discloses: wherein the new events are continually loaded into the events table in batches at a loading time interval, and each event stored in the events table has a loading time indicating when the event was loaded into the events table [Paragraph 0028 teaches the query feature store 214 has an extensible design to add more query engines, extract other pieces of information from the query 204 log, add new parsers for custom query formats, and add newer query workload features as they emerge. Paragraph 0035 teaches an example of such a tag may be a recurring job name, such as a periodic job that appears with a similar name each time. For such jobs, the query engine may load all query annotations 238 corresponding to that recurring job name in a single lookup.
Note: Utilizing the threshold for which queries are loaded to associate a query with a particular session interval, interpreted to include a comparison between a time stamp and the interval to determine a difference (minus the loading interval), reads on the claims. The examiner further notes Bacthavachalu teaches the claimed event table as referenced to the cited temporary tables in child nodes.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Bacthavachalu, Yagawa, and Edgar by incorporating the use of a threshold for which queries are loaded to associate a query with a particular session interval, interpreted to include a comparison between a time stamp and the interval to determine a difference (minus the loading interval), as taught by Patel (see Paragraphs 0028 and 0035), because the four references are directed to query processing, and incorporating this technique provides a technical solution involving database management (see Patel Paragraph 0014).

Claim 18 recites similar limitations as in claim 11. Therefore claim 18 is rejected for the same reasons as set forth above. See claim 11 for analysis.

Claim(s) 7-9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bacthavachalu et al. (U.S. Patent No.: US 8150889 B1) hereinafter Bacthavachalu, in view of Yagawa et al. (U.S. Patent No.: US 6792425 B2) hereinafter Yagawa, in view of Edgar et al. (U.S. Publication No.: US 20130132357 A1) hereinafter Edgar, and further in view of Bhattacharjee et al. (U.S. Patent No.: US 8751520 B1) hereinafter Bhattacharjee.
As to claim 7: Bacthavachalu discloses: The non-transitory computer-readable data storage medium of claim 1, event already stored in the search table; and retrieving each matching event stored in the events table that the loading time of which is more recent than the maximum loading time [Column 6 Lines 25-30 teaches a first job can be designated to execute on a first node and a second job on a second node, and the scheduling service can monitor the job on each node. If one of the nodes fails, the scheduler can restart the job(s) for that node and/or move the job(s) to another node. Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.]

Bacthavachalu, Yagawa, and Edgar disclose all of the limitations as set forth in claim 1 and some of claim 7, but do not appear to expressly disclose running the search job to retrieve the matching events stored in the events table that are newer than a specified newness threshold.
Bhattacharjee discloses: wherein the new events are continually loaded into the events table in batches at a loading time interval, and each event stored in the events table has a loading time indicating when the event was loaded into the events table and wherein continually running the search job comprises, each of a plurality of times the search job is run: setting a maximum loading time to the loading time of a newest matching event [Column 13 Lines 13-18 teach a query session is defined as a sequence of search queries issued by a certain user for some specific information need, i.e., <User, Timestamp, Query>i. Session logs can be segmented by the timestamps, i.e., if the time interval between two adjacent queries are longer than a threshold.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Bacthavachalu, Yagawa, and Edgar by incorporating the use of storing a query log by session wherein log entries include timestamps, as taught by Bhattacharjee (see Column 13 Lines 13-18), because the four references are directed to query processing, and incorporating the use of storing a query log by session wherein log entries include timestamps improves the overall search experience (see Bhattacharjee Column 3 Lines 12-13).

As to claim 8: Bacthavachalu, Yagawa, Edgar, and Bhattacharjee disclose all of the limitations as set forth in claims 1 and 7. Bacthavachalu also discloses: The non-transitory computer-readable data storage medium of claim 7, wherein continually running the search job further comprises, each of the plurality of times the search job is run: inserting the retrieved matching events in the search table [Column 6 Lines 25-30 teaches a first job can be designated to execute on a first node and a second job on a second node, and the scheduling service can monitor the job on each node.
If one of the nodes fails, the scheduler can restart the job(s) for that node and/or move the job(s) to another node. Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414. The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.]

As to claim 9: Bacthavachalu, Yagawa, Edgar, and Bhattacharjee disclose all of the limitations as set forth in claims 1, 7, and 8. Bacthavachalu also discloses: The non-transitory computer-readable data storage medium of claim 7, wherein continually running the search job further comprises, each of the plurality of times the search job is run: retrieving each matching event stored in the events table that the loading time of which is no older than the maximum loading time and is more recent than the maximum loading time minus the loading interval; and inserting the retrieved matching events in the search table that are not already in the search table [Column 8 Lines 12-20 teach a number of jobs are created based on the data to be loaded and the number of selected nodes 408. A source of data to be loaded also is specified 410. Once all the tables are created and metadata for the tables are distributed to the selected nodes, the data is loaded into a queue 412 and the data is dequeued one group at a time to the various nodes 414.
The data is parsed and loaded into the system continuously and in parallel among the selected nodes until all the files are processed from the queue 416. Column 10 Lines 9-13 teaches when the parent node receives the job and query from the scheduling service, the parent node creates another temporary table according to the schema and copies in the data from the temporary tables from the child nodes.]

Claim 19 recites similar limitations as in claim 9. Therefore claim 19 is rejected for the same reasons as set forth above. See claim 9 for analysis.

Claim(s) 12 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bacthavachalu et al. (U.S. Patent No.: US 8150889 B1) hereinafter Bacthavachalu, in view of Yagawa et al. (U.S. Patent No.: US 6792425 B2) hereinafter Yagawa, in view of Edgar et al. (U.S. Publication No.: US 20130132357 A1) hereinafter Edgar, in view of Patel et al. (U.S. Publication No.: US 20210089532 A1) hereinafter Patel, and further in view of Shu et al. (U.S. Publication No.: US 20170155586 A1) hereinafter Shu.

As to claim 12: Bacthavachalu, Yagawa, Edgar, and Patel disclose all of the limitations as set forth in claims 1 and 11, but do not appear to expressly disclose wherein the processing further comprises: periodically determining whether the search table has a total number of partitions greater than a threshold; and in response to determining that the total number of partitions is greater than the threshold, deleting the partition including the matching events that are oldest.
Shu discloses: The non-transitory computer-readable data storage medium of claim 11, wherein the processing further comprises: periodically determining whether the search table has a total number of partitions greater than a threshold; and in response to determining that the total number of partitions is greater than the threshold, deleting the partition including the matching events that are oldest [Paragraph 0005 teaches the lookup table may include a finite amount of entries, where each entry of the finite amount of entries has an associated age. Thus, if the finite number of entries has been reached, when a new entry is added, the control circuitry may delete an oldest entry in the lookup table, increase the age of each entry remaining in the lookup table, and assign the added entry an age that is youngest relative to each other entry of the lookup table.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Bacthavachalu, Yagawa, Edgar, and Patel by incorporating control circuitry that may delete an oldest entry in the lookup table, increase the age of each entry remaining in the lookup table, and assign the added entry an age that is youngest relative to each other entry of the lookup table, as taught by Shu (see Paragraph 0005), because the five references are directed to query processing, and incorporating this technique improves query processing (see Shu Paragraph 0002).

As to claim 13: Bacthavachalu, Yagawa, Edgar, Patel, and Shu disclose all of the limitations as set forth in claims 1, 11, and 12.
Shu also discloses: The non-transitory computer-readable data storage medium of claim 12, wherein the processing further comprises: periodically determining whether a total number of the matching events stored in the search table is greater than a different threshold; and in response to determining that the total number of the matching events stored in the search table is greater than the different threshold, deleting the partition including the matching events that are oldest [Paragraph 0005 teaches the lookup table may include a finite amount of entries, where each entry of the finite amount of entries has an associated age. Thus, if the finite number of entries has been reached, when a new entry is added, the control circuitry may delete an oldest entry in the lookup table, increase the age of each entry remaining in the lookup table, and assign the added entry an age that is youngest relative to each other entry of the lookup table.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify the invention as taught by Bacthavachalu, Yagawa, Edgar, and Patel by incorporating control circuitry that may delete an oldest entry in the lookup table, increase the age of each entry remaining in the lookup table, and assign the added entry an age that is youngest relative to each other entry of the lookup table, as taught by Shu (see Paragraph 0005), because the five references are directed to query processing, and incorporating this technique improves query processing (see Shu Paragraph 0002).
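To make the retention limitation at issue in claims 12 and 13 concrete, here is a minimal Python sketch of a partition-retention policy of the kind the claims recite: the search table is partitioned by consecutive time periods, and when the partition count (claim 12) or the total number of stored events (claim 13) exceeds a threshold, the oldest partition is deleted. This sketch is illustrative only; it is not taken from the application or any cited reference, and all names are hypothetical.

```python
from collections import OrderedDict

def prune_partitions(partitions, max_partitions, max_events):
    """Delete oldest partitions until both retention limits are satisfied.

    partitions: OrderedDict mapping a period start time to the list of
    matching events loaded during that period, ordered oldest first.
    """
    while partitions and (
        len(partitions) > max_partitions                       # claim 12 style limit
        or sum(len(v) for v in partitions.values()) > max_events  # claim 13 style limit
    ):
        # The first key is the oldest time period; drop that whole partition.
        oldest_period = next(iter(partitions))
        del partitions[oldest_period]
    return partitions
```

In this sketch, dropping a whole partition stands in for deleting "the partition including the matching events that are oldest"; a real partitioned table would drop the partition as a unit rather than delete rows individually.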
Response to Arguments

Applicant’s arguments with respect to the 103 rejection of claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EARL LEVI ELIAS whose telephone number is (571) 272-9762. The examiner can normally be reached Monday - Friday (IFP). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EARL LEVI ELIAS/
Examiner, Art Unit 2169

/SHERIEF BADAWI/
Supervisory Patent Examiner, Art Unit 2169
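The incremental retrieval scheme the rejected claims recite (claims 7-9 and 11) can be sketched to show what is at stake: each run of the search job takes the loading time of the newest matching event already in the search table as a maximum, retrieves events whose loading time is more recent than that maximum minus the loading interval, and inserts only those not already present. This is a hypothetical Python sketch by the editor, not an implementation from the application or from Bacthavachalu, Patel, or Bhattacharjee; every name in it is an assumption.

```python
def run_search_job(events_table, search_table, loading_interval, matches):
    """One run of a continually executed search job (claims 7-9 style)."""
    # Maximum loading time = loading time of the newest matching event
    # already stored in the search table (0 if the table is empty).
    max_loading_time = max((e["loaded_at"] for e in search_table), default=0)
    # Re-scan back to the maximum loading time minus the loading interval,
    # so events from a partially loaded batch are not missed.
    cutoff = max_loading_time - loading_interval
    seen = {e["id"] for e in search_table}
    for event in events_table:
        # Insert retrieved matching events not already in the search table.
        if event["loaded_at"] > cutoff and matches(event) and event["id"] not in seen:
            search_table.append(event)
            seen.add(event["id"])
    return search_table
```

Re-scanning one loading interval behind the maximum is what makes the job safe to run repeatedly: events that arrived in the same batch as the newest stored match are revisited, and the duplicate check keeps them from being inserted twice.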

Prosecution Timeline

Jan 22, 2024
Application Filed
Sep 11, 2025
Non-Final Rejection — §103
Jan 13, 2026
Response Filed
Jan 29, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591699
ESTABLISHING COMMUNICATION STREAM WITH DATABASE CONTROL AGENT OVER WHICH DATABASE COMMANDS ARE DISPATCHED FOR EXECUTION AGAINST DATABASE
2y 5m to grant Granted Mar 31, 2026
Patent 12572538
UNIFIED QUERY OPTIMIZATION FOR SCALE-OUT QUERY PROCESSING
2y 5m to grant Granted Mar 10, 2026
Patent 12547903
HUMAN-COMPUTER INTERACTION METHOD AND APPARATUS, STORAGE MEDIUM AND ELECTRONIC DEVICE
2y 5m to grant Granted Feb 10, 2026
Patent 12511267
METHOD AND SYSTEM FOR CREATING AND REMOVING A TEMPORARY SUBUSER OF A COMPUTING DEVICE
2y 5m to grant Granted Dec 30, 2025
Patent 12493645
TAGGING TELECOMMUNICATION INTERACTIONS
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
57%
Grant Probability
80%
With Interview (+23.5%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 99 resolved cases by this examiner. Grant probability derived from career allow rate.
