Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Action is responsive to the Application filed on 1/4/2025. Claim 1, written in independent form, is the only pending claim.
Priority
Acknowledgment is made of the claim for priority as a continuation of Application 17/942,075 (now U.S. Patent No. 12,189,631), which is a continuation of PCT/US2022/028633, filed 05/10/2022. PCT/US2022/028633 claims foreign priority under 35 U.S.C. § 119(a)-(d) or (f) from IN202211008709, filed 02/18/2022, and also claims priority from Provisional Application 63/302,013, filed 01/21/2022; Provisional Application 63/299,710, filed 01/14/2022; Provisional Application 63/282,507, filed 11/23/2021; and Provisional Application 63/187,325, filed 05/11/2021. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,189,631. Every limitation of claim 1 of the present application is similar to, or an obvious variation of, a limitation recited in claim 1 of U.S. Patent No. 12,189,631.
Present Application Claims          Corresponding Claims in U.S. Patent No. 12,189,631
Claim 1                             Claim 1
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more. The judicial exception is not integrated into a practical application, and the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis supporting these findings is provided below.
As per Claim 1,
STEP 1: In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), the claimed method (claim 1) falls within one of the statutory categories of subject matter (a process) and therefore satisfies Step 1.
STEP 2A, Prong One: Independent claim 1 recites the following limitations directed to an abstract idea:
Generating statistical information based on the partial query results;
The limitation recites a mathematical concept: executing a mathematical formula that takes the partial query results as input and generates statistical information as output.
Generating a probability distribution model based on the partial query results, wherein the probability distribution model is configured to generate an approximate response to the query;
The limitation recites a mental process of observation, evaluation, judgment, and/or opinion capable of being performed in the human mind, or by a human using pen and paper, by observing and evaluating the partial query results and, based on that observation and evaluation, forming a judgment and/or opinion as to a probability distribution model for generating an approximate response to the query.
Determining a statistical confidence of the probability distribution model based on the statistical information; and
The limitation recites a mental process of observation, evaluation, judgment, and/or opinion capable of being performed in the human mind, or by a human using pen and paper, by observing and evaluating the statistical information and the probability distribution model and, based on that observation and evaluation, forming a judgment and/or opinion as to a statistical confidence of the probability distribution model.
In response to the statistical confidence exceeding a determined threshold:
The limitation recites a mental process of observation, evaluation, judgment, and/or opinion capable of being performed in the human mind, or by a human using pen and paper, by observing and evaluating the statistical confidence and a determined threshold and, based on that observation and evaluation, forming a judgment and/or opinion that the statistical confidence exceeds the determined threshold.
Generating, using the probability distribution model, the approximate response to the query, and
The limitation recites a mental process of observation, evaluation, judgment, and/or opinion capable of being performed in the human mind, or by a human using pen and paper, by observing and evaluating the query and the probability distribution model and, based on that observation and evaluation, forming a judgment and/or opinion as to an approximate response to the query.
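As an illustrative sketch only (not part of the claim or the record), the recited generating and determining steps amount to straightforward arithmetic over the partial query results. All names are hypothetical, and the choice of a normal distribution and the sample-size-based confidence proxy are assumptions made for the sketch:

```python
import statistics

def approximate_response(partial_results, threshold=0.95):
    """Hypothetical sketch of the recited steps over numeric partial
    query results held at an edge device."""
    # "Generating statistical information based on the partial query results"
    mean = statistics.fmean(partial_results)
    stdev = statistics.pstdev(partial_results)
    n = len(partial_results)

    # "Generating a probability distribution model based on the partial
    # query results" -- here, a normal distribution fitted to the data
    # (an assumed model choice; any sigma of 0 is padded to stay valid).
    model = statistics.NormalDist(mean, stdev or 1e-9)

    # "Determining a statistical confidence of the probability
    # distribution model based on the statistical information" -- a
    # crude proxy that grows with sample size and shrinks with spread.
    confidence = 1.0 - (stdev / (mean * n ** 0.5)) if mean else 0.0

    # "In response to the statistical confidence exceeding a determined
    # threshold: generating, using the probability distribution model,
    # the approximate response to the query."
    if confidence > threshold:
        return model.mean  # e.g., an approximate answer to an AVG query
    return None
```

Each operation above is a single formula or comparison, consistent with the mathematical-concept and mental-process characterizations applied to these limitations.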
STEP 2A, Prong Two: Claim 1 recites that the method is performed using “edge devices”, “a distributed database”, and “a query device”, which is a high-level recitation of generic computer components and represents mere instructions to apply the abstract idea on a computer, as in MPEP 2106.05(f), which does not provide integration into a practical application.
The claim recites the following additional elements:
Receiving the query for data stored in the distributed database from a query device, wherein the query is a request for data stored at the edge device and for data stored at other edge devices;
The limitation recites insignificant extra-solution activity in the form of mere data gathering (i.e., ‘obtaining information’), as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
Executing the query to find partial query results comprising the data stored at the edge device;
The limitation recites insignificant extra-solution activity in the form of mere data gathering (i.e., ‘obtaining information’), as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
Transmitting the approximate response to the query device.
The limitation recites insignificant extra-solution activity in the form of mere data output (i.e., transmitting a result), as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
Viewing the additional limitations together, and the claim as a whole, nothing provides integration into a practical application.
STEP 2B:
The conclusions regarding mere implementation using a computer are carried over from Step 2A, Prong Two, and do not provide significantly more.
With respect to “Receiving the query for data stored in the distributed database from a query device, wherein the query is a request for data stored at the edge device and for data stored at other edge devices;”, identified as insignificant extra-solution activity above, this limitation is also well-understood, routine, and conventional (WURC) activity as recognized by the courts; see MPEP 2106.05(d)(II)(i) (receiving or transmitting data over a network).
With respect to “Executing the query to find partial query results comprising the data stored at the edge device”, identified as insignificant extra-solution activity above, this limitation is also considered WURC activity as recognized by the courts; see MPEP 2106.05(d)(II)(iv) (storing and retrieving information in memory).
With respect to “Transmitting the approximate response to the query device.”, identified as insignificant extra-solution activity above, this limitation is also WURC activity as recognized by the courts; see MPEP 2106.05(d)(II)(i) (receiving or transmitting data over a network).
Considering the claim as a whole does not change this conclusion, and claim 1 is ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Nagaraju et al. (U.S. Pre-Grant Publication No. 2020/0012966, hereinafter referred to as Nagaraju) in view of Norris et al. (U.S. Pre-Grant Publication No. 2009/0019028, hereinafter referred to as Norris).
Regarding Claim 1:
Nagaraju teaches a method for processing a query for data stored in a distributed database, the method comprising:
Receiving, at an edge device, the query for data stored in the distributed database from a query device, wherein the query is a request for data stored at the edge device and for data stored at other edge devices;
Nagaraju teaches “a search head receives a search query from another device” (Para. [0080]) and “the search head analyzes the search query to determine what portion(s) of the query can be delegated to indexers and what portions of the query can be executed locally by the search head” (Para. [0080]). Therefore, Nagaraju teaches receiving a query request for data stored locally at the search head and for data stored at other edge devices, such as the indexers.
Executing, by the edge device, the query to find partial query results including the data stored at the edge device;
Nagaraju teaches distributing the portions of the query to the appropriate indexers (Para. [0081]) and “the indexers may then either send the relevant events back to the search head, or use the events to determine a partial result and send the partial result back to the search head” (Para. [0082]).
Generating, by the edge device, statistical information based on the partial query results;
Nagaraju teaches “the edge devices 12 can learn from their edge data…to improve their respective local models 18, to make better predictions used to perform local actions.” (Para. [0034]).
Nagaraju further teaches “The network traffic can be analyzed to determine a number of network performance statistics. Monitoring network traffic may enable information to be gathered particular to the network performance associated with any of the client applications 42.” (Para. [0099]), thereby collecting information, including statistical information.
Generating the approximate response to the query, and
Nagaraju teaches:
“The results generated by the system 10 can be returned to a client using different techniques. For example, one technique streams results or relevant events back to a client in real-time as they are identified. Another technique waits to report the results to the client until a complete set of results (which may include a set of relevant events or a result based on relevant events) is ready to return to the client. Yet another technique streams interim results or relevant events back to the client in real-time until a complete set of results is ready, and then returns the complete set of results to the client. In another technique, certain results are stored as “search jobs,” and the client may retrieve the results by referring to the search jobs.” (Para. [0084])
Therefore, Nagaraju teaches generating an approximate response in the form of interim results or relevant events.
Transmitting, via the edge device, the approximate response to the query device.
Nagaraju teaches “the search head combines the partial results and/or events received from the indexers to produce a final result for the query” (Para. [0083]) and “the results generated by the system 10 can be returned to a client using different techniques” (Para. [0084]).
Nagaraju explicitly teaches all of the elements of the claimed invention as recited above except:
Generating, by the edge device, a probability distribution model based on the partial query results, wherein the probability distribution model is configured to generate an approximate response to the query;
Determining, by the edge device, a statistical confidence of the probability distribution model based on the statistical information; and
In response to the statistical confidence exceeding a determined threshold: generating, using the probability distribution model, a response;
However, in the related field of endeavor of querying data at multiple repositories, Norris teaches:
Generating, by the edge device, a probability distribution model based on the partial query results, wherein the probability distribution model is configured to generate an approximate response to the query;
Norris teaches “The search results may additionally include relevance or quality scores associated with the search results. These scores may be used by search component 410 to obtain a confidence score that measures how confident the search repository is in the search results. A confidence score for a particular set of search results may be calculated, for example, as the sum or average of the relevance scores of a certain number of the search results (e.g., as an average relevance score of the five most relevant search results). As another example, the confidence score for a particular set of search results may be calculated as the highest (i.e., most relevant) normalized relevance score in the set of search results.” (Para. [0038]) thereby teaching a probability distribution model of the relevance or quality scores associated with the partial search results from a particular repository.
Norris further teaches that calculating the confidence scores are then used to generate an approximate response by teaching “Based on the confidence scores, search component 410 may select an optimal or "best" one of the search query interpretations (act 540). For example, the search query interpretation with the highest confidence score may be selected as the best search query interpretation. In some implementations, the "best" interpretation may be a merged set of results from multiple interpretations.” (Para. [0046]).
Determining, by the edge device, a statistical confidence of the probability distribution model based on the statistical information; and
Norris teaches “The search results may additionally include relevance or quality scores associated with the search results. These scores may be used by search component 410 to obtain a confidence score that measures how confident the search repository is in the search results. A confidence score for a particular set of search results may be calculated, for example, as the sum or average of the relevance scores of a certain number of the search results (e.g., as an average relevance score of the five most relevant search results). As another example, the confidence score for a particular set of search results may be calculated as the highest (i.e., most relevant) normalized relevance score in the set of search results.” (Para. [0038]) and “Search component 410 may use these relevance scores to generate a value, called a confidence score herein, that indicates a level of confidence of the set of search results for the query.” (Para. [0044]).
In response to the statistical confidence exceeding a determined threshold: generating, using the probability distribution model, a response;
Norris teaches “If a particular search interpretation has a high enough confidence score, such as one above a predetermined threshold, the exploration of the search query may be stopped relatively quickly by search component 410.” (Para.[0065]) and “Based on the confidence scores, search component 410 may select an optimal or "best" one of the search query interpretations (act 540). For example, the search query interpretation with the highest confidence score may be selected as the best search query interpretation. In some implementations, the "best" interpretation may be a merged set of results from multiple interpretations.” (Para. [0046]).
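As an illustrative sketch only (not drawn from Norris's disclosure; all function and parameter names are hypothetical), the confidence-score computations of Para. [0038] and the threshold test of Para. [0065] reduce to simple arithmetic:

```python
def confidence_score(relevance_scores, top_n=5, use_max=False):
    """Confidence score for a set of search results, following the two
    examples in Norris Para. [0038]; names are hypothetical."""
    ranked = sorted(relevance_scores, reverse=True)
    if use_max:
        # "the highest (i.e., most relevant) normalized relevance score"
        return ranked[0]
    # "the sum or average of the relevance scores of a certain number of
    # the search results (e.g., ... the five most relevant search results)"
    top = ranked[:top_n]
    return sum(top) / len(top)

def stop_exploration(score, threshold=0.8):
    # Norris Para. [0065]: exploration of the search query may be
    # stopped once a confidence score exceeds a predetermined threshold.
    return score > threshold
```

The threshold value itself is an assumed placeholder; Norris describes only "a predetermined threshold."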
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Nagaraju and Norris before the effective filing date of the claimed invention, to have modified the systems and methods for indexing and searching time-stamped events at edge devices, as taught by Nagaraju, with the confidence-score-based query partitioning approach, as taught by Norris.
One would have been motivated to make such a combination because Norris teaches “By reducing the number of possible search query partitions in this manner, the number of searches submitted to search repositories 430 can be reduced relative to a brute force approach to generating query partitions. Advantageously, this can reduce the processing load for local search engine 225.” (Para. [0064]), which is relevant to the system and methods taught by Nagaraju because Nagaraju teaches a plurality of data stores on the data intake and query system 24 (Figure 3), which is local to each edge device (Figure 1).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Mishra et al. (U.S. Pre-Grant Publication No. 2019/0347358, hereinafter referred to as Mishra) teaches that generating a relevance score for sensor candidates may “further include calculating a weighted mean of numerical representations of the following characteristics of a sensor entity: i) frequency of usage; ii) duration of usage; iii) last used time; iv) novelty factor; and (v) brand of the entity” (Paras. [0041]-[0042]). Mishra further teaches “a binary classifier may be employed to dynamically select a relevance score threshold”, where the binary classifier “may receive as inputs a candidate, its relevance score…and records of previous user feedback regarding the candidate, e.g., positive or negative feedback responsive to previous query results served in which the candidate entity was utilized as part of the query string” (Para. [0047]). Mishra further teaches “explicit user selection of a search result 314a incorporating enhanced sensor data or other positive feedback may be fed back to module 322 via feedback signal 314b to strengthen the association between the chosen sensor candidate and the query text.” (Para. [0036]).
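As an illustrative sketch only of the weighted-mean relevance score Mishra describes in Paras. [0041]-[0042] (the characteristic values, weights, and names below are hypothetical, not from Mishra):

```python
def relevance_score(features, weights):
    """Weighted mean of numerical representations of a sensor entity's
    characteristics; keys and weights are hypothetical placeholders."""
    total_weight = sum(weights[k] for k in features)
    return sum(features[k] * weights[k] for k in features) / total_weight

# Hypothetical numerical representations of the five characteristics
# Mishra enumerates, each already normalized to [0, 1].
features = {"frequency_of_usage": 0.8, "duration_of_usage": 0.6,
            "last_used_time": 0.9, "novelty_factor": 0.3,
            "brand": 0.5}
weights = {"frequency_of_usage": 2.0, "duration_of_usage": 1.0,
           "last_used_time": 1.0, "novelty_factor": 0.5,
           "brand": 0.5}
```

The resulting score could then be compared against a dynamically selected threshold, as in Mishra's binary-classifier teaching at Para. [0047].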
Song et al. (U.S. Pre-Grant Publication No. 2022/0044117, hereinafter referred to as Song) teaches training a neural network, including collecting model exemplar information from edge devices (Abstract), distributing the model on the edge devices (Para. [0023]), and implementing the model as a neural network (Para. [0021]).
Calix L. No Surname (U.S. Pre-Grant Publication No. 2020/0372104) teaches receiving a request to perform one or more book formatting operations from a user device. Further, the communication device may be configured for transmitting a plurality of queries to the user device based on the request. Further, the communication device may be configured for receiving a plurality of responses corresponding to the plurality of queries from the user device. Further, the system may include a processing device configured for analyzing the plurality of responses to identify one or more templates stored in a template database. Further, the processing device may be configured for generating a distributor ready digital book based on the one or more templates. Further, the system may include a storage device configured for retrieving the one or more templates from the template database based on the analyzing.
Thompson et al. (U.S. Pre-Grant Publication No. 2002/0073086) teaches a query originator injects queries of network devices into the network at a query node using query messages. The network transports the query messages to the network devices, or to network nodes at which queries about the network devices can be answered. Query responses from the network devices or network nodes are directed through the network to a collection node or nodes. As an internal network node receives multiple query responses from network devices, the internal network node might aggregate, as needed, the multiple query responses into an aggregated query response that preferably occupies less bandwidth than the aggregated multiple query responses. Where the result desired at the collection node is a computed function of the multiple query responses, the computed function can be performed at each internal network node on the multiple query responses received at that node, thus distributing the computation needed to form a collected response to the query. Queries might request real-time or non-real-time responses and queries might request one response, periodic responses or one response for each occurrence of an event. The internal network nodes might store lookup tables, or access a centralized lookup table, that specify details of queries, thus obviating the need for each query message to carry all of the necessary details of the query.
Aggour et al. (U.S. Pre-Grant Publication No. 2020/0272664) teaches a system to generate and run federated queries against a plurality of data stores storing disparate data types, the system including a user interface receiving query details from a data consumer, a metadata knowledge graph containing metadata for links and relationships of the data stores, a knowledge-driven querying layer accessing the graph and selecting predefined constrainable queries from a nodegroup store and applying the metadata links/relationships to the predefined constrainable queries to assemble subqueries, a query and analysis platform providing the subqueries to some of the data stores for execution, a scalable analytic execution layer receiving and aggregating search results from the data stores into a merged search result and/or obtaining analytic results by applying machine learning and artificial intelligence techniques to the distributed data, the user interface presenting visualizations generated from the merged search results, and/or the analytic results. A system and a non-transitory computer-readable medium are also disclosed.
Bhattacharjee et al. (U.S. Pre-Grant Publication No. 2020/0050612) teaches systems and methods are described for distributed processing a query in a first query language utilizing a query execution engine intended for single-device execution. While distributed processing provides numerous benefits over single-device processing, distributed query execution engines can be significantly more difficult to develop that single-device engines. Embodiments of this disclosure enable the use of a single-device engine to support distributed processing, by dividing a query into multiple stages, each of which can be executed by multiple, concurrent executions of a single-device engine. Between stages, data can be shuffled between executions of the engine, such that individual executions of the engine are provided with a complete set of records needed to implement an individual stage. Because single-device engines can be significantly less difficult to develop, use of the techniques described herein can enable a distributed system to rapidly support multiple query languages.
Renner et al. (U.S. Pre-Grant Publication No. 2021/0342339) teaches a system, method, and computer-readable medium are disclosed for constructing a distribution of interrelated event features. In various embodiments constructing the distribution includes: receiving a stream of events, the stream of events comprising a plurality of events; generating a query relating to the plurality of events, the query comprising condition information, the condition information defining a subset of query relevant events; processing the query relating to the plurality of events, extracting features from the plurality of events based upon the query; constructing a distribution of the features from the plurality of events based upon the query; and, analyzing the distribution of the features from the plurality of events based upon the query.
Halder et al. (U.S. Pre-Grant Publication No. 2020/0150687) teaches autonomous machines (AMs) and more particularly to techniques for intelligently planning, managing and performing various tasks using AMs. A control system (referred to as a fleet management system or FMS) is disclosed for managing a set of resources at a site, which may include AMs. The FMS is configured to control and manage the AMs at the site such that tasks are performed autonomously by the AMs. An AM may directly communicate with another AM located on the site to complete a task without requiring to be in constant communication with the FMS during the performance of the task. The FMS is configured to use various optimization techniques to allocate resources (e.g., AMs) for performing tasks at the site. The resource allocation is performed so as to maximize the use of available AMs while ensuring that the tasks get performed in a timely manner.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT F MAY whose telephone number is (571)272-3195. The examiner can normally be reached Monday-Friday 9:30am to 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney can be reached on 571-270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT F MAY/Examiner, Art Unit 2154 1/8/2026
/SYED H HASAN/Primary Examiner, Art Unit 2154