Prosecution Insights
Last updated: April 19, 2026
Application No. 16/899,547

QUERY ENGINE IMPLEMENTING AUXILIARY COMMANDS VIA COMPUTERIZED TOOLS TO DEPLOY PREDICTIVE DATA MODELS IN-SITU IN A NETWORKED COMPUTING PLATFORM

Final Rejection — §102, §103
Filed
Jun 11, 2020
Examiner
WILLIS, AMANDA LYNN
Art Unit
2156
Tech Center
2100 — Computer Architecture & Software
Assignee
ServiceNow, Inc.
OA Round
8 (Final)
36%
Grant Probability
At Risk
9-10
OA Rounds
4y 8m
To Grant
62%
With Interview

Examiner Intelligence

Grants only 36% of cases
36%
Career Allow Rate
123 granted / 345 resolved
-19.3% vs TC avg
Strong +27% interview lift
+26.6%
Interview Lift
resolved cases with vs. without interview
Typical timeline
4y 8m
Avg Prosecution
25 currently pending
Career history
370
Total Applications
across all art units
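The headline figures in this panel follow from simple arithmetic on the career counts shown above. A minimal sketch of that derivation; the function names and rounding behavior are assumptions, not the vendor's actual methodology:

```python
# Hedged sketch: deriving the panel's headline numbers from the
# examiner's career counts. Names and rounding are assumptions.

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

def with_interview_pct(base_pct: float, lift_pct: float) -> float:
    """Grant probability after adding the interview lift."""
    return base_pct + lift_pct

base = allow_rate_pct(123, 345)                   # 123 granted / 345 resolved ≈ 35.7%
displayed = round(base)                           # shown as 36%
interview = with_interview_pct(displayed, 26.6)   # 36% + 26.6% = 62.6%, shown as 62%
```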

Statute-Specific Performance

§101
14.0%
-26.0% vs TC avg
§103
44.8%
+4.8% vs TC avg
§102
13.1%
-26.9% vs TC avg
§112
21.5%
-18.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 345 resolved cases
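Each "vs TC avg" delta implies the Tech Center average the black line marks (implied average = examiner rate minus delta). A quick check, using only the figures listed above, shows every statute implies the same estimate:

```python
# Hedged sketch: back out the implied Tech Center average from each
# statute's rate and delta. Values are copied from the panel above.
rates  = {"§101": 14.0, "§103": 44.8, "§102": 13.1, "§112": 21.5}
deltas = {"§101": -26.0, "§103": +4.8, "§102": -26.9, "§112": -18.5}

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
# every statute implies the same ≈40.0% Tech Center average estimate
```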

Office Action

§102 §103
DETAILED ACTION

Receipt of Applicant’s Amendment, filed October 20, 2025, is acknowledged. Claims 1-3, 5-13, and 15-20 were amended. Claims 4 and 14 were canceled. Claims 1-3, 5-13, and 15-20 are pending in this Office action.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5, 8-13, 15, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Leida [2013/0262443].

With regards to claim 1, Leida teaches A method comprising: receiving, from a data project interface (Leida, ¶132 “The dynamic visualization component 220 translates a user’s actions on a graphical representation of the query results on the GUI 100 into query objects that are executed by the application server 250”; ¶134), a query comprising a query language command as SPARQL query (Leida, ¶155 “users typically defines queries in the SPARQL language”; ¶156 “SELECT ?processID ?startTime ?endTime ?taskID ?taskType ?taskStartTime ?taskEndTime ?followingTaskID ?precedingTaskID”) and an auxiliary query command as the set of query atoms (Leida, ¶163 “Each query path is composed of a set of query atoms”), the auxiliary query command identifying a serialized predictive data model as the numeric representation of a specific triple data structure (¶148 “The triples are represented as data structures containing Uniform Resource Identifiers (URI) and value strings. However, string comparison and storage is inefficient which means that any queries over the triple data model will be quite slow.
In the present embodiment it is therefore preferred to convert all triples into an efficient numeric representation which reduces not only the storage requirements but also makes querying and searching more efficient at the expense of a small increase in processing time when the triple is first received by the system.”; the term “serializing” has been read in light of Paragraph [0024] of the specification, which recites that serializers may be configured to convert a predictive data model into a format that facilitates storage or data transmission), a dataset (¶158 “the dataset”), and one or more parameters corresponding to columns of the dataset as the attributes (¶142 “The concepts have also some attributes: "processStartTime", "processEndTime", "startTime" and "endTime".”; please note the claim limitation “column” has been read in light of Paragraph [20], which recites attributes of a dataset as a column of data); deserializing the serialized predictive data model as executing the get command (Leida, ¶241 “The results returned from the atoms may not contain unique mappings, i.e. there can be more than one matching value for a corresponding value in a pair of terms. The system therefore makes use of MultiMaps (a known structure which allows keys to be associated with multiple values), which map each key value to a list of values. A custom implementation of the MultiMap provides the necessary functionality, overriding the get and put methods to insert and retrieve more than one value for a given key. In addition, functionality is also provided for reversing the Map, e.g. to convert a subject-predicate map to a predicate-subject map.
This makes it extremely easy for query atoms down the execution path to easily manipulate the input/output data.”); based on deserializing the serialized predictive data model as getting the value for the given key (Id), generating a deserialized predictive data model as retrieving the value (Id); retrieving, from a graph-based data repository (Leida, ¶147 “They use this knowledge to convert the data into a more representative format, i.e. RDF-based graphs”; ¶139-¶154; ¶161 “generates a query model object… It is composed of a set of query paths”), dataset (¶158 “the dataset”) data from columns of the dataset identified by the one or more parameters as the attributes (¶142 “The concepts have also some attributes: "processStartTime", "processEndTime", "startTime" and "endTime".”; please note the claim limitation “column” has been read in light of Paragraph [20], which recites attributes of a dataset as a column of data); generating, using the retrieved dataset data as input to the deserialized predictive data model, resultant data as the output data (Leida, ¶241 “The results returned from the atoms may not contain unique mappings, i.e. there can be more than one matching value for a corresponding value in a pair of terms. The system therefore makes use of MultiMaps (a known structure which allows keys to be associated with multiple values), which map each key value to a list of values. A custom implementation of the MultiMap provides the necessary functionality, overriding the get and put methods to insert and retrieve more than one value for a given key. In addition, functionality is also provided for reversing the Map, e.g. to convert a subject-predicate map to a predicate-subject map.
This makes it extremely easy for query atoms down the execution path to easily manipulate the input/output data.”); determining, by the deserialized predictive data model (Leida, ¶241), a degree of confidence as the sort position according to the specified sort variable (Leida, ¶249 “receives all the results and sorts them according to any specified sort variables”) associated with the resultant data (Leida, ¶249 “a simple list collection can also be used to store the results, with a final sort operation using the customer comparator”); integrating the resultant data and the degree of confidence into a final result set as the sorted final result (Id) based on the query language command (Leida, ¶155; ¶156); and presenting the final result set in the data project interface (Leida, ¶259 “Preferably the computer system has a monitor to provide a visual output display (for example in the design of the business process).”).

With regards to claims 2 and 12, Leida further teaches wherein the dataset comprises one or more consolidated datasets (Leida, ¶278, claim 1 “combining the results of each query path to produce a result set that is the answer to said query”).

With regards to claims 3 and 13, Leida further teaches wherein the graph-based data repository comprises one or more triple stores (Leida, ¶148 “The triples are represented as data structures containing Uniform Resource Identifiers (URI) and value strings. However, string comparison and storage is inefficient which means that any queries over the triple data model will be quite slow. In the present embodiment it is therefore preferred to convert all triples into an efficient numeric representation which reduces not only the storage requirements but also makes querying and searching more efficient at the expense of a small increase in processing time when the triple is first received by the system.
Thus the present embodiment requires minimal pre-processing of data and no expensive import operations, thereby allowing the system to handle real-time updates more easily.”).

With regards to claims 5 and 15, Leida further teaches identifying the serialized predictive data model based on an identifier (Leida, ¶164, see Query Path 1, Atom 1 “?ProcessID ebitic:”) that references the serialized predictive data model (Leida, ¶144 “a set of triples: T<s,p,o> (subject, predicate and object)”; ¶152 “a triple can be uniquely represented by three integer values representing their sequence numbers”).

With regards to claims 8 and 18, Leida further teaches wherein generating the resultant data comprises: applying a subset as the filter (Leida, ¶163 “a query atom can be associated with a filter on a free variable, which will be evaluated during the execution of the atom”) of the retrieved dataset data as the filter (Leida, ¶163 “a query atom can be associated with a filter on a free variable, which will be evaluated during the execution of the atom”) to one or more inputs of the deserialized as converting the subject, predicate, object triple into the corresponding data points (Leida, ¶240 “These results are typically represented as key objects which contain the subject, predicate and object values.
The data is passed on to dependent atoms, including the join atom, which uses this data to generate various maps and identify corresponding data points.”) predictive data model as the stored and labeled data model (Leida, ¶144 “The labelled oriented graph representing the ontology is the data model supporting this embodiment can be formalized as a set of triples: T<s,p,o> (subject, predicate, and object)”); and generating the resultant data (Leida, ¶249 “a simple list collection can also be used to store the results, with a final sort operation using the customer comparator”) as one or more outputs of the deserialized predictive data model (Leida, ¶241 “functionality is also provided for reversing the Map, e.g. to convert a subject-predicate map to a predicate-subject map. This makes it extremely easy for query atoms down the execution path to easily manipulate the input/output data”).

With regards to claims 9 and 19, Leida further teaches performing a call responsive to the auxiliary query command; and based on performing the call responsive to the auxiliary query command, receiving the serialized predictive data model (¶193 “They are callable entities that provide a suitable call() method. The call() method takes the query atom and query filter definitions and converts them into data grid queries using the native representation for the data grid framework”; ¶213).

With regards to claims 10 and 20, Leida further teaches wherein determining the degree of confidence associated with the resultant data comprises generating the degree of confidence as the sorted position in the list (Leida, ¶249 “a simple list collection can also be used to store the results, with a final sort operation using the customer comparator”) for each row as each entry (Id) in the resultant data as the simple list collection (Id).
With regards to claim 11, Leida teaches An apparatus comprising: a memory (Leida, ¶261 “computer readable media”) including executable instructions (Leida, ¶260 “computer programs or as computer program products”); and a processor (Leida, ¶259 “a central processing unit (CPU)”) that, responsive to executing the instructions, is configured to: activate a query engine (Leida, ¶56 “the SPARQL query engine”) configured to receive, from a data project interface (Leida, ¶132 “The dynamic visualization component 220 translates a user’s actions on a graphical representation of the query results on the GUI 100 into query objects that are executed by the application server 250”; ¶134), a query comprising a query language command as SPARQL query (Leida, ¶155 “users typically defines queries in the SPARQL language”; ¶156 “SELECT ?processID ?startTime ?endTime ?taskID ?taskType ?taskStartTime ?taskEndTime ?followingTaskID ?precedingTaskID”) and an auxiliary query command as the set of query atoms (Leida, ¶163 “Each query path is composed of a set of query atoms”), the auxiliary query command identifying a serialized predictive data model as the numeric representation of a specific triple data structure (¶148 “The triples are represented as data structures containing Uniform Resource Identifiers (URI) and value strings. However, string comparison and storage is inefficient which means that any queries over the triple data model will be quite slow.
In the present embodiment it is therefore preferred to convert all triples into an efficient numeric representation which reduces not only the storage requirements but also makes querying and searching more efficient at the expense of a small increase in processing time when the triple is first received by the system.”; the term “serializing” has been read in light of Paragraph [0024] of the specification, which recites that serializers may be configured to convert a predictive data model into a format that facilitates storage or data transmission), a dataset (¶158 “the dataset”), and one or more parameters corresponding to columns of the dataset as the attributes (¶142 “The concepts have also some attributes: "processStartTime", "processEndTime", "startTime" and "endTime".”; please note the claim limitation “column” has been read in light of Paragraph [20], which recites attributes of a dataset as a column of data); deserialize the serialized predictive data model as executing the get command (Leida, ¶241 “The results returned from the atoms may not contain unique mappings, i.e. there can be more than one matching value for a corresponding value in a pair of terms. The system therefore makes use of MultiMaps (a known structure which allows keys to be associated with multiple values), which map each key value to a list of values. A custom implementation of the MultiMap provides the necessary functionality, overriding the get and put methods to insert and retrieve more than one value for a given key. In addition, functionality is also provided for reversing the Map, e.g. to convert a subject-predicate map to a predicate-subject map.
This makes it extremely easy for query atoms down the execution path to easily manipulate the input/output data.”); based on deserializing the serialized predictive data model as getting the value for the given key (Id), generating a deserialized predictive data model as retrieving the value (Id); retrieve, from a graph-based data repository (Leida, ¶147 “They use this knowledge to convert the data into a more representative format, i.e. RDF-based graphs”; ¶139-¶154; ¶161 “generates a query model object… It is composed of a set of query paths”), dataset (¶158 “the dataset”) data from columns of the dataset identified by the one or more parameters as the attributes (¶142 “The concepts have also some attributes: "processStartTime", "processEndTime", "startTime" and "endTime".”; please note the claim limitation “column” has been read in light of Paragraph [20], which recites attributes of a dataset as a column of data); generate, using the retrieved dataset data as input to the deserialized predictive data model, resultant data as the output data (Leida, ¶241 “The results returned from the atoms may not contain unique mappings, i.e. there can be more than one matching value for a corresponding value in a pair of terms. The system therefore makes use of MultiMaps (a known structure which allows keys to be associated with multiple values), which map each key value to a list of values. A custom implementation of the MultiMap provides the necessary functionality, overriding the get and put methods to insert and retrieve more than one value for a given key. In addition, functionality is also provided for reversing the Map, e.g. to convert a subject-predicate map to a predicate-subject map.
This makes it extremely easy for query atoms down the execution path to easily manipulate the input/output data.”); determine, by the deserialized predictive data model (Leida, ¶241), a degree of confidence as the sort position according to the specified sort variable (Leida, ¶249 “receives all the results and sorts them according to any specified sort variables”) associated with the resultant data (Leida, ¶249 “a simple list collection can also be used to store the results, with a final sort operation using the customer comparator”); integrate the resultant data and the degree of confidence into a final result set as the sorted final result (Id) based on the query language command (Leida, ¶155; ¶156); and present the final result set in the data project interface (Leida, ¶259 “Preferably the computer system has a monitor to provide a visual output display (for example in the design of the business process).”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6-7 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Leida in view of Huang [2013/0318012].
With regards to claims 6 and 16, Leida further teaches wherein generating the resultant data comprises deploying the deserialized predictive data model (Leida, ¶142 “This basic and extremely generic process model can be extended by as many domain specific models as required”; ¶278, claim 1 “combining the results of each query path to produce a result set that is the answer to said query”). Leida does not explicitly teach that the domain specific model may be a machine learning model.

Huang teaches a machine learning model (Huang, ¶50 “In one embodiment, at least the information extraction, the machine learning or the deductive reasoning is conducted based on triples, in particular RDF-triples, "(s, p, o)", wherein s and o being entities and p being a predicate”). It would have been obvious to one of ordinary skill in the art to which said subject matter pertains, before the effective filing date of the claimed invention, to have implemented the domain specific model taught by Leida as a machine learning model as taught by Huang, as it yields the predictable results of automating a manual process of generating the specific model in question. As Huang details, machine learning can support both information extraction and deductive reasoning (Huang, ¶6). The use of the machine learning model taught by Huang enables the domain specific model to derive a prediction of various scenarios, relations, and dependencies (Huang, ¶2).

With regards to claims 7 and 17, the proposed combination further teaches wherein deploying the machine learning model (Huang, ¶50) to receive the retrieved dataset data is based on an update to the retrieved dataset data (Leida, ¶128 “The system supports real-time updates of data which means that any user queries will return the most recent results at any point in time. Moreover, in case new data is available that satisfies a request from a client, the client can be automatically notified”).
Response to Arguments

Applicant’s arguments filed October 20, 2025 have been fully considered, but they are not persuasive. All of the arguments regarding the newly added limitations are addressed in the above rejections.

With regard to the prior art, Applicant argues that Leida does not teach determining, by the deserialized predictive data model, a degree of confidence associated with the resultant data. Applicant argues that the weight measure taught by Leida is a measure of computational cost, not a degree of confidence. In response, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Neither the claim language nor the specification places any limitation or indication of what constitutes a ‘degree of confidence’ or any recitation regarding how the degree of confidence is calculated. One of ordinary skill in the art is left to the plain meaning of the claim language itself. Leida recites arranging the final results in a sorted list. One of ordinary skill in the art would recognize that the sorted order of the list represents a degree of confidence of the results, e.g., the top result is the best result and the lowest-ranked result is the worst result presented. It is suggested that the claims be amended to further define the scope of ‘degree of confidence,’ as it is clear Applicant has a specific intended meaning which is not required by the claim language. This may be done by defining what degree of confidence is or means, or by reciting how the degree of confidence is calculated.

With regard to the prior art, Applicant further argues that Leida does not teach the use of a predictive data model, as the data model discussed in Leida is not a model for making predictions. In response, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims.
See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Neither the instant specification nor the claim language places any limitation regarding what a ‘predictive data model’ is. There is no recitation of what it means to ‘predict’. One of ordinary skill in the art would recognize the results retrieved by the data model as a ‘prediction’. Based upon the arguments put forth, it is clear that Applicant has an intended meaning and scope for the terms which is not required by the broadest reasonable interpretation of the claim language. It is suggested that the claims be amended to capture the scope that Applicant intends.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMANDA WILLIS, whose telephone number is (571) 270-7691. The examiner can normally be reached Monday-Friday, 8am-2pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia, can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMANDA L WILLIS/
Primary Examiner, Art Unit 2156
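For readers parsing the repeated ¶241 citations above: the MultiMap behavior the examiner maps claim limitations onto (a get that returns every value stored under a key, plus reversal of a subject-predicate map into a predicate-subject map) can be sketched as follows. This is an illustrative reconstruction, not Leida's implementation; the class and method names are assumed.

```python
# Illustrative sketch of the MultiMap described in Leida ¶241 (assumed
# names, not Leida's code): keys map to lists of values, and the map
# can be reversed, e.g. subject→predicate becomes predicate→subject.
from collections import defaultdict

class MultiMap:
    def __init__(self):
        self._data = defaultdict(list)

    def put(self, key, value):
        self._data[key].append(value)

    def get(self, key):
        # Unlike a plain dict, returns *all* values stored under the key.
        return self._data[key]

    def reverse(self):
        # Swap keys and values, preserving the multi-valued mapping.
        rev = MultiMap()
        for key, values in self._data.items():
            for v in values:
                rev.put(v, key)
        return rev

# e.g. subject → predicate pairs drawn from triples
sp = MultiMap()
sp.put("process1", "hasTask")
sp.put("process1", "startTime")
ps = sp.reverse()   # predicate → subject
```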

Prosecution Timeline

Jun 11, 2020
Application Filed
Feb 03, 2022
Non-Final Rejection — §102, §103
Mar 14, 2022
Response Filed
Apr 14, 2022
Final Rejection — §102, §103
Jun 29, 2022
Response after Non-Final Action
Jul 06, 2022
Response after Non-Final Action
Aug 18, 2022
Request for Continued Examination
Aug 25, 2022
Response after Non-Final Action
Aug 31, 2022
Non-Final Rejection — §102, §103
Feb 01, 2023
Response Filed
Mar 17, 2023
Final Rejection — §102, §103
Sep 22, 2023
Request for Continued Examination
Oct 04, 2023
Response after Non-Final Action
Apr 11, 2024
Non-Final Rejection — §102, §103
Oct 16, 2024
Response Filed
Nov 08, 2024
Final Rejection — §102, §103
Jan 14, 2025
Response after Non-Final Action
Feb 14, 2025
Request for Continued Examination
Feb 18, 2025
Response after Non-Final Action
May 15, 2025
Non-Final Rejection — §102, §103
Oct 20, 2025
Response Filed
Dec 10, 2025
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602380
SUBSUMPTION OF VIEWS AND SUBQUERIES
2y 5m to grant Granted Apr 14, 2026
Patent 12585675
HYBRID POSITIONAL POSTING LISTS
2y 5m to grant Granted Mar 24, 2026
Patent 12579206
AUTOMATIC ARTICLE ENRICHMENT BY SOCIAL MEDIA TRENDS
2y 5m to grant Granted Mar 17, 2026
Patent 12461960
SYSTEMS AND METHODS FOR MACHINE LEARNING-BASED CLASSIFICATION AND GOVERNANCE OF UNSTRUCTURED DATA USING CURATED VIRTUAL QUEUES
2y 5m to grant Granted Nov 04, 2025
Patent 12443613
REDUCING PROBABILISTIC FILTER QUERY LATENCY
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

9-10
Expected OA Rounds
36%
Grant Probability
62%
With Interview (+26.6%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 345 resolved cases by this examiner. Grant probability derived from career allow rate.
