Prosecution Insights
Last updated: April 19, 2026
Application No. 19/097,717

OBTAINING INFERENCES TO PERFORM ACCESS REQUESTS AT A NON-RELATIONAL DATABASE SYSTEM

Non-Final OA (§103, §DP)
Filed: Apr 01, 2025
Examiner: GORTAYO, DANGELINO N
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: Amazon Technologies, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (above average; 600 granted / 765 resolved; +23.4% vs TC avg)
Interview Lift: +29.7% (strong; allowance rate among resolved cases with an interview vs. without)
Typical Timeline: 2y 11m avg prosecution; 12 applications currently pending
Career History: 777 total applications across all art units

Statute-Specific Performance

§101: 9.6% (-30.4% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 765 resolved cases

Office Action

§103 §DP
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Amendment

3. In the amendments filed 4/2/2025, claims 1-20 have been cancelled. Claims 21-40 have been added. The currently pending claims are claims 21-40.

Priority

4. Applicant's claim for the benefit of prior-filed US application 18/342,569 (now US Patent 12,287,785, filed 6/27/2023), which claims benefit of prior-filed US application 17/347,420 (now US Patent 11,726,999, filed 6/14/2021), under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c) is acknowledged.

Information Disclosure Statement

5. Initialed and dated copies of Applicant's IDS form 1449, filed 4/1/2025, are attached to the instant Office Action.

Double Patenting

6. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

7. Claims 21, 28, and 35 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8, and 15 of U.S. Patent No. 12,287,785. Although the claims at issue are not identical, they are not patentably distinct from each other as shown below, since the claims, if allowed, would improperly extend the "right to exclude" already granted in the patent. The subject matter claimed in the instant application is fully disclosed in the patent and is covered by the patent since the patent and the application are claiming common subject matter, as follows:

U.S. Patent No. 12,287,785, Claim 1: A system, comprising: one or more processors; and a memory, that stores program instructions that, when executed by the at least one processor, cause the one or more processors to implement a non-relational database service, configured to: receive a request to create a machine learning model that generates, as an inference, a targeted value for a data item using one or more existing data items specified according to a query language compatible with both a relational data model and a non-relational data model; obtain the one or more existing data items specified according to the query language; cause the one or more existing items to be formatted for a machine learning system to train the machine learning model; cause the machine learning system to train the machine learning model using the formatted one or more data items; and associate the machine learning model for performing access requests to a data set hosted by the non-relational database service that includes the one or more existing items.
Instant Application, Claim 21: A system, comprising: a plurality of computing devices, respectively implementing a processor and a memory, that implement a data warehouse service, the plurality of computing devices configured to: receive a request to create a machine learning model that generates, as an inference, a targeted value for a data item using one or more existing data items specified according to a query language compatible with both a relational data model and a non-relational data model; obtain the one or more existing data items specified according to the query language; cause the one or more existing items to be formatted for a machine learning system to train the machine learning model; cause the machine learning system to train the machine learning model using the formatted one or more data items; and use the machine learning model for performing access requests to a data set hosted by the data warehouse service.

U.S. Patent No. 12,287,785, Claim 8: A method, comprising: receiving, at a non-relational database service, a request to create a machine learning model that generates, as an inference, a targeted value for a data item using one or more existing data items specified according to a query language compatible with both a relational data model and a non-relational data model; obtaining, by the non-relational database service, the one or more existing data items specified according to the query language; causing, by the non-relational database service, the one or more existing items to be formatted for a machine learning system to train the machine learning model; causing, by the non-relational database service, the machine learning system to train the machine learning model using the formatted one or more data items; and associating, by the non-relational database service, the machine learning model for performing access requests to a data set hosted by the non-relational database service that includes the one or more existing items.

Instant Application, Claim 28: A method, comprising: receiving, at a data warehouse service, a request to create a machine learning model that generates, as an inference, a targeted value for a data item using one or more existing data items specified according to a query language compatible with both a relational data model and a non-relational data model; obtaining, by the data warehouse service, the one or more existing data items specified according to the query language; causing, by the data warehouse service, the one or more existing items to be formatted for a machine learning system to train the machine learning model; causing, by the data warehouse service, the machine learning system to train the machine learning model using the formatted one or more data items; and using, by the data warehouse service, the machine learning model for performing access requests to a data set hosted by the data warehouse service.

U.S. Patent No. 12,287,785, Claim 15: One or more non-transitory computer-readable storage media storing program instructions that, when executed on or across one or more computing devices, cause the one or more computing devices to implement a non-relational database service that implements: receiving a request to create a machine learning model that generates, as an inference, a targeted value for a data item using one or more existing data items specified according to a query language compatible with both a relational data model and a non-relational data model; obtaining the one or more existing data items specified according to the query language; causing the one or more existing items to be formatted for a machine learning system to train the machine learning model; causing the machine learning system to train the machine learning model using the formatted one or more data items; and associating the machine learning model for performing access requests to a data set hosted by the non-relational database service that includes the one or more existing items.

Instant Application, Claim 35: One or more non-transitory computer-readable storage media storing program instructions that, when executed on or across one or more computing devices, cause the one or more computing devices to implement a data warehouse service that implements: receiving a request to create a machine learning model that generates, as an inference, a targeted value for a data item using one or more existing data items specified according to a query language compatible with both a relational data model and a non-relational data model; obtaining the one or more existing data items specified according to the query language; causing the one or more existing items to be formatted for a machine learning system to train the machine learning model; causing the machine learning system to train the machine learning model using the formatted one or more data items; and using the machine learning model for performing access requests to a data set hosted by the data warehouse service.

8. Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,726,999. Although the claims at issue are not identical, they are not patentably distinct from each other because the differences between the system recited in the claims of '999 and the system of the instant application would have been deemed obvious by one of ordinary skill in the art. For example, with respect to claim 1 of the instant application and claim 1 of '999, both sets of claims are directed towards a machine learning model generating an inference. Accordingly, while the language varies slightly as between the two claims, the language in each of these claims is describing a feature that is the same or a mere obvious variation of that recited in the other claim. The remaining claims of the instant application similarly encompass features that one skilled in the art would recognize as the same or obvious variants of features recited in claims 1-20 of '999.
Claim Rejections - 35 USC § 103

9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

10. Claims 21-40 are rejected under 35 U.S.C. 103 as being unpatentable over Sirin et al. (US Publication 2019/0384863 A1, cited in IDS form filed 4/1/2025) in view of Kondiles et al. (US Publication 2021/0311943 A1, cited in IDS form filed 4/1/2025).

As per claim 21, Sirin teaches a system, comprising: (see Abstract) a plurality of computing devices, respectively implementing a processor and a memory, that implement a data warehouse service, the plurality of computing devices configured to: (Figure 1, paragraph 0022, system for multi-source type interoperability) receive a request to create a machine learning model that generates, as an inference, a targeted value for a data item using one or more existing data items specified according to a query language compatible with both a relational data model and a non-relational data model; (paragraph 0030, 0033, 0038, a data model is generated, the data model including prediction models such as machine learning models, prediction model interpreted as an inference to determine a targeted value; paragraph 0027, 0031, the data model generated in response to a received query in the form of SQL or other non-graph queries, interpreted as relational data model, or graph queries, interpreted as non-relational data model) obtain the one or more existing data items specified according to the query language; (paragraph 0028, 0060, 0062, query results are obtained) cause the one or more existing items to be formatted for a machine learning system to train the machine learning model; (paragraph 0026, 0027, 0033, data representation obtained from the queries are converted into another data source) cause the machine learning system to train the machine learning model using the formatted one or more data items (paragraph 0031, 0042, 0136, data conversion utilized to train the prediction models).

Sirin does not explicitly indicate use the machine learning model for performing access requests to a data set hosted by the data warehouse service. Kondiles teaches use the machine learning model for performing access requests to a data set hosted by the data warehouse service (paragraphs 0126, 0175, 0179, 0183, 0214, machine learning models are applied to relational queries distributed across a plurality of non-relational data nodes storing queried data records).

It would have been obvious for one of ordinary skill in the art at the time the invention was made to combine Sirin's system of generating prediction models for multi-source data type interoperability with Kondiles' ability to apply machine learning models to relational queries distributed across non-relational data nodes. This gives the user the ability to utilize machine learning models applied to queries when generating prediction models. The motivation for doing so would be to more efficiently access data in database systems (paragraph 0007).

As per claim 22, Sirin teaches the machine learning model is deployed at a remote host accessible via a network endpoint and wherein performing the access requests comprises sending respective inference requests to the network endpoint for the remote host (paragraph 0028, 0057, predicted requests).

As per claim 23, Sirin teaches the machine learning model is locally deployed within the data warehouse service to perform access requests that use the machine learning model (paragraph 0129, processing within same device).

As per claim 24, Sirin teaches the plurality of computing devices are further configured to provide a network endpoint for the machine learning model within the data warehouse service responsible for handling access requests that use the machine learning model responsive to the request to create the machine learning model (paragraph 0033, 0043, generate prediction model).

As per claim 25, Sirin teaches to cause the one or more existing items to be formatted for the machine learning system to train the machine learning model, the plurality of computing devices are configured to perform, by the data warehouse service, the transformation of the one or more existing items.
(paragraph 0089, 0120, query set transformation)

As per claim 26, Sirin teaches the plurality of computing devices are further configured to perform an access request received at the data warehouse service using the machine learning model, wherein a result of performing the access request returns an inference generated by the machine learning model (paragraph 0074, 0095, 0096, predicted query set).

As per claim 27, Sirin teaches the plurality of computing devices are further configured to perform an access request received at the data warehouse service using the machine learning model, wherein a result of performing the access request inserts an inference generated by the machine learning model into the data set (paragraph 0079, 0088, 0090, join requested values).

As per claim 28, Sirin teaches a method, comprising: (see Abstract) receiving, at a non-relational database service, a request to create a machine learning model that generates, as an inference, a targeted value for a data item using one or more existing data items specified according to a query language compatible with both a relational data model and a non-relational data model; (paragraph 0030, 0033, 0038, a data model is generated, the data model including prediction models such as machine learning models, prediction model interpreted as an inference to determine a targeted value; paragraph 0027, 0031, the data model generated in response to a received query in the form of SQL or other non-graph queries, interpreted as relational data model, or graph queries, interpreted as non-relational data model) obtaining, by the data warehouse service, the one or more existing data items specified according to the query language; (paragraph 0028, 0060, 0062, query results are obtained) causing, by the data warehouse service, the one or more existing items to be formatted for a machine learning system to train the machine learning model; (paragraph 0026, 0027, 0033, data representation obtained from the queries are converted into another data source) causing, by the data warehouse service, the machine learning system to train the machine learning model using the formatted one or more data items (paragraph 0031, 0042, 0136, data conversion utilized to train the prediction models).

Sirin does not explicitly indicate using, by the data warehouse service, the machine learning model for performing access requests to a data set hosted by the data warehouse service. Kondiles teaches using, by the data warehouse service, the machine learning model for performing access requests to a data set hosted by the data warehouse service (paragraphs 0126, 0175, 0179, 0183, 0214, machine learning models are applied to relational queries distributed across a plurality of non-relational data nodes storing queried data records).

It would have been obvious for one of ordinary skill in the art at the time the invention was made to combine Sirin's system of generating prediction models for multi-source data type interoperability with Kondiles' ability to apply machine learning models to relational queries distributed across non-relational data nodes. This gives the user the ability to utilize machine learning models applied to queries when generating prediction models. The motivation for doing so would be to more efficiently access data in database systems (paragraph 0007).

As per claim 29, Sirin teaches the machine learning model is deployed at a remote host accessible via a network endpoint and wherein using the machine learning model for performing the access requests comprises sending respective inference requests to the network endpoint for the remote host (paragraph 0028, 0057, predicted requests).

As per claim 30, Sirin teaches the machine learning model is locally deployed within the data warehouse service to perform access requests that use the machine learning model (paragraph 0129, processing within same device).

As per claim 31, Sirin teaches providing, by the data warehouse service, a network endpoint for the machine learning model to perform access requests that use the machine learning model responsive to the request to create the machine learning model (paragraph 0033, 0043, generate prediction model).

As per claim 32, Sirin teaches causing the one or more existing items to be formatted for the machine learning system to train the machine learning model comprises transforming, by the data warehouse service, the one or more existing items (paragraph 0089, 0120, query set transformation).

As per claim 33, Sirin teaches performing an access request received at the data warehouse service using the machine learning model, wherein a result of performing the access request returns an inference generated by the machine learning model (paragraph 0074, 0095, 0096, predicted query set).

As per claim 34, Sirin teaches performing an access request received at the data warehouse service using the machine learning model, wherein a result of performing the access request inserts an inference generated by the machine learning model into the data set.
(paragraph 0079, 0088, 0090, join requested values)

As per claim 35, Sirin teaches one or more non-transitory computer-readable storage media storing program instructions that, when executed on or across one or more computing devices, cause the one or more computing devices to implement a data warehouse service that implements: (see Abstract) receiving a request to create a machine learning model that generates, as an inference, a targeted value for a data item using one or more existing data items specified according to a query language compatible with both a relational data model and a non-relational data model; (paragraph 0030, 0033, 0038, a data model is generated, the data model including prediction models such as machine learning models, prediction model interpreted as an inference to determine a targeted value; paragraph 0027, 0031, the data model generated in response to a received query in the form of SQL or other non-graph queries, interpreted as relational data model, or graph queries, interpreted as non-relational data model) obtaining the one or more existing data items specified according to the query language; (paragraph 0028, 0060, 0062, query results are obtained) causing the one or more existing items to be formatted for a machine learning system to train the machine learning model; (paragraph 0026, 0027, 0033, data representation obtained from the queries are converted into another data source) causing the machine learning system to train the machine learning model using the formatted one or more data items (paragraph 0031, 0042, 0136, data conversion utilized to train the prediction models).

Sirin does not explicitly indicate using the machine learning model for performing access requests to a data set hosted by the data warehouse service. Kondiles teaches using the machine learning model for performing access requests to a data set hosted by the data warehouse service (paragraphs 0126, 0175, 0179, 0183, 0214, machine learning models are applied to relational queries distributed across a plurality of non-relational data nodes storing queried data records).

It would have been obvious for one of ordinary skill in the art at the time the invention was made to combine Sirin's system of generating prediction models for multi-source data type interoperability with Kondiles' ability to apply machine learning models to relational queries distributed across non-relational data nodes. This gives the user the ability to utilize machine learning models applied to queries when generating prediction models. The motivation for doing so would be to more efficiently access data in database systems (paragraph 0007).

As per claim 36, Sirin teaches the machine learning model is deployed at a remote host accessible via a network endpoint and wherein performing the access requests comprises sending respective inference requests to the network endpoint for the remote host (paragraph 0028, 0057, predicted requests).

As per claim 37, Sirin teaches the machine learning model is locally deployed within the data warehouse service to perform access requests that use the machine learning model (paragraph 0129, processing within same device).

As per claim 38, Sirin teaches storing further program instructions that when executed on or across the one or more computing devices, cause the data warehouse service to further implement providing a network endpoint for the machine learning model responsible for handling access requests that use the machine learning model responsive to the request to create the machine learning model (paragraph 0033, 0043, generate prediction model).

As per claim 39, Sirin teaches in causing the one or more existing items to be formatted for the machine learning system to train the machine learning model, the program instructions cause the one or more computing devices to implement transforming, by the data warehouse service, the one or more existing items (paragraph 0089, 0120, query set transformation).

As per claim 40, Sirin teaches storing further program instructions that when executed on or across the one or more computing devices, cause the data warehouse service to further implement performing an access request received at the data warehouse service using the machine learning model, wherein a result of performing the access request returns an inference generated by the machine learning model (paragraph 0074, 0095, 0096, predicted query set).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Tang (US Patent 9,384,202 B1)
Fischer (US Patent 10,642,863 B2)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANGELINO N GORTAYO, whose telephone number is (571) 272-7204. The examiner can normally be reached Monday-Friday, 7:00am-3:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Rones, can be reached at 571-272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANGELINO N GORTAYO/
Primary Examiner, Art Unit 2168
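For readers following the technical dispute rather than the legal one, the workflow recited in independent claim 21 (receive a request to create a model, obtain items specified by a query, format them, train, then serve inferences on later access requests) can be sketched as below. This is a purely illustrative toy: the class, method names, and the trivial ratio "training" step are all invented here and are not taken from the application, the patents, or the cited Sirin/Kondiles references.

```python
# Hypothetical sketch of the claim-21 workflow. A real machine learning
# system would replace the toy ratio estimator used for "training".

class DataWarehouseService:
    def __init__(self, data_set):
        # data_set: item_id -> record, e.g. {"feature": ..., "target": ...}
        self.data_set = data_set
        self.model = None

    def create_model(self, query):
        # Obtain the existing data items specified according to the query.
        items = [row for row in self.data_set.values() if query(row)]
        # Format the items for the (stand-in) machine learning system.
        formatted = [(row["feature"], row["target"]) for row in items]
        # "Train": fit a single target/feature ratio as the predictor.
        ratio = sum(t for _, t in formatted) / sum(f for f, _ in formatted)
        # Associate the trained model for use by later access requests.
        self.model = lambda feature: feature * ratio

    def access(self, feature):
        # An access request whose result returns an inference from the model.
        return self.model(feature)

svc = DataWarehouseService({
    1: {"feature": 2.0, "target": 4.0},
    2: {"feature": 3.0, "target": 6.0},
})
svc.create_model(lambda row: row["feature"] > 0)
print(svc.access(5.0))  # inference for feature=5.0 -> 10.0
```

The point of contention in the §103 rejection maps onto the last step: the examiner reads Sirin as supplying everything through training, and Kondiles as supplying the use of the trained model when serving access requests.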

Prosecution Timeline

Apr 01, 2025
Application Filed
Jan 30, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596713
SYSTEMS AND METHODS FOR REDUCING THE CARDINALITY OF METRICS QUERIES
2y 5m to grant Granted Apr 07, 2026
Patent 12579143
METHODS AND SYSTEMS FOR TRANSFORMING DISTRIBUTED DATABASE STRUCTURE FOR REDUCED COMPUTE LOAD
2y 5m to grant Granted Mar 17, 2026
Patent 12554786
Matching Search Queries To Application Content
2y 5m to grant Granted Feb 17, 2026
Patent 12547621
SOURCE MONITORING FOR DISCRETE WORKLOAD PROCESSING
2y 5m to grant Granted Feb 10, 2026
Patent 12541516
DATABASE SYSTEM OPERATOR FLOW OPTIMIZATION FOR PERFORMING FILTERING BASED ON NEW COLUMNS VALUES AND POWER UTILIZATION
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 99% (+29.7%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 765 resolved cases by this examiner. Grant probability derived from career allow rate.
