Prosecution Insights
Last updated: April 19, 2026
Application No. 18/399,005

ARTIFICIAL INTELLIGENCE SYSTEM WITH ITERATIVE TWO-PHASE ACTIVE LEARNING

Non-Final OA: §101, §103, §DP
Filed
Dec 28, 2023
Examiner
JACOB, WILLIAM J
Art Unit
3696
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Amazon Technologies, Inc.
OA Round
3 (Non-Final)
48%
Grant Probability
Moderate
3-4
OA Rounds
3y 9m
To Grant
82%
With Interview

Examiner Intelligence

Grants 48% of resolved cases
48%
Career Allow Rate
164 granted / 338 resolved
-3.5% vs TC avg
Strong +34% interview lift
+34.0% Interview Lift (resolved cases with interview)
Typical timeline
3y 9m
Avg Prosecution
48 currently pending
Career history
386
Total Applications
across all art units
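The headline examiner statistics above are simple ratios. A minimal sketch, assuming the dashboard's apparent definitions (allow rate = grants / resolved cases; the "with interview" figure is the displayed baseline plus the reported lift, in percentage points), which the vendor does not document:

```python
# Sketch of the examiner-stat arithmetic (definitions assumed from the
# dashboard, not documented by the vendor).
granted, resolved = 164, 338

allow_rate = granted / resolved          # ~0.485, displayed as 48%
assert abs(allow_rate - 0.485) < 0.001

# "With interview" appears to be the displayed baseline plus the lift,
# in percentage points: 48 + 34 = 82.
with_interview = 48 + 34
assert with_interview == 82
```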

Statute-Specific Performance

§101
39.9%
-0.1% vs TC avg
§103
32.0%
-8.0% vs TC avg
§102
12.0%
-28.0% vs TC avg
§112
10.0%
-30.0% vs TC avg
Black line = Tech Center average estimate • Based on career data from 338 resolved cases
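The per-statute deltas can be sanity-checked against the stated rates. Assuming each delta is simply the examiner's rate minus the Tech Center average (a definition inferred from the dashboard, not documented), all four statutes imply the same 40.0% Tech Center average, consistent with a single black-line estimate:

```python
# Check that the "vs TC avg" deltas are consistent with the stated rates.
# Assumes delta = examiner rate - Tech Center average (inferred definition).
examiner_rate = {"101": 39.9, "103": 32.0, "102": 12.0, "112": 10.0}
delta_vs_tc   = {"101": -0.1, "103": -8.0, "102": -28.0, "112": -30.0}

implied_tc_avg = {s: examiner_rate[s] - delta_vs_tc[s] for s in examiner_rate}
# All four statutes imply the same Tech Center average of ~40.0%.
```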

Office Action

§101 §103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 21-40 are currently pending and are presented for examination on the merits.

Claim Objections

Claims 35-40 are objected to for use of the phrase "One or more . . . media," which is confusing (media is plural, such that "one media" is confusing) and also raises indefiniteness concerns. Please change to "A . . . medium," which would incorporate the plural. Appropriate correction is required.

Double Patenting

***This rejection is held in abeyance as requested by Applicant, until the claims are otherwise identified as allowable.***

Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,893,772. Although the claims at issue are not identical, they are not patentably distinct from each other because the broader instant claims recite the same limitations contained in the narrower claims of the parent application. For example, instant Claim 30 (depending from the broader independent Claim 21) maps to Claim 3 of the '772 Patent (regarding a search query). As such, the instant claims are obvious in light of the parent claims.

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-40 are rejected under 35 U.S.C. § 101 because they recite non-patentable subject matter under MPEP § 2106, e.g., the 2019 PEG, October update. The claimed invention is directed to a judicial exception (e.g., an abstract idea) without a practical application or significantly more.

More particularly, when considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea), and if so, it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. Broad categories of abstract ideas include fundamental economic practices, certain methods of organizing human activity, an idea itself, and mathematical relationships/formulas. See, generally, Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. __ (2014) (citing Mayo Collaborative Servs. v. Prometheus Labs., Inc., 132 S. Ct. 1289, 1294, 1297-98 (2012)); the Federal Register notice titled 2014 Interim Guidance on Patent Subject Matter Eligibility (79 FR 74618), found at http://www.gpo.gov/fdsys/pkg/FR-2014-12-16/pdf/2014-29414.pdf; the 2015 Update to the Interim Guidance; the 2019 Revised Patent Subject Matter Eligibility Guidance, Fed. Reg., Vol. 84, No. 4, January 7, 2019; and associated Office memoranda.

Under the 2019 PEG, Step 2A, Prong 1, Claims 21-40 recite a judicial exception, including a method of organizing human activity (e.g., a fundamental economic principle). More particularly, the entirety of the method steps is directed towards organizing and labeling records into multiple categories or buckets (more recently using cloud-based language models), using a natural language model to identify records to be labeled, and training a classification model based on a plurality of labeled records. This is a widely applied commercial practice performed by humans (model architects, businesses, data processing centers, etc.) using modern generic computing. That is to say, humans have long used multi-step iterative processes to group things to various degrees of gradation. Moreover, labeling these groupings or buckets has long been done (otherwise their classification would be meaningless). As such, the invention includes an abstract idea under the 2019 PEG and Alice Corporation.

Under Step 2A, Prong 2, the claims fail to recite a practical application of the exception, because the extraneous limitations (e.g., the structure: a cloud computing environment, a network-accessible service, one or more programmatic interfaces, a natural language model, a classification model, etc.) merely add insignificant extra-solution activity to the judicial exception (MPEP 2106.05(g)), generally link the use of the judicial exception to a particular technological environment or field of use (MPEP 2106.05(h)), and/or instruct an artisan to apply it (the method) across generic computing technology. Here, the use of artificial intelligence to replace human intelligence merely automates what was previously done, and fails to offer a practical application. A claim does not cease to be abstract for section 101 purposes simply because the claim confines the abstract idea to a particular technological environment in order to effectuate a real-world benefit. See Alice, 573 U.S. at 222; BSG Tech LLC v. BuySeasons, Inc., 899 F.3d 1281, 1287 (Fed. Cir. 2018); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1353 (Fed. Cir. 2014). "[I]t is not enough, however, to merely improve a fundamental practice or abstract process by invoking a computer merely as a tool." Customedia Techs., LLC v. Dish Network Corp., 951 F.3d 1359, 1364 (Fed. Cir. 2020) (citations omitted). More particularly, the claims fail to recite an improvement to the functioning of a computer or technology (under MPEP § 2106.05(a)), the use of a particular machine (under § 2106.05(b)), a transformation or reduction of a particular article (§ 2106.05(c)), or application of the judicial exception in some other meaningful way beyond generally linking its use to a particular technological environment (§ 2106.05(e)).

Under Step 2B, the additional elements offered by the dependent claims (e.g., an electronic representative, mobile electronic device, computer processor, etc.) either further delineate the abstract idea, recite insignificant extra-solution activity, or instruct the artisan to apply it (the abstract idea) across generic computing technology. The claims as a whole do not amount to significantly more than the abstract idea itself, because no claim effects an improvement to another technology or technical field or to the functioning of a computer itself, or moves beyond a general link of the use of the abstract idea to a particular, albeit well-understood, routine, and conventional technological environment. Viewing the limitations as an ordered combination does not add anything beyond looking at the limitations individually. Under Alice, merely applying or executing the abstract idea on one or more generic computer systems (e.g., a computer system comprising a generic database; a generic element (NIC) for providing website access; a generic element for receiving user input; and a generic display on the computer, in any of their forms) to carry out the abstract idea more efficiently fails to cure patent ineligibility. See, e.g., Content Extraction, 776 F.3d at 1347 (claims reciting a "scanner" are nevertheless directed to an abstract idea); Mortg. Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d 1314, 1324-25 (Fed. Cir. 2016) (claims reciting an "interface," "network," and a "database" are nevertheless directed to an abstract idea). Courts have recognized the following computer functions to be well-understood, routine, and conventional functions when they are claimed in a merely generic manner: performing repetitive calculations; receiving, processing, and storing data; electronically scanning or extracting data from a physical document; electronic recordkeeping; automating mental tasks; and receiving or transmitting data over a network, e.g., using the Internet to gather data. MPEP 2106.05(d). The italicized tasks are particularly germane to the instant invention.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 21-26, 28-33, and 35-40 are rejected under 35 U.S.C. § 103 as being unpatentable over US 10,417,350 to Mohamed et al., in view of US 2019/0050443 to Rosu et al.

With respect to Claims 21, 28, and 35, Mohamed teaches a system comprising one or more computing devices, including one or more non-transitory computer-accessible storage media storing program instructions, wherein the one or more computing devices include instructions (FIGS. 1, 10) that execute a computer-implemented method (FIGS. 1 and 10), comprising: identifying, at a network-accessible service of a cloud computing environment (col 5, ln 32-34), a data set which is to be used for training a classification model, wherein the data set comprises a plurality of unlabeled records (col 1, ln 17-35; col 2, ln 46-65); receiving an indication, via one or more programmatic interfaces of the network-accessible service (FIG. 7; col 5, ln 30-35), that at least some unlabeled records of the plurality of unlabeled records are to be selected for labeling based at least in part on output generated by a language model (col 2, ln 46-65; i.e., Mohamed teaches training "using data in a particular language to accept input"; labeled records including tokens expressed in a particular natural language); and training, at the network-accessible service, a classification model using a plurality of labeled records, wherein the plurality of labeled records includes at least a first record which was selected for labeling from the plurality of unlabeled records based at least in part on output generated by the language model (col 1, ln 17-35; col 2, ln 46-65).

Mohamed fails to expressly teach, but Rosu teaches, receiving . . . from the plurality of unlabeled records of the identified data set and based at least in part on output generated by a language model for labeling, [and] training . . . from the plurality of unlabeled records of the identified data set and based at least in part on the output generated by the language model for labeling ([0021]; [0030]; [0040]). Rosu discusses the need for natural language models to rely on precise training in text classification, including reducing interpretation error ([0002-03]). It would have been obvious to one of ordinary skill in the art to modify Mohamed to include the use of a separate natural language model as taught by Rosu.

With respect to Claims 22, 29, and 36, Mohamed teaches wherein the indication is received via a parameter ("metric") of a request to train the classification model (col 2, ln 50-65).

With respect to Claims 23, 30, and 37, Mohamed teaches wherein the output generated by the language model comprises a search query (col 1, ln 30-45, determining values to be transmitted to an output layer).

With respect to Claims 24, 31, and 38, Mohamed teaches receiving, via the one or more programmatic interfaces of the network-accessible service, a request to identify an annotator for labeling one or more unlabeled records; providing, by the network-accessible service via the one or more programmatic interfaces in response to the request, information pertaining to a particular annotator; and obtaining, at the network-accessible service, a label for the first record from the particular annotator (FIG. 3, the similarity analysis algorithm is equivalent to an annotator).

With respect to Claims 25, 32, and 39, Mohamed teaches storing, at the network-accessible service, a first trained version of the classification model which was trained using the plurality of labeled records; and in response to a classification request for a second record, received at the network-accessible service via the one or more programmatic interfaces, providing an indication of a predicted class of the second record, wherein the predicted class is obtained from the first trained version (Abstract; col 4, ln 2-16).

With respect to Claims 26, 33, and 40, Mohamed teaches wherein the training of the classification model comprises a plurality of learning iterations (col 5, ln 25-35, adaptations), the computer-implemented method further comprising: causing to be presented, by the network-accessible service via one or more graphical interfaces (col 5, ln 25-35), respective indications of one or more metrics pertaining to the plurality of learning iterations (Abstract; col 9, ln 4-10), wherein a particular metric of the one or more metrics indicates one or more of: (a) a number of labeled records as a function of completed learning iterations, or (b) a classification quality metric as a function of completed learning iterations (col 1, ln 27-35, metrics throughout).
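The claimed pipeline mapped above (identify an unlabeled data set, select records for labeling based on language-model output, then train a classifier on the labeled records) can be sketched in a few lines. This is an illustrative toy, not the applicant's or the cited references' implementation; the keyword heuristic stands in for real language-model output, and the label tally stands in for real model training.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    text: str
    label: Optional[str] = None

def lm_relevance(record: Record) -> float:
    # Stand-in for language-model output used to rank records for labeling;
    # a trivial keyword heuristic, purely for illustration.
    return float("refund" in record.text.lower())

def select_for_labeling(unlabeled, k):
    # Phase 1: pick the k records the language model scores highest.
    return sorted(unlabeled, key=lm_relevance, reverse=True)[:k]

def train_classifier(labeled):
    # Phase 2 stand-in "training": tally labels (a real system would fit a model).
    counts = {}
    for r in labeled:
        counts[r.label] = counts.get(r.label, 0) + 1
    return counts

unlabeled = [Record("please refund my order"), Record("great product!")]
to_label = select_for_labeling(unlabeled, k=1)
to_label[0].label = "complaint"  # an annotator supplies the label
model = train_classifier([r for r in unlabeled if r.label is not None])
```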
Claims 27 and 34 are rejected under § 103 as being unpatentable over Mohamed, in view of Rosu, and further in view of US 6,937,994 to Lyengar.

With respect to Claims 27 and 34, Mohamed fails to expressly teach, but Lyengar teaches, wherein the plurality of labeled records includes a second record which was selected for labeling from the plurality of unlabeled records based at least in part on one or more of: (a) a query-by-committee algorithm or (b) an uncertainty sampling algorithm (col 2, ln 63-67). Lyengar discusses the desire to provide a closed-loop system methodology for selecting samples used for efficiently building models (col 2, ln 1-5). It would have been obvious to one of ordinary skill in the art to modify Mohamed to include uncertainty sampling algorithms to more efficiently build its model.

Response to Remarks

Applicant's remarks submitted on 12/22/2025 have been fully considered but are not persuasive where objections/rejections are maintained. The Double Patenting rejection is held in abeyance until further notice. (Applicant's traversal as to form is noted. Initially, it is noted that it is not necessary that each claim set be repeated or that each difference of each claim be discussed. Upon traversal, Applicant is asked to specify which current dependent claim is not obvious in light of the claim set of the parent application.)

As per § 101, the claims continue to include an abstract idea without sufficient extraneous or additional limitations to effect a practical application (Prong 2) or significantly more under Part 2B. Training a model to classify data to emulate human intelligence, by first labeling at least a portion of the training data, is a method of organizing human activity. (E.g., a search for "labeled SAME unlabeled SAME train$6 SAME model" yielded over 24,000 results in the U.S. patent database alone.) The invention continues to summarily recite a cloud-based method of training a classification model wherein at least some training records are selected for labeling based on the output of a natural language model, which fails to recite an inventive concept (see prior art references of record, including US 2019/0050443 to Rosu et al., added via updated search herein). Applicant's analogy to Example 23 is not persuasive because the instant case does not involve the improvement to technology (or similar) relied upon in that case. Applicant is cautioned that the claims and their interpretation under BRI are broader than what is argued in the remarks. For example, whenever data is labeled, at least a portion of unlabeled data was selected for labeling.

As per the prior art rejections, Applicant's remarks overcome the previous § 102 rejection; however, Rosu has been added to teach a separate natural language model for labeling a constituent of unlabeled records based on an output (interpretive error). Mohamed teaches one of ordinary skill in the art, armed with the state of the art, that the input data be selected from a particular language and that it be labeled (including tokens derived from the natural language). Mohamed expressly teaches using "classification models" trained using a plurality of labeled records, selected as described above. Mohamed expressly teaches the use of a programmatic interface for applying the model to classify data. Mohamed teaches requiring classification training records "to be labeled." This teaches at least a portion of unlabeled records being selected to be labeled, as broadly recited. Lyengar expressly teaches "[u]ncertainty sampling methods iteratively identify instances in the data that need to be labeled based on some measure that suggests that the labels for these instances are uncertain" (col 2, ln 62-67).
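The uncertainty sampling technique quoted from Lyengar can be illustrated concretely. In the common least-confidence variant for a binary task, the instances whose predicted probability is closest to 0.5 are the most uncertain and are sent for labeling first. The probabilities below are invented for the example; a real system would use a trained classifier's predictions.

```python
# Illustrative uncertainty sampling (least-confidence, binary task).
def uncertainty(p_positive: float) -> float:
    # Distance from total certainty; maximal at p = 0.5.
    return 1.0 - max(p_positive, 1.0 - p_positive)

# Hypothetical model probabilities for four unlabeled documents.
predictions = {"doc_a": 0.97, "doc_b": 0.52, "doc_c": 0.10, "doc_d": 0.60}

# Rank unlabeled instances by uncertainty, most uncertain first.
ranked = sorted(predictions, key=lambda d: uncertainty(predictions[d]), reverse=True)
to_label = ranked[:2]  # send the two most uncertain documents to an annotator
```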
Please note that the applied reference(s) need not use the same terminology or disclose the limitation verbatim, and also that the entirety of a prior art reference is to be applied to the respective claim(s), such that the pinpoint citations above are exemplary and provided for Applicant's benefit; other locations within the applied reference(s) may further support the rejection. MPEP 2141.02(VI).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM J JACOB, whose telephone number is (571) 270-3082. The examiner can normally be reached M-F 8:00-5:00, alternating Fridays off.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Gart, can be reached at (571) 272-3955. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM J JACOB/
Examiner, Art Unit 3696

Prosecution Timeline

Dec 28, 2023
Application Filed
Feb 22, 2025
Non-Final Rejection — §101, §103, §DP
May 29, 2025
Interview Requested
Jun 04, 2025
Applicant Interview (Telephonic)
Jun 06, 2025
Examiner Interview Summary
Jun 24, 2025
Response Filed
Oct 04, 2025
Final Rejection — §101, §103, §DP
Dec 22, 2025
Response after Non-Final Action
Jan 07, 2026
Request for Continued Examination
Feb 12, 2026
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579533
DYNAMIC TRANSACTION ALLOCATION SYSTEM FOR IN-FLIGHT CONNECTIVITY
2y 5m to grant • Granted Mar 17, 2026
Patent 12536546
ELECTRONICALLY SIGNING A DOCUMENT USING A PAYMENT CARD
2y 5m to grant • Granted Jan 27, 2026
Patent 12530728
ARTIFICIAL INTELLIGENCE DRIVEN SYSTEM FOR ACCELERATED SOFTWARE APPLICATION CONTENT GENERATION
2y 5m to grant • Granted Jan 20, 2026
Patent 12493907
METHOD AND SYSTEM FOR REPAIRING EXPLANATIONS FOR NON-LINEAR MACHINE LEARNING MODELS
2y 5m to grant • Granted Dec 09, 2025
Patent 12475513
Systems and Methods For Predicted Total Loss Based On Image Analysis and Point Of View Determinations
2y 5m to grant • Granted Nov 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
82%
With Interview (+34.0%)
3y 9m
Median Time to Grant
High
PTA Risk
Based on 338 resolved cases by this examiner. Grant probability derived from career allow rate.
