Prosecution Insights
Last updated: April 19, 2026
Application No. 17/097,926

INFERENCE SYSTEM, INFERENCE DEVICE, AND INFERENCE METHOD

Status: Non-Final OA — §101
Filed: Nov 13, 2020
Examiner: VAUGHN, RYAN C
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: Axell Corporation
OA Round: 5 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 3y 9m
Grant Probability with Interview: 81%

Examiner Intelligence

Career Allow Rate: 62% (145 granted / 235 resolved; +6.7% vs TC avg)
Interview Lift: +19.4% among resolved cases with interview
Avg Prosecution: 3y 9m (45 applications currently pending)
Total Applications: 280 across all art units
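As a sanity check, the headline allow rate follows directly from the raw counts shown above; a minimal sketch of the arithmetic (using only the numbers on this page, with the rounding convention assumed):

```python
# Reproduce the examiner's headline allow rate from the counts above
# (145 granted out of 235 resolved cases).
granted = 145
resolved = 235
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 61.7%, displayed as 62%
```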

Statute-Specific Performance

§101: 23.9% (-16.1% vs TC avg)
§103: 40.1% (+0.1% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 21.9% (-18.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 235 resolved cases.
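Each delta above is stated relative to the Tech Center average. Backing that average out of the table (an illustrative check, nothing more) shows every statute is compared against the same roughly 40% baseline:

```python
# Implied Tech Center average per statute: examiner_rate - delta.
# Both figures per row are taken from the table above.
rows = {
    "101": (23.9, -16.1),
    "103": (40.1, 0.1),
    "102": (7.6, -32.4),
    "112": (21.9, -18.1),
}
for statute, (examiner_rate, delta) in rows.items():
    tc_avg = examiner_rate - delta
    print(f"Section {statute}: implied TC average ~ {tc_avg:.1f}%")
```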

Office Action — §101

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 16-24 are presented for examination.

Continued Examination under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 19, 2025 has been entered.

Response to Amendment

Applicant’s amendment has obviated the remaining specification objections. Therefore, those objections are withdrawn.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Objections

Claims 16, 19, and 22 are objected to because of the following informalities: “the inference processing, at least one of the preprocessing, and the postprocessing” should be “the inference processing and at least one of the preprocessing and the postprocessing”. The dependent claims are objected to for dependency on at least one objected-to base claim. Appropriate correction is required.

Claim Rejections - 35 USC § 101

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Claims 16-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).

Claim 16

Step 1: The claim recites a system comprising devices that comprise processors; therefore, it is directed to the statutory category of machines.
Step 2A Prong 1: The claim recites, inter alia:

[C]reating the … model including a bytecode …, the bytecode comprising at least one of a preprocessing that converts input data into a format suitable for inference processing, and a postprocessing that converts output data from the inference processing into a format suitable for subsequent application processing according to training settings in training when the learned model is created: This limitation could encompass writing out the code for the model, including the preprocessing and/or postprocessing code, using a pen and paper.

[O]perating the … model … to execute the inference processing and at least one of the preprocessing and the postprocessing, thereby enabling the learned model to be portable across the first and second operating environments: The model created could be executed mentally to perform inference processing, preprocessing, and/or postprocessing. The thereby clause merely states an intended result of operating the model and does not change the analysis.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim further recites that the method is performed by “a seller device used by the seller of the learned model, a server of a model store selling the learned model, a provider device used by the provider of an inference runtime; and a first user device configured to operate in a first operating environment and a second user device configured to operate in a second operating environment, the second operating environment being different from the first operating environment, wherein the inference runtime comprises a virtual machine, the seller device comprising: a first processor which executes a process”; that “the user device compris[es]: a second processor which executes a process”; that certain processes are “executed on the virtual machine”, “on the user device”, on “a storage unit”, and/or on a “downloaded learned model under the inference runtime”; and that “the inference processing[ and] at least one of the preprocessing[] and the postprocessing are performed independently of the seller device and the provider device by the learned model stored in the storage unit and the inference runtime executed on the respective user device”. However, these are all mere instructions to apply the judicial exception using a generic computer programmed with generic classes of computer algorithms. MPEP § 2106.05(f).

The claim further recites “uploading the created learned model to the server”, “installing the inference runtime on the storage unit such that the inference runtime can be executed on the respective user device”, and “downloading the selected learned model from the server to cause the storage unit to store the selected learned model”. All three of these limitations, however, recite the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).

Step 2B: The claim does not contain significantly more than the judicial exception.
The above-listed additional elements of the claim are mere instructions to apply the exception using a generic computer programmed with a generic class of computer algorithm for the same reasons as given above. The uploading, installing, and downloading limitations, in addition to being insignificant extra-solution activity, also recite the well-understood, routine, and conventional activity of receiving and transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).

As an ordered whole, the claim is directed to a mentally performable process of creating and operating a model to perform inference. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 17

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia, “creating a new … model including a bytecode that executes at least one of a preprocessing and a postprocessing …, the preprocessing and the postprocessing corresponding to the new inference processing after the update”. This limitation could encompass the mental creation of a model to perform preprocessing and/or postprocessing corresponding to new inference processing.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the method is performed with a “learned model” and that the processing is performed “according to training settings in retraining when the learned model is updated”. These are mere instructions to apply the exception using a generic computer programmed with generic classes of computer algorithm. MPEP § 2106.05(f). The claim further recites “uploading the newly created learned model to the server”, which recites the insignificant extra-solution activity of mere data gathering and output.

Step 2B: The claim does not contain significantly more than the judicial exception.
The above-listed additional elements of the claim are mere instructions to apply the exception using a generic computer programmed with a generic class of computer algorithm for the same reasons as given above. The uploading limitation, in addition to being insignificant extra-solution activity, also recites the well-understood, routine, and conventional activity of receiving and transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).

Claim 18

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia, “replacing another … model with the new … model”. This limitation could encompass the mental replacement of models.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the model being replaced is a “learned model operating under the inference runtime”. This is a mere instruction to apply the exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f). The claim further recites that the “new selected learned model is downloaded from the server.” This limitation recites the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).

Step 2B: The claim does not contain significantly more than the judicial exception. The above-listed additional elements of the claim are mere instructions to apply the exception using a generic computer programmed with a generic class of computer algorithm for the same reasons as given above. The downloading limitation, in addition to being insignificant extra-solution activity, also recites the well-understood, routine, and conventional activity of receiving and transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).

Claim 19

Step 1: The claim recites a system comprising devices that comprise processors; therefore, it is directed to the statutory category of machines.

Step 2A Prong 1: The claim recites, inter alia:

[C]onverting a program code … according to settings … when the … model is created, the program code including at least one of a preprocessing that converts input data into a format suitable for inference processing and a postprocessing that converts output data from the inference processing into a format suitable for subsequent applications: This limitation could encompass the conversion of the code including preprocessing and/or postprocessing code using a pen and paper.

[C]reating the … model … corresponding to the preprocessing and the postprocessing: The model could be created mentally.

[O]perating the … model … to execute the inference processing and at least one of the preprocessing and postprocessing, thereby enabling the learned model to be portable across the first and second operating environments: The model created could be executed mentally to perform inference processing, preprocessing, and/or postprocessing. The thereby clause merely states an intended result of operating the model and does not change the analysis.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim further recites that the system comprises “a seller device used by the seller of the learned model, a server of a model store selling the learned model, a provider device used by the provider of an inference runtime; and a first user device configured to operate in a first operating environment and a second user device configured to operate in a second operating environment, the second operating environment being different from the first operating environment, the seller device comprising: a first processor which executes a process”; that the conversion is a conversion “into a neural network layer according to settings in training when the learned model is created”; that the “learned model includ[es] at least one layer”; that “each of the first user device and the second user device compris[es]: a storage unit and a second processor which executes a process”; that the “downloaded learned model [is operated] under the inference runtime”; and that “the inference processing[ and] at least one of the preprocessing[] and the postprocessing are performed independently of the seller device and the provider device by the learned model stored in the storage unit and the inference runtime executed on the respective user device”. However, these limitations amount to mere instructions to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f).

The claim further recites “uploading the created learned model to the server”, “installing the inference runtime on the storage unit such that the inference runtime can be executed on the respective user device”, and “downloading the selected learned model from the server to cause the storage unit to store the selected learned model”. However, these limitations all recite the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).

Step 2B: The claim does not contain significantly more than the judicial exception.
The above-listed additional elements of the claim are mere instructions to apply the exception using a generic computer programmed with a generic class of computer algorithm for the same reasons as given above. The uploading, installing, and downloading limitations, in addition to being insignificant extra-solution activity, also recite the well-understood, routine, and conventional activity of receiving and transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).

As an ordered whole, the claim is directed to a mentally performable process of creating and operating a model to perform inference. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 20

Step 1: A machine, as above.

Step 2A Prong 1, Step 2A Prong 2, Step 2B: The additional limitations of this claim are analyzed the same way as in the rejection of claim 17.

Claim 21

Step 1: A machine, as above.

Step 2A Prong 1, Step 2A Prong 2, Step 2B: The additional limitations of this claim are analyzed the same way as in the rejection of claim 18.

Claim 22

Step 1: The claim recites a system comprising devices that comprise processors; therefore, it is directed to the statutory category of machines.

Step 2A Prong 1: The claim recites, inter alia:

[C]reating the … model: This limitation could encompass the mental creation of the model.

[C]reating a bytecode …, the bytecode comprising at least one of a preprocessing that converts input data into a format suitable for inference processing, and a postprocessing that converts output data from the inference processing into a format suitable for subsequent application processing: This limitation could encompass the creation of the preprocessing and/or postprocessing bytecode by writing it down using a pen and paper.
[A]ssociating the bytecode with the … model: This limitation could encompass the mental association of the bytecode with the model.

[O]perating the … model and the bytecode … to execute the inference processing and at least one of the preprocessing and postprocessing, thereby enabling the learned model to be portable across the first and second operating environments: The model created could be executed mentally to perform inference processing, preprocessing, and/or postprocessing. The thereby clause merely states an intended result of operating the model and does not change the analysis.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the method is performed by “a seller device used by the seller of the learned model, a server of a model store selling the learned model, a provider device used by the provider of an inference runtime; and a first user device configured to operate in a first operating environment and a second user device configured to operate in a second operating environment, the second operating environment being different from the first operating environment, wherein the inference runtime comprises a virtual machine, the seller device comprising: a first processor which executes a process”; that the “bytecode [is] executed on the virtual machine”; that the model is a “learned model”; that the conversion occurs “according to settings in training when the learned model is created”; that “each of the first user device and the second user device compris[es]: a storage unit and a second processor which executes a process”; that the “downloaded learned model and the bytecode [operate] under the inference runtime”; and that “the inference processing[ and] at least one of the preprocessing[] and the postprocessing are performed independently of the seller device and the provider device by the learned model stored in the storage unit and the inference runtime executed on the respective user device”. However, these are mere instructions to apply the judicial exception using a generic computer programmed with generic classes of computer algorithms. MPEP § 2106.05(f).

The claim further recites “uploading [the] associated … learned model and the bytecode to the server”; “installing the inference runtime on the storage unit such that the inference runtime can be executed on the respective user device”; and “downloading the selected learned model and the bytecode associated with the selected learned model from the server”. However, these limitations amount to the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).

Step 2B: The claim does not contain significantly more than the judicial exception. The above-listed additional elements of the claim are mere instructions to apply the exception using a generic computer programmed with a generic class of computer algorithm for the same reasons as given above. The uploading, installing, and downloading limitations, in addition to being insignificant extra-solution activity, also recite the well-understood, routine, and conventional activity of receiving and transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).

As an ordered whole, the claim is directed to a mentally performable process of creating and operating a model to perform inference. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.

Claim 23

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia, “creating a new … model …, creating a new bytecode that executes at least one of a preprocessing and a postprocessing …, the preprocessing and the postprocessing corresponding to the new inference processing after the update[; and] associating the new bytecode with the new … model”.
These limitations could encompass mentally creating the model and the bytecode for executing the preprocessing and/or postprocessing and the mental association of the bytecode with the model.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the model is a “learned model [that] is updated by retraining” and that the preprocessing and/or postprocessing occur “according to training settings in retraining”. These limitations amount to mere instructions to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f). The claim further recites “uploading the associated new learned model and the new bytecode to the server”. This limitation recites the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).

Step 2B: The claim does not contain significantly more than the judicial exception. The above-listed additional elements of the claim are mere instructions to apply the exception using a generic computer programmed with a generic class of computer algorithm for the same reasons as given above. The uploading limitation, in addition to being insignificant extra-solution activity, also recites the well-understood, routine, and conventional activity of receiving and transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).

Claim 24

Step 1: A machine, as above.

Step 2A Prong 1: The claim recites, inter alia, “replacing another … model and another bytecode … with a new … model and a new bytecode …; and operating the new … model and the new bytecode”. This limitation could encompass the mental replacement and operation of the model and bytecode.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim further recites that the model is a “learned model … [operating] under the inference runtime”. However, this is a mere instruction to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f). The claim further recites that “the new selected learned model and the new bytecode are downloaded from the server”. However, this limitation recites the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).

Step 2B: The claim does not contain significantly more than the judicial exception. The above-listed additional elements of the claim are mere instructions to apply the exception using a generic computer programmed with a generic class of computer algorithm for the same reasons as given above. The downloading limitation, in addition to being insignificant extra-solution activity, also recites the well-understood, routine, and conventional activity of receiving and transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).

Response to Arguments

Applicant's arguments filed November 19, 2025 (“Remarks”) have been fully considered but they are not persuasive. Applicant argues that the claims as amended are now eligible under 35 USC § 101 because they now explicitly recite that the AI models are portable and interoperable across multiple user devices operating in different environments, thereby integrating any judicial exception recited into a practical application. Applicant analogizes the instant application to BASCOM Global Internet v. AT&T Mobility, Inc., 827 F.3d 1341, 119 USPQ2d 1236 (Fed. Cir. 2016), suggesting that the instant claims, like those at issue in BASCOM, are directed to a non-conventional technical implementation of AI models that solves the technical problem of ensuring portability of AI models across different environments. Remarks at 10-14.

While, like the BASCOM claims, the instant claims present a “close call,” MPEP § 2106.06(b), the newly added elements of the claims do not confer eligibility on the claims in a way that the previous version of the claims did not. For instance, Applicant’s recitation that the operation of the model under an inference runtime “enabl[es] the learned model to be portable across the first and second operating environments,” as currently written, merely recites an intended result of said operation and is not positively recited. That is, the claims do not require actually moving the model across environments. Moreover, the new recitation of two devices each of which operates in a different operating environment merely instructs the reader to apply the judicial exception using two separate generically recited computers. MPEP § 2106.05(f). The claims as a whole are directed to the abstract idea of converting code and executing models to produce inference results, and the additional elements are directed either to performing that abstract idea in a particular technological environment, MPEP § 2106.05(h); applying the abstract idea on a computer, MPEP § 2106.05(f); or data gathering and output activities that constitute insignificant extra-solution activity that is well-understood, routine, and conventional, MPEP §§ 2106.05(d)(II), (g).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN C VAUGHN whose telephone number is (571)272-4849. The examiner can normally be reached M-R 7:00a-5:00p ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN C VAUGHN/
Primary Examiner, Art Unit 2125
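For readers skimming the rejection, the arrangement the examiner is characterizing is roughly: a learned model ships together with bytecode for its preprocessing and postprocessing, and an inference runtime (a virtual machine) installed on each user device interprets that bytecode, so the same package runs in different operating environments. A minimal sketch of that idea, with every name and implementation detail hypothetical (the claims disclose no actual code), might look like:

```python
class PackagedModel:
    """A learned model bundled with pre/post-processing bytecode (illustrative)."""
    def __init__(self, weights, pre_bytecode=None, post_bytecode=None):
        self.weights = weights              # trained parameters
        self.pre_bytecode = pre_bytecode    # converts input into inference format
        self.post_bytecode = post_bytecode  # converts output for the application

class InferenceRuntime:
    """Stand-in for the claimed virtual-machine runtime installed per device."""
    def run(self, model, raw_input):
        x = self._exec(model.pre_bytecode, raw_input)   # preprocessing
        y = self._infer(model.weights, x)               # inference proper
        return self._exec(model.post_bytecode, y)       # postprocessing

    def _exec(self, bytecode, data):
        # Here "bytecode" is just a callable; a real runtime would interpret
        # portable instructions instead.
        return bytecode(data) if bytecode else data

    def _infer(self, weights, x):
        return sum(w * v for w, v in zip(weights, x))   # toy linear model

# The same package can be handed to a runtime on any device or environment:
model = PackagedModel(
    weights=[0.5, 1.5],
    pre_bytecode=lambda s: [float(t) for t in s.split(",")],
    post_bytecode=lambda y: f"score={y:.1f}",
)
print(InferenceRuntime().run(model, "2,4"))  # score=7.0
```

Because the runtime, not the model package, is environment-specific, portability falls out of the design: only the runtime needs porting.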

Prosecution Timeline

Nov 13, 2020: Application Filed
Jan 30, 2024: Non-Final Rejection — §101
Jun 03, 2024: Response Filed
Aug 29, 2024: Final Rejection — §101
Feb 03, 2025: Request for Continued Examination
Feb 08, 2025: Response after Non-Final Action
Apr 21, 2025: Non-Final Rejection — §101
Jul 24, 2025: Response Filed
Aug 20, 2025: Final Rejection — §101
Oct 10, 2025: Interview Requested
Oct 16, 2025: Applicant Interview (Telephonic)
Oct 16, 2025: Examiner Interview Summary
Nov 19, 2025: Request for Continued Examination
Nov 29, 2025: Response after Non-Final Action
Jan 12, 2026: Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602448: PROGRESSIVE NEURAL ORDINARY DIFFERENTIAL EQUATIONS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602610: CLASSIFICATION BASED ON IMBALANCED DATASET (granted Apr 14, 2026; 2y 5m to grant)
Patent 12561583: Systems and Methods for Machine Learning in Hyperbolic Space (granted Feb 24, 2026; 2y 5m to grant)
Patent 12541703: MULTITASKING SCHEME FOR QUANTUM COMPUTERS (granted Feb 03, 2026; 2y 5m to grant)
Patent 12511526: METHOD FOR PREDICTING A MOLECULAR STRUCTURE (granted Dec 30, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants; study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 62% (81% with interview, +19.4%)
Median Time to Grant: 3y 9m
PTA Risk: High

Based on 235 resolved cases by this examiner. Grant probability is derived from the career allow rate.
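The with-interview figure is simple addition of the lift onto the base probability (values from the panel above; the underlying projection model is not disclosed, so this shows only the arithmetic):

```python
# Combine the base grant probability with the interview lift shown above.
base_probability = 62.0   # career-derived grant probability, %
interview_lift = 19.4     # percentage-point lift with an interview
with_interview = base_probability + interview_lift
print(f"With interview: {with_interview:.0f}%")  # 81%
```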
