Prosecution Insights
Last updated: April 19, 2026
Application No. 18/901,138

ALLOCATION RESULT DETERMINATION DEVICE AND ALLOCATION RESULT DETERMINATION METHOD

Non-Final OA: §101, §103, §112
Filed
Sep 30, 2024
Examiner
PADOT, TIMOTHY
Art Unit
3625
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Mitsubishi Electric Corporation
OA Round
1 (Non-Final)
Grant Probability: 39% (At Risk)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 3y 9m
Grant Probability with Interview: 67%

Examiner Intelligence

Career Allow Rate: 39% (221 granted / 562 resolved; -12.7% vs TC avg)
Interview Lift: +28.1% (resolved cases with interview)
Avg Prosecution: 3y 9m (39 currently pending)
Total Applications: 601 (across all art units)
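The headline figures above can be reproduced from the raw counts. A minimal sketch follows; note that only the 221/562 totals appear in the panel, while the 67.1%/39.0% with/without-interview allowance rates are assumptions chosen to match the reported +28.1% lift:

```python
# Sketch of how the examiner metrics above are likely derived.
granted, resolved = 221, 562              # career totals from the panel
allow_rate = granted / resolved           # career allowance rate

rate_with_interview = 0.671               # assumed, not in the panel
rate_without_interview = 0.390            # assumed, not in the panel
interview_lift = rate_with_interview - rate_without_interview

print(f"Career allow rate: {allow_rate:.1%}")    # ~39.3%
print(f"Interview lift: {interview_lift:+.1%}")  # +28.1%
```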

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 8.6% (-31.4% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 562 resolved cases
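A quick consistency check on the per-statute figures: each "vs TC avg" delta implies the same Tech Center baseline, so the black line sits near 40% for every statute (assuming delta = examiner rate minus TC average):

```python
# Back out the implied Tech Center baseline from each statute's delta:
# implied TC average = examiner allowance rate - reported delta.
stats = {                       # statute: (examiner allowance %, delta vs TC avg)
    "§101": (33.2, -6.8),
    "§103": (35.3, -4.7),
    "§102": (8.6, -31.4),
    "§112": (17.1, -22.9),
}

for statute, (rate, delta) in stats.items():
    implied_avg = rate - delta  # the baseline the chart compares against
    print(f"{statute}: implied TC average = {implied_avg:.1f}%")
# every statute implies the same ~40.0% baseline
```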

Office Action

§101 §103 §112
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . DETAILED ACTION Status of Claims This communication is a First Office Action on the merits in reply to application number 18/901,138 filed on 09/30/2024. Applicant’s response filed on 02/03/2026 withdraws claims 2 and 4. Claims 1 and 3 are currently pending and have been examined. Election/Restriction Applicant’s election without traverse of claims 1 and 3 (Group I) in the reply filed on 02/03/2026 is acknowledged. Claims 2 and 4 (Group II) are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Information Disclosure Statement The information disclosure statement (IDS) filed on 09/30/2024 has been considered. Priority Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 365 is acknowledged. Continuation This application is a continuation of PCT App. No. PCT/JP2022/020003 (filed 05/12/2022). In accordance with MPEP §609.02 A. 2 and MPEP §2001.06(b) (last paragraph), the Examiner has reviewed and considered the prior art cited in the Parent Application. Also in accordance with MPEP §2001.06(b) (last paragraph), all documents cited or considered ‘of record’ in the Parent Applications are now considered cited or ‘of record’ in this application. Additionally, Applicant(s) are reminded that a listing of the information cited or ‘of record’ in the Parent Application need not be resubmitted in this application unless Applicants desire the information to be printed on a patent issuing from this application. See MPEP §609.02 A. 2. Finally, Applicants are reminded that the prosecution history of the Parent Application is relevant in this application. See e.g., Microsoft Corp. v. Multi-Tech Sys., Inc., 357 F.3d 1340, 1350, 69 USPQ2d 1815, 1823 (Fed. Cir. 
2004) (holding that statements made in prosecution of one patent are relevant to the scope of all sibling patents). Claim Rejections - 35 USC § 112 The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention. Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant regards as the invention. Claim 3 is directed to a method, but does not appear to actually recite any method step(s), but instead recites a device and a plurality of functions the device is configured to perform. In particular, instead of method steps, the claim recites limitations directed to a processor and memory of a device that enable the device to perform a process, rather than steps required to be executed by the method claim, such that claim 3 is directed to both a method and a device. A single claim which claims both an apparatus and the method steps of using the apparatus is indefinite under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. See In re Katz Interactive Call Processing Patent Litigation, 639 F.3d 1303, 1318, 97 USPQ2d 1737, 1748-49 (Fed. Cir. 2011). MPEP 2173.05(p). 
Furthermore, shifting from one statutory category to another (i.e., method to device) renders the claim scope ambiguous because it is unclear whether any steps are required to be executed by the device, or whether the claim merely requires possession of a device capable of performing the process (steps) for which it is configured. For purposes of examination, claim 3 will be interpreted as a processor-executed method claim that is required to implement each of the bodily steps/functions by executing a program stored in memory. Appropriate correction is required. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1 and 3 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-patentable subject matter. The claims are directed to an abstract idea without significantly more. Claims 1 and 3 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below, in accordance with the subject matter eligibility guidance set forth in MPEP 2106. With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106.03), it is first noted that the claimed device (claim 1) and method/device (claim 3) are each directed to at least one potentially eligible category of subject matter (i.e., machine and process/machine). Accordingly, claims 1 and 3 satisfy Step 1 of the eligibility inquiry. 
With respect to Step 2A Prong One of the eligibility inquiry (as explained in MPEP 2106.04), it is next noted that the claims recite an abstract idea that falls under the “Mental Processes” abstract idea grouping by reciting limitations that, but for the generic computer implementation, may be implemented mentally by a human (e.g., observation, evaluation, judgment, or opinion). The limitations reciting the abstract idea as set forth in independent claim 1 are identified in bold text below, whereas the additional elements are presented in plain text and are separately evaluated under Step 2A Prong Two and Step 2B: a processor; and a memory storing a program, upon executed by the processor, to perform a process (These are additional elements evaluated below under Step 2A Prong Two and Step 2B): to acquire a first allocation result determined at a first time and a second allocation result determined at a second time later than the first time as an allocation result indicating an allocation order for a plurality of objects to be allocated, and calculate a change cost that is an amount of increase in cost when the allocation result is changed from the first allocation result to the second allocation result (The acquiring of first/second allocation results and calculating a change cost are activities that, but for the generic processor implementation, could be implemented as mental activity such as by observation, evaluation, judgment, or opinion such as by a human observing the results and mentally evaluating the change cost, even if aided by pen and paper to perform the calculation. In addition, the “acquiring” step may be considered insignificant extra-solution activity, which is not enough to amount to a practical application (MPEP 2106.05(g)), and such extra-solution activity has also been recognized as well-understood, routine, and conventional, and thus insufficient to add significantly more to the abstract idea. 
See MPEP 2106.05(d) - Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network)); to give each of the first allocation result and the second allocation result to a learning model for reward value prediction, acquire a first reward value indicating a degree of quality of the first allocation result and a second reward value indicating a degree of quality of the second allocation result from the learning model, and predict a reward value difference between the first reward value and the second reward value by subtracting the first reward value from the second reward value (The step for giving of the first/second allocation results to the model is activity that, but for the generic processor implementation, could be implemented as mental activity such as by observation, evaluation, judgment, or opinion, such as by a human writing the results with the aid of pen and paper. 
In addition, the acquiring of a first/second reward value may be considered insignificant extra-solution activity, which is not enough to amount to a practical application or to add significantly more for substantially the same reasons as noted above); to select the first allocation result or the second allocation result on a basis of a change cost calculated; and to calculate the first reward value by giving the first allocation result to a reward function, calculate the second reward value by giving the second allocation result to the reward function, and calculate a reward value difference between the first reward value and the second reward value by subtracting the first reward value from the second reward value (The selecting of the first or second allocation result and calculation of the first/second reward values and a difference between them are activities that, but for the generic processor implementation, could be implemented as mental activity such as by observation, evaluation, judgment, or opinion, such as with the aid of pen and paper), wherein the process updates the learning model so as to decrease a difference between the reward value difference that has been predicted and the reward value difference calculated (The updating of the learning model sets forth activity that, but for the generic processor implementation, could be implemented as mental activity such as by observation, evaluation, judgment, or opinion, such as with the aid of pen and paper), and the process selects the second allocation result when the reward value difference is larger than 0 and the change cost is smaller than or equal to a cost threshold, and selects the first allocation result otherwise (The selecting of the second allocation result or the first allocation result is considered activity that, but for the generic processor implementation, could be implemented as mental activity such as by observation, evaluation, judgment, or opinion to perform the selection based on
the value difference). Independent claim 3 recites similar limitations as those set forth in claim 1 as discussed above, and has therefore been determined to recite the same abstract idea as claim 1. With respect to Step 2A Prong Two of the eligibility inquiry (as explained in MPEP 2106.04(d)), the judicial exception is not integrated into a practical application. Independent claims 1 and 3 include additional elements directed to a processor and a memory storing a program executed by the processor. The additional elements have been evaluated, but fail to integrate the abstract idea into a practical application because they amount to using generic computing elements or instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment (generic computing environment). See MPEP 2106.05(f) and 2106.05(h). Even if the acquire steps are considered additional elements, this activity at most amounts to insignificant extra-solution data gathering activity accomplished via receiving/transmitting data, which is not enough to amount to a practical application. See MPEP 2106.05(g). 
Lastly, even if the learning model is interpreted as invoking machine learning and evaluated as an additional element, the (machine) learning itself is recited at a high level of generality and involves steps that, as recited, could be performed mentally or with the aid of pen and paper, does not include sufficient details of actual machine-learning that tend to show it requires anything other than generic computer implementation, and does not improve upon machine-learning, the computer, or any technology or otherwise integrate the claim into a practical application. Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amount to significantly more than the judicial exception. With respect to Step 2B of the eligibility inquiry (as explained in MPEP 2106.05), it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Independent claims 1 and 3 include additional elements directed to a processor and a memory storing a program executed by the processor. These additional elements have been evaluated, but fail to add significantly more to the claims because they amount to using generic computing elements or instructions/software to perform the abstract idea, which merely serves to tie the abstract idea to a particular technological environment (generic computing environment), similar to adding the words “apply it” (or an equivalent) (See, e.g., Spec. at par. [0021], noting for example that “The computer means hardware that executes the program, and may be, for example, a central processing unit (CPU), central processor, processing unit, computing unit, microprocessor, microcomputer, processor, or digital signal processor (DSP)”). 
Accordingly, the generic computer implementation merely serves to link the use of the judicial exception to a particular technological environment and therefore does not amount to significantly more than the abstract idea itself. See, e.g., Alice Corp., 134 S. Ct. 2347, 110 USPQ2d 1976; Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015). Even if the acquire steps are considered additional elements, this activity at most amounts to insignificant extra-solution activity accomplished via receiving/transmitting data, which is well-understood, routine, and conventional activity and thus insufficient to add significantly more to the claims. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). Lastly, even if the learning model is interpreted as invoking machine learning and evaluated as an additional element, the learning is recited at a high level of generality and involves steps that, as recited, could otherwise be performed mentally or with the aid of pen and paper, and it is further noted that machine learning models are considered well-understood, routine, and conventional in the art, and therefore does not add significantly more to the claims. See, e.g., You et al., US 2012/0191531 (par. 37: “model 514 may comprise, for example, a model obtained using any of a variety of well-known machine learning techniques”). See also, Chickering et al., US Pat. No. 
6,831,663 (col. 9, lines 53-58: “obtaining a probabilistic model 300, such as by learning or creating one using conventional machine learning techniques”). In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present as when the elements are taken individually. There is no indication that the combination of elements integrate the abstract idea into a practical application. Their collective functions merely provide generic computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application of the abstract idea or that, as an ordered combination, amount to significantly more than the abstract idea itself. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1 and 3 are rejected under 35 U.S.C. §103 as unpatentable over Wulf et al. (US 2023/0273624, hereinafter “Wulf”) in view of Mitra et al. (US 2020/0257968, hereinafter “Mitra”) in view of Wiegman (US 2021/0309392). Claims 1/3: As per claim 1, Wulf teaches an allocation result determination device (par. 8: computer-implemented method for managing task assignments) comprising: a processor; and a memory storing a program, upon executed by the processor, to perform a process (pars. 68-70 and Fig. 3: may include a central processing unit (“CPU”) 302, in the form of one or more processors, for executing program instructions. The computer 300 may include an internal communication bus 308, and a storage unit 306 (such as ROM, HDD, SDD, etc.) that may store data on a computer readable medium; computer 300 may also have a memory 304 (such as RAM) storing instructions 324 for executing techniques presented herein): to acquire a first allocation result determined at a first time and a second allocation result determined at a second time later than the first time as an allocation result indicating an allocation order for a plurality of objects to be allocated (pars. 
8-10 and 24: receiving, via one or more processors of an assignment engine system, first state data that includes state information for at least one haul truck…receiving via the one or more processors of the assignment engine system, second state data that includes state information for the mine site [i.e., first allocation result received, then second allocation result received], wherein the state information for the mine site is indicative of a plurality of tasks available [objects to be allocated] in the mine site and task material weight data associated with the plurality of tasks; assigning, via the one or more processors of the assignment engine system, at least one task from amongst the plurality of tasks to the at least one haul truck of the fleet by inputting the first state data and the second state data into a trained reinforcement-learning model [allocation result indicating an allocation order], wherein: the trained reinforcement-learning model has been trained, based on training first state data and training second state data, to learn an assignment policy that optimizes a reward function for the mine site, such that the trained reinforcement-learning model is configured to apply the learned assignment policy), and calculate a change cost… (pars. 10, 20, 58, 88, and 100: determine the cumulative value of the actions [i.e., change cost], which may be fed back into the model to reinforce behaviors that had a positive effect on the cumulative benefit and/or de-emphasize behaviors that had a negative effect); to select the first allocation result or the second allocation result on a basis of a change cost calculated (pars. 
10, 20, 58, 88, and 100: e.g., Results of the actions taken by the model are used with the reward function to determine the cumulative value of the actions [i.e., change cost], which may be fed back into the model to reinforce behaviors that had a positive effect on the cumulative benefit and/or de-emphasize behaviors that had a negative effect. In one example, environment information is, via a current version of the policy, associated with one or more actions. The actions are performed, resulting in a change to the environment; reinforcement-learning model 228 may be usable to determine an objective score after the completion of one or more assigned tasks using various factors based on data such as the mine site state information 206, and/or the haul truck state information 226 from a point in time after the tasks are complete; apply the learned assignment policy to input first state data for the fleet and second state data for the mine site to select at least one task to assign to the fleet, wherein the training first state data includes training haul truck haul weight data and the training second state data includes training task material weight data, such that the learned policy accounts for haul truck performance variance due to changing haul truck haul weight; and an assignment engine); and, wherein the process selects the second allocation result when the reward value difference is larger than 0 and the change cost is smaller than or equal to a cost threshold, and selects the first allocation result otherwise (par. 20: policy of a reinforcement-learning model is trained using a reward function, e.g., an objective classification for cumulative benefit to the system. 
Results of the actions taken by the model are used with the reward function to determine the cumulative value of the actions, which may be fed back into the model to reinforce behaviors that had a positive effect [i.e., reward value difference larger than 0] on the cumulative benefit and/or de-emphasize behaviors that had a negative effect). Wulf does not explicitly teach: calculate a change cost that is an amount of increase in cost when the allocation result is changed from the first allocation result to the second allocation result; to give each of the first allocation result and the second allocation result to a learning model for reward value prediction, acquire a first reward value indicating a degree of quality of the first allocation result and a second reward value indicating a degree of quality of the second allocation result from the learning model, and predict a reward value difference between the first reward value and the second reward value by subtracting the first reward value from the second reward value; to calculate the first reward value by giving the first allocation result to a reward function, calculate the second reward value by giving the second allocation result to the reward function, and calculate a reward value difference between the first reward value and the second reward value by subtracting the first reward value from the second reward value; the process updates the learning model so as to decrease a difference between the reward value difference that has been predicted and the reward value difference calculated. Mitra teaches: calculate a change cost that is an amount of increase in cost when the allocation result is changed from the first allocation result to the second allocation result (Mitra at par. 
61: determine a change in the state of the shared compute infrastructure 130 occurring as a result of performing the scheduling action and responsively calculate a reward or penalty [i.e., change cost] based on the change in state); to give each of the first allocation result and the second allocation result to a learning model for reward value prediction, acquire a first reward value indicating a degree of quality of the first allocation result and a second reward value indicating a degree of quality of the second allocation result from the learning model, and predict a reward value difference between the first reward value and the second reward value by subtracting the first reward value from the second reward value (pars. 31, 39, 44, 62, and 114: reinforcement learning agent attempts to maximize the received reward (or minimize the received penalty) to iteratively learn the optimized scheduling policy; reinforcement learning agent predicts a scheduling action, observes a change in a state of the shared resources (or infrastructure) occurring as a result of performing the scheduling action, calculates a reward or penalty based on the change in state, and uses the reward or penalty to further learn (train or refine) the scheduling policy to maximize future rewards or minimize future penalties; The RL-Agent 123 responsively adjusts the scheduling policy to maximize a future reward R.sub.t+1 (or minimize a negative reward) which facilitates the iterative reinforcement learning process); to calculate the first reward value by giving the first allocation result to a reward function, calculate the second reward value by giving the second allocation result to the reward function, and calculate a reward value difference between the first reward value and the second reward value by subtracting the first reward value from the second reward value (pars. 
39 and 61: e.g., calculates a reward or penalty based on the change in state, and uses the reward or penalty to further learn (train or refine) the scheduling policy to maximize future rewards or minimize future penalties; for example, the reward/penalty generation module 126 can determine a change in the state of the shared compute infrastructure 130 occurring as a result of performing the scheduling action and responsively calculate a reward or penalty based on the change in state. As discussed herein, the reward or penalty can be a summation of multiple components including at least a resource contention component, a resource over utilization component, and a scheduling delay component). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wulf with Mitra because the references are analogous since they are each directed to features for employing reinforcement learning to improve scheduling/assignment of resources, which is within Applicant’s field of endeavor of object allocations using reward values and a learning model, and because modifying Wulf with Mitra’s change cost calculation and allocation based on reward value difference, as claimed, would serve the motivation to refine or improve the reinforcement learning model (Wulf at par. 54) and improve performance of a self-learning scheduler (Mitra at par. 8); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Wulf and Mitra do not explicitly teach: the process updates the learning model so as to decrease a difference between the reward value difference that has been predicted and the reward value difference calculated. 
Wiegman teaches: the process updates the learning model so as to decrease a difference between the reward value difference that has been predicted and the reward value difference calculated (pars. 61 and 112: machine-learning model may include one or more autonomous machine-learning processes such as supervised, unsupervised, or reinforcement machine-learning; minimize the square of the difference between predicted outcomes and actual outcomes according to an appropriate norm for measuring such a difference (e.g. a vector-space distance norm); coefficients of the resulting linear equation may be modified to improve minimization). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wulf/Mitra with Wiegman because the references are analogous since they are each directed to features for employing reinforcement learning to improve scheduling/assignment actions, which is within Applicant’s field of endeavor of object allocations using reward values and a learning model, and because modifying Wulf/Mitra with Wiegman’s technique for updating a learning model to decrease a difference between a predicted and calculated reward value, as claimed, would serve the motivation to refine or improve the reinforcement learning model (Wulf at par. 54) and improve performance of a self-learning scheduler (Mitra at par. 8); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claim 3 is directed to a method for performing substantially similar limitations as those recited in claim 1 and addressed above. Wulf, in view of Mitra/Wiegman, teaches a method for performing the limitations discussed above (Wulf at par. 
19: methods and systems; See also, Mitra at par. 8: systems, methods, and non-transitory computer readable media; See also, Wiegman at par. 2: methods and systems), and claim 3 is therefore rejected using the same references and for substantially the same reasons as set forth above. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Sasaki et al. (US 2019/0087751): discloses reinforcement learning techniques for controlling objects in accordance with a minimization/maximization value function and costs related thereto (at least pars. 3-4). Kim (US 2021/0211111): discloses features for adjusting information through reinforcement learning, including adjusting weights/bias of a learning model (par. 235). Lopez Leones et al. (US 2022/0139232): discloses features for predicting flight data, including updating models to improve prediction accuracy of temporal data (pars. 49-52). D. S. Kieckbusch et al., "Negotiation Approach by Reinforcement Learning for Takeoff Sequencing Decision in Airports," 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 2019, pp. 4477-4482: discloses features for employing reinforcement learning techniques to manage air traffic flow. C. Strottmann Kern et al., "Data-driven aircraft estimated time of arrival prediction," 2015 Annual IEEE Systems Conference (SysCon) Proceedings, Vancouver, BC, Canada, 2015, pp. 727-733: discloses features for enhancing aircraft ETA predictions. S. Khanmohammadi et al., "A systems approach for scheduling aircraft landings in JFK airport," 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Beijing, China, 2014, pp. 1578-1585: discloses features for scheduling airport landings, including a framework that integrates computational intelligence techniques using an adaptive network based fuzzy inference system to predict flight delays and a fuzzy decision making procedure for scheduling the aircraft landings. 
Any inquiry of a general nature or relating to the status of this application, or concerning this communication or earlier communications from the Examiner, should be directed to Timothy A. Padot, whose telephone number is 571-270-1252. The Examiner can normally be reached Monday-Friday, 8:30-5:30. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, Brian Epstein, can be reached at 571-270-5389. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/TIMOTHY PADOT/
Primary Examiner, Art Unit 3625
02/24/2026

Prosecution Timeline

Sep 30, 2024
Application Filed
Feb 24, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586094
AUTOMATIC EXPERIENCE RESEARCH WITH A USER PERSONALIZATION OPTION METHOD AND APPARATUS
2y 5m to grant · Granted Mar 24, 2026
Patent 12586111
TRANSACTION AND RECEIPT BASED ALERT AND NOTIFICATION SYSTEM AND TECHNIQUES
2y 5m to grant · Granted Mar 24, 2026
Patent 12586118
SYSTEMS AND METHODS FOR SURPRISE OBJECT DISTRIBUTION
2y 5m to grant · Granted Mar 24, 2026
Patent 12561631
WORK MANAGEMENT SYSTEM, CALIBRATION WORK MANAGEMENT SERVER, AND CALIBRATION WORK MANAGEMENT METHOD
2y 5m to grant · Granted Feb 24, 2026
Patent 12548037
Forward Context Browsing
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
39%
Grant Probability
67%
With Interview (+28.1%)
3y 9m
Median Time to Grant
Low
PTA Risk
Based on 562 resolved cases by this examiner. Grant probability derived from career allow rate.
