Prosecution Insights
Last updated: April 19, 2026
Application No. 17/080,612

System And Method For Reinforcement-Learning Based On-Loading Optimization

Final Rejection (§101, §112)
Filed: Oct 26, 2020
Examiner: HEFLIN, BRIAN ADAMS
Art Unit: 3628
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Genpact USA Inc.
OA Round: 8 (Final)

Grant Probability: 41% (Moderate)
Projected OA Rounds: 9-10
Projected Time to Grant: 3y 1m
Grant Probability With Interview: 74%

Examiner Intelligence

Career Allow Rate: 41% (84 granted / 205 resolved; -11.0% vs TC avg)
Interview Lift: +33.4% (strong; allow rate for resolved cases with interview vs without)
Typical Timeline: 3y 1m avg prosecution
Currently Pending: 27
Career History: 232 total applications across all art units

Statute-Specific Performance

§101: 35.6% (-4.4% vs TC avg)
§103: 34.3% (-5.7% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 17.0% (-23.0% vs TC avg)
Deltas shown vs Tech Center average estimate; based on career data from 205 resolved cases.
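The headline statistics above can be cross-checked with simple arithmetic. The sketch below is illustrative only: the granted/resolved counts and the with-interview rate come from the report, while the definition of "lift" as a percentage-point difference is an assumption (the dashboard's +33.4% presumably derives from finer-grained underlying data).

```python
# Hypothetical recomputation of the examiner statistics reported above.
# Inputs (84 granted / 205 resolved, 74% with interview) are from the
# report; the lift definition is an assumption, not the vendor's formula.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

base = allow_rate(84, 205)       # career allow rate, ~41%
with_interview = 74.0            # reported allow rate with interview
lift = with_interview - base     # percentage-point lift (assumed definition)

print(f"base={base:.1f}%  lift=+{lift:.1f} pts")
```

Computed this way the base rate rounds to 41.0% and the lift lands near the reported +33 figure, which suggests the dashboard's numbers are internally consistent.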

Office Action

Rejections: § 101 and § 112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-24 were previously pending and were rejected in the previous office action. Claims 1, 6, 13, and 18 were amended. Claims 2-5, 7-12, 14-17, and 19-24 were left as originally/previously presented. Claims 1-24 are currently pending and have been examined.

Response to Arguments

Claim Rejections - 35 USC § 112

Applicant’s arguments, see pages 14-15 of Applicant’s Response, filed September 23, 2025, with respect to the rejection under 35 U.S.C. 112(a), have been fully considered but are not persuasive. Applicant argues, on pages 14-15, that paragraphs 0020 and 0022 of applicant’s specification provide support in the original application for “…consequently determines an optimal or near-optimal vehicle using a second amount of computational resources lesser than the first amount of computational resources….” Examiner respectfully disagrees. As an initial matter, with respect to newly added or amended claims, applicant should show support in the original disclosure for the new or amended claims. See, e.g., Hyatt v. Dudas, 492 F.3d 1365, 1370, n.4, 83 USPQ2d 1373, 1376, n.4 (Fed. Cir. 2007) (citing MPEP § 2163.04, which provides that a "simple statement such as ‘applicant has not pointed out where the new (or amended) claim is supported, nor does there appear to be a written description of the claim limitation ‘___’ in the application as filed’ may be sufficient where the claim is a new or amended claim, the support for the limitation is not apparent, and applicant has not pointed out where the limitation is supported."); see also MPEP §§ 714.02 and 2163.06 ("Applicant should ... 
specifically point out the support for any amendments made to the disclosure."); and MPEP § 2163.04 ("If applicant amends the claims and points out where and/or how the originally filed disclosure supports the amendment(s), and the examiner finds that the disclosure does not reasonably convey that the inventor had possession of the subject matter of the amendment at the time of the filing of the application, the examiner has the initial burden of presenting evidence or reasoning to explain why persons skilled in the art would not recognize in the disclosure a description of the invention defined by the claims."). The inquiry into whether the description requirement is met is a question of fact that must be determined on a case-by-case basis. AbbVie Deutschland GmbH & Co., KG v. Janssen Biotech, Inc., 759 F.3d 1285, 1297, 111 USPQ2d 1780, 1788 (Fed. Cir. 2014) ("Whether a patent claim is supported by an adequate written description is a question of fact."); In re Smith, 458 F.2d 1389, 1395, 173 USPQ 679, 683 (CCPA 1972) ("Precisely how close [to the claimed invention] the description must come to comply with Sec. 112 must be left to case-by-case development."); In re Wertheim, 541 F.2d 257, 262, 191 USPQ 90, 96 (CCPA 1976) (inquiry is primarily factual and depends on the nature of the invention and the amount of knowledge imparted to those skilled in the art by the disclosure); see MPEP 2163. Here, the originally filed specification at paragraph 0020 teaches that vehicle and space allocation problems can be solved; however, with an arbitrary number of different types of enclosures, arbitrary numbers of enclosures of each type, an arbitrary number of vehicle types, and arbitrary numbers of vehicles of each type, the vehicle selection and/or space allocation problems can become unsolvable for the computing system because the system run time increases exponentially with increases in the total number of variables in the problem to be solved or in the number of possible solutions that must be explored. Applicant’s originally filed specification, paragraph 0022, teaches that the machine-learning module explores different potential solutions to these two problems and, during the exploration, learns to find a feasible and potentially optimized solution in an efficient manner, i.e., without exceeding the processing and memory constraints. The learning is guided by a reward/penalty model, where decisions by the RL module that may lead to a feasible/optimized solution are rewarded and decisions that may lead to an infeasible or unoptimized solution are penalized or are rewarded less than other decisions. These paragraphs at best provide that the computer system is able to use a machine-learning model that penalizes infeasible vehicle and space decisions: the module rewards decisions that lead to an optimal solution and penalizes, or rewards less, decisions that lead to an infeasible or unoptimized solution. Merely showing that a machine-learning model is used to determine optimal solutions, with optimal solutions rewarded and non-optimal solutions penalized or rewarded less, is not enough to show that a second amount of computational resources is lesser than the first amount of computational resources (i.e., that a smaller allocation of processing power, memory, or other computing assets is used than a first allocation of processing power, memory, or other computing assets). 
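The reward/penalty scheme described in paragraph 0022 resembles a standard reinforcement-learning value update. The following is a minimal, hypothetical sketch of that idea; every name, number, and the feasibility/cost table are invented for illustration, and the application itself discloses no code.

```python
import random

# Toy reward-guided learning in the spirit of the cited paragraph 0022:
# feasible choices are rewarded, infeasible ones penalized, so the
# learned values steer later vehicle selections. All values invented.

values = {"van": 0.0, "truck": 0.0, "trailer": 0.0}   # learned action values
feasible = {"van": False, "truck": True, "trailer": True}
cost = {"van": 1.0, "truck": 2.0, "trailer": 3.0}
alpha = 0.5  # learning rate

random.seed(0)
for _ in range(200):
    vehicle = random.choice(list(values))
    # Reward model: penalize infeasible picks; reward cheaper feasible ones.
    reward = -1.0 if not feasible[vehicle] else 1.0 / cost[vehicle]
    values[vehicle] += alpha * (reward - values[vehicle])

best = max(values, key=values.get)
print(best)  # the feasible, lowest-cost vehicle ends up highest-valued
```

The sketch shows only the mechanism the examiner summarizes (reward the feasible/cheap, penalize the infeasible); it says nothing about the resources consumed, which is the gap the rejection identifies.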
Thus, applicant’s arguments are not persuasive, and applicant’s limitation fails to comply with the written description requirement under 35 U.S.C. 112(a).

Claim Rejections - 35 USC § 101

Applicant’s arguments, see pages 15-17 of Applicant’s Response, filed September 23, 2025, with respect to the 35 U.S.C. § 101 rejection of Claims 1-24, have been fully considered but are not persuasive. First, Applicant argues, on pages 15-16, that amended Independent Claims 1, 6, 13, and 18 do not fall within the "certain methods of organizing human activity" grouping under the revised Step 2A Prong One framework. Examiner respectfully disagrees. The courts have explained that the sub-groupings within the organizing-human-activity grouping encompass both activity of a single person (for example, a person following a set of instructions or a person signing a contract online) and activity that involves multiple people (such as a commercial interaction); thus, certain activity between a person and a computer (for example, a method of anonymous loan shopping that a person conducts using a mobile phone) may fall within the "certain methods of organizing human activity" grouping. The number of people involved in the activity is not dispositive as to whether a claim limitation falls within this grouping. Instead, the determination should be based on whether the activity itself falls within one of the sub-groupings, see MPEP 2106.04(a)(2)(II). 
Examiner respectfully notes the specific limitations that fall within the subject matter groupings of the abstract idea. Independent Claims 1 and 6 recite “obtaining a specification of a load comprising a plurality of enclosures of a plurality of enclosure types,” “obtaining specifications of a plurality of vehicles of a plurality of vehicle types,” “training a model using a set of training data in an environment for simulating vehicle loading, wherein the set of training data is generated from a state of the environment, an observation of the environment, and a reward received from the environment, the state includes numbers and types of available vehicles in the environment, the observation includes numbers and types of enclosures remaining to be loaded, and the reward is related to a cost of the selected vehicle, and the reward is used to guide the model to learn an action policy,” “selecting from the plurality of vehicles, a vehicle for transporting or storing the load using the model,” “determining available space of the plurality of vehicles, each available space including one or more discontiguous volumes and associated weight capacity,” “determining one or more candidate vehicles from (i) matching three-dimensional (3D) coordinates of available space of the plurality of vehicles to sizes and orientations of each object in a portion of the load, and (ii) matching other specifications between the portion of the load and each candidate vehicle, wherein the candidate vehicle has space available to accommodate at least the portion of the load, and matching the 3D coordinates includes identifying and removing any matching but inaccessible space,” “wherein determining an optimal candidate vehicle requires a first amount of computational resources to computationally evaluate all possible combinations of each vehicle, sizes and orientations of the load, and other specifications associated with the load and each vehicle,” “identifying an 
immediate reward and a long-term consequence of taking each candidate vehicle, the long-term consequence including expected rewards represented by future expected states and expected vehicle selections,” “selecting the vehicle from the one or more candidate vehicles using the action policy based on the immediate reward and long-term consequence of taking each candidate vehicle,” “wherein the selecting prioritizes the vehicle and the expected vehicle selections while penalizing other suboptimal vehicle selection combinations,” “wherein the model, through penalizing the other suboptimal vehicle selection combinations using reward-guided learning, requires evaluation of fewer than all possible combinations of each vehicle, sizes and orientations of the load, and other specifications associated with the load and each vehicle, and consequently determines an optimal or near-optimal vehicle using a second amount of computational resources lesser than the first amount of computational resources,” “in response to selecting the vehicle, updating the set of training data by updating the reward, the state and the observation of the environment based on a remaining load,” “using the selected vehicle in model training of space allocation to allocate space of the selected vehicle to the load,” and “repeating steps (c)-(f) to simultaneously (i) maximize a first cumulative reward to optimize vehicle selection and (ii) maximize a second cumulative reward to optimize space allocation, wherein the first and second maximized cumulative reward indicates an optimal vehicle selection or space allocation achieved in the environment with arbitrary states, observations, and rewards,” step(s)/function(s) are merely certain methods of organizing human activity: fundamental economic practices or principles, commercial or legal interactions (e.g., business relations) and/or managing personal behavior or relationships or interactions between people (e.g., including social activities and/or 
following rules or instructions). Independent Claims 13 and 18 recite “obtaining a specification of a load comprising a plurality of enclosures of a plurality of enclosure types,” “obtaining a specification comprising a set of dimensions representing a plurality of spaces within the vehicle that are available for on-loading,” “training a model using a set of training data in an environment for simulating loading of the vehicle, wherein the set of training data is generated from a state of the environment, an observation of the environment, a reward received from the environment, and the state includes dimensions of a chosen enclosure, a current filled state of the vehicle and available space in the vehicle, the observation includes numbers and types of enclosures remaining to be loaded, and the reward is related to a change of placement spaces within the vehicle, and the reward is used to guide the model to learn an action policy,” “selecting a location within the plurality of spaces for placement using the model,” “determining available space of the plurality of spaces, each available space including one or more discontiguous volumes and associated weight capacity,” “determining one or more candidate location selections from (i) tracking the location within the plurality of spaces in the vehicle, (ii) identifying and removing any matching but inaccessible space, and (iii) constructing three-dimensional coordinates of remaining available space of the vehicle for placement of remaining enclosures based on one or more of position, orientation, and alignment of the chosen enclosure,” “wherein determining an optimal candidate location requires a first amount of computational resources to computationally evaluate all possible combinations of each location, sizes, and orientations of the load, and other specifications associated with the load and each location,” “identifying an immediate reward and a long-term consequence of selecting each candidate location, the 
long-term consequence including expected rewards represented by future expected states and expected location selections,” “selecting the location from the one or more candidate locations using the action policy based on the immediate reward and long-term consequence of selecting each candidate location,” “wherein the selecting prioritizes the location and the expected location selections while penalizing other suboptimal location selection combinations,” “wherein the model, through penalizing the other suboptimal location selection combinations using reward-guided learning, requires evaluation of fewer than all possible combinations of each location, sizes and orientations of the load, and other specifications associated with the load and each location, and consequently determines an optimal or near-optimal location using a second amount of computational resources lesser than the first amount of computational resources,” “in response to selecting the location, updating the set of training data by updating the reward, the state, and the observation of the environment based on determining one or more position, orientation, and alignment of enclosures remaining to be loaded,” “using the selected location in model training of vehicle allocation to allocate the vehicle,” and “repeating the steps (c)(A)-(c)(D) to simultaneously (i) maximize a first cumulative reward to optimize space selection and (ii) maximize a second cumulative reward to optimize vehicle selection, wherein the first or second maximized cumulative reward indicates an optimal space allocation achieved in the environment with arbitrary states, observations, and rewards.” These steps/functions are merely certain methods of organizing human activity: fundamental economic practices or principles, commercial or legal interactions (e.g., business relations) and/or managing personal behavior or relationships or interactions between people (e.g., including social activities and/or following rules or 
instructions). This is similar to Credit Acceptance Corp. v. Westlake Services, where the court found that processing a credit application between a customer and a dealer, with the business relation being the relationship between the customer and the dealer during the vehicle purchase, was merely a commercial transaction, which is a form of certain methods of organizing human activity. In this case, the claims are similar to a business relationship between an entity and a customer: the entity collects vehicle information and load information, analyzes that information to identify a vehicle with available space, and selects the vehicle that has the available space for the shipment load. The entity can then continue to select an available vehicle and a load that can be stored in a vehicle until a maximized, optimized cost is determined; thus, the claims are directed to the abstract idea of a business relation such as determining and matching available vehicles for storing and transporting goods (e.g., logistics and scheduling of loads with vehicles), which is used to reduce a cost for the selected vehicle. This is also similar to In re Maucorps, where the court found that using an algorithm for determining the optimal number of visits by a business representative to a client is merely a commercial or legal interaction, see MPEP 2106.04(a)(2)(II)(B). In this case, applicant’s limitations are merely commercial or legal interactions: the limitations use a machine-learning model to determine an optimal vehicle and an allocation of spaces to store physical objects, and determining an optimal number of vehicles and/or spaces for a transportation service using a machine-learning model is merely a commercial or legal interaction, thus falling within certain methods of organizing human activity. Therefore, applicant’s claims fall within at least the enumerated grouping of certain methods of organizing human activity. 
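The selection step recited in the claims, weighing an immediate reward against expected long-term consequences, corresponds to a standard value-based action choice in reinforcement learning. A hypothetical sketch (the discount factor, vehicle names, and reward values are all invented for illustration):

```python
# Choosing among candidate vehicles by immediate reward plus discounted
# expected future reward, mirroring the recited "immediate reward and
# long-term consequence" selection. All numbers are illustrative.

GAMMA = 0.9  # discount factor applied to long-term consequences

candidates = {
    # vehicle: (immediate_reward, expected_future_reward)
    "small_truck": (0.8, 0.2),   # cheap now, poor fit for the remaining load
    "large_truck": (0.5, 0.9),   # costlier now, better long-term fit
}

def action_value(immediate: float, future: float) -> float:
    """Immediate reward plus discounted expected future reward."""
    return immediate + GAMMA * future

best = max(candidates, key=lambda v: action_value(*candidates[v]))
print(best)
```

With these invented numbers the long-term term dominates, so the policy picks the vehicle that is worse immediately but better over the whole loading episode.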
Also, even assuming, arguendo, that applicant’s argument that the claims cannot be grouped under certain methods of organizing human activity has some merit, the courts have provided that, when determining whether a claim recites a mathematical concept (i.e., mathematical relationships, mathematical formulas or equations, and mathematical calculations), examiners should consider whether the claim recites a mathematical concept or merely limitations that are based on or involve a mathematical concept. It is also important to note that a mathematical concept need not be expressed in mathematical symbols, because "[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula." This is similar to In re Maucorps, where the court found that using an algorithm for determining the optimal number of visits by a business representative to a client was a mathematical calculation. Here, the limitations recite a mathematical calculation when a machine-learning model is used to determine an optimal vehicle and a cost for storing a load within the available space of an available vehicle; using a machine-learning model to determine an optimal number of vehicles and the cost to transport a number of loads at best merely recites a mathematical calculation. Also, the claims merely take existing information and identify relationships to generate additional information: the focus of applicant’s claims is merely selecting certain information, analyzing that information, and then outputting results based on that information to reduce a price for the selected vehicle, which is, at the very least, training mathematical models by identifying relationships among numerical data and using the outputs of those models to price a selected vehicle. Therefore, the claims merely take a set of numerical data points and analyze them to create model(s) from what is, at the very least, numerical and financial data (e.g., rewards), and are thus abstract. 
See, e.g., organizing information and manipulating information through mathematical correlations, Digitech Image Techs., LLC v. Electronics for Imaging, Inc., 758 F.3d 1344, 1350, 111 USPQ2d 1717, 1721 (Fed. Cir. 2014); using a formula to convert geospatial coordinates into natural numbers, Burnett v. Panasonic Corp., 741 Fed. Appx. 777, 780 (Fed. Cir. 2018) (non-precedential); and MPEP 2106.04(a)(2). Thus, examiner disagrees with applicant’s arguments, and applicant’s claims fall within at least the enumerated grouping of mathematical concepts. Second, applicant argues, on pages 16-17 of applicant’s arguments, that the claimed invention is now integrated into a practical application. Examiner respectfully disagrees. As an initial matter, it is important to note that the specification should first be evaluated to determine whether the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but does so in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine that the claim improves technology. Second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement; that is, the claim must include the components or steps of the invention that provide the improvement described in the specification. The claim itself does not need to explicitly recite the improvement described in the specification (e.g., "thereby increasing the bandwidth of the channel"), see MPEP 2106.04(d)(1). 
An important consideration in determining whether a claim improves technology is the extent to which the claim covers a particular solution to a problem or a particular way to achieve a desired outcome, as opposed to merely claiming the idea of a solution or outcome. McRO, 837 F.3d at 1314-15, 120 USPQ2d at 1102-03; DDR Holdings, 773 F.3d at 1259, 113 USPQ2d at 1107. In this respect, the improvement consideration overlaps with other considerations, specifically the particular machine consideration (see MPEP § 2106.05(b)) and the mere instructions to apply an exception consideration (see MPEP § 2106.05(f)). Thus, evaluation of those other considerations may assist examiners in determining whether a claim satisfies the improvement consideration. Here, the specification discloses reinforcement learning (RL), in which a machine-learning module explores different potential solutions to these two problems and, during the exploration, learns to find a feasible and potentially optimized solution in an efficient manner, i.e., without exceeding the processing and memory constraints, see applicant’s specification paragraph 0022. This is at best an improvement to the abstract idea itself rather than a technological improvement. First, the steps for accomplishing this desired improvement are stated in the specification in a blanket, conclusory manner: the specification merely provides that the machine-learning module explores different potential solutions to these two problems and, during the exploration, learns to find a feasible and potentially optimized solution in an efficient manner, paragraph 0022. When the specification states the improvement in a conclusory manner, the examiner should not determine that the claim improves technology. 
Similar to Affinity Labs v. DIRECTV, the court has held that the use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. Here, applicant’s limitations of obtaining, obtaining, training, generating, learning, selecting, determining, determining, evaluating, matching, matching, identifying, selecting, prioritizing, penalizing, determining, updating, and repeating, respectively, merely select available vehicles for transporting and/or storing goods using computer components that operate in their ordinary capacity (e.g., a computing system, a machine learning model, a processor, a memory, and an agent module), which is no more than “applying” the judicial exception. 
Examiner further notes that a recitation of claim limitations that attempts to cover any solution to an identified problem, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more, because this type of recitation is equivalent to the words "apply it." See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). In this case, applicant has merely claimed the result of solving the problem: applicant has merely provided that vehicles are selected based on determining space allocation using a machine-learning model to reduce computational resources. At best, the benefits may be achieved by the use of a computer and/or module for making these determinations. However, there is nothing in the claims or the specification as to how the computer is able to reduce computational resources, or how that reduction keeps the computer from running out of memory under its memory constraints. The limitations as currently claimed are merely result-oriented, which is the equivalent of the words "apply it." Furthermore, similar to Intellectual Ventures I LLC v. Capital One Bank, the court provided that merely "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept. 
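The disputed "first amount versus second amount of computational resources" contrast can be made concrete by counting evaluations: exhaustive search scores every vehicle/orientation/item combination, while a policy that prunes low-reward branches scores fewer. The sketch below is purely illustrative; the counts, the threshold, and the stand-in scoring function are invented and do not come from the application or the Office Action.

```python
from itertools import product

# Count candidate evaluations for a toy load: exhaustive search vs. a
# policy that skips combinations below a learned reward threshold.
# All numbers and the scoring function are illustrative placeholders.

vehicles = range(4)
orientations = range(6)
items = range(5)

# "First amount": evaluate every combination.
exhaustive = sum(1 for _ in product(vehicles, orientations, items))

def learned_score(v: int, o: int, i: int) -> float:
    """Stand-in for a reward-guided policy's score of a combination."""
    return (v + o + i) / 12.0

# "Second amount": evaluate only combinations the policy keeps.
pruned = sum(1 for v, o, i in product(vehicles, orientations, items)
             if learned_score(v, o, i) >= 0.5)

print(exhaustive, pruned)
```

The point of contention is that the claims assert the smaller second amount as a result; a disclosure would need to explain the pruning mechanism itself, which this toy threshold merely stands in for.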
In this case, the judicial exception is not integrated into a practical application where model and computational efficiency are improved by training a model to learn to solve vehicle selection and space allocation, see applicant’s specification paragraphs 0002, 0006, and 0022: appending generic computer functionality that merely lends speed or efficiency to the performance of an abstract concept does not meaningfully limit the claims. As a whole, applicant’s limitations merely describe how to generally “apply” the concepts of an existing process of determining and selecting available vehicle spaces for loading packages, and thus at best are mere instructions to apply the exception. Therefore, applicant’s arguments are not persuasive. Third, Applicant argues, on page 17 of applicant’s arguments, that the claims are not well-understood, routine, or conventional activity and amount to significantly more than the abstract idea. Examiner respectfully disagrees. As an initial matter, although the conclusion of whether a claim is eligible at Step 2B requires that all relevant considerations be evaluated, most of these considerations were already evaluated in Step 2A Prong Two. 
Thus, in Step 2B, examiners should: (1) carry over their identification of the additional element(s) in the claim from Step 2A Prong Two; (2) carry over their conclusions from Step 2A Prong Two on the considerations discussed in MPEP §§ 2106.05(a)-(c), (e), (f), and (h); (3) re-evaluate any additional element or combination of elements that was considered to be insignificant extra-solution activity per MPEP § 2106.05(g), because if such re-evaluation finds that the element is unconventional or otherwise more than what is well-understood, routine, conventional activity in the field, this finding may indicate that the additional element is no longer considered to be insignificant; and (4) evaluate whether any additional element or combination of elements is other than what is well-understood, routine, conventional activity in the field, or simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP § 2106.05(d); see MPEP 2106.05(II). Examiner respectfully notes that in the Non-Final Office Action mailed 04/23/2025, on pages 10-17 and 24-26, Step 2B was used to analyze the additional elements previously identified under Step 2A Prong Two, which merely amounted to describing how to generally “apply” the abstract idea in a computer environment; Examiner therefore carried over the identification of, and conclusions for, the additional elements analyzed under Step 2A Prong Two, and that analysis also explained how the limitations were not an improvement to the technology. As stated above, any claim elements that were identified as insignificant extra-solution activity should be re-evaluated under Step 2B to determine whether they are well-understood, routine, and conventional. Similar to Affinity Labs v. 
DIRECTV, the court has held that the use of a computer or other machinery in its ordinary capacity for economic or other tasks, or simply adding a general purpose computer or computer components after the fact to an abstract idea, does not integrate a judicial exception into a practical application or provide significantly more; as discussed above, applicant’s limitations merely select available vehicles for transporting and/or storing goods using computer components that operate in their ordinary capacity (e.g., a computing system, a machine learning model, a processor, a memory, and an agent module), which is no more than “applying” the judicial exception. It should also be noted that, when determining whether the additional elements in a claim amount to significantly more than a judicial exception, the examiner should evaluate whether the elements define only well-understood, routine, conventional activity. In this respect, the well-understood, routine, conventional consideration overlaps with other Step 2B considerations, particularly the improvement consideration (see MPEP § 2106.05(a)), the mere instructions to apply an exception consideration (see MPEP § 2106.05(f)), and the insignificant extra-solution activity consideration (see MPEP § 2106.05(g)). Thus, evaluation of those other considerations may assist examiners in determining whether a particular element or combination of elements is well-understood, routine, conventional activity, see MPEP 2106.05(d). In this case, examiner has provided why these limitations are not sufficient to show an improvement (e.g., Affinity Labs v. 
DirecTv; Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015); and Intellectual Ventures I v. Capital One Fin. Corp.) and how the limitations amount to mere instructions to apply an exception, see the above analysis in the argument section(s). Thus, the claims do not provide an improvement to the vehicle selection optimization. Therefore, applicants’ argument is not persuasive. Claim Rejections - 35 USC § 112 The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention. Claim(s) 1-24 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA ), first paragraph, as failing to comply with the written description requirement. 
The claims contain subject matter that was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s)), at the time the application was filed, had possession of the claimed invention.

Applicant has amended Independent Claims 1, 6, 13, and 18 to recite "…consequently determines an optimal or near-optimal vehicle using a second amount of computational resources lesser than the first amount of computational resources…." Examiner respectfully notes that the specification, drawings, and original claims lack written description for determining an optimal or near-optimal vehicle using a second amount of computational resources lesser than the first amount of computational resources. Applicant's specification at best teaches using a machine learning module to learn a feasible and potentially optimized solution in an efficient manner, i.e., without exceeding the processing and memory constraints. The machine learning module can select the least costly vehicle solution. The machine learning decisions that can lead to a feasible/optimized solution are rewarded, and the decisions that may lead to an infeasible or unoptimized solution are penalized or are rewarded less than other decisions; see applicant's specification paragraphs 0005, 0020, 0022, and 0035. At most, these passages provide that the system can determine an optimal vehicle based on a reward, and that the system can make these determinations more efficiently: the system rewards optimal decisions, and decisions that are not feasible are penalized or rewarded less.

However, as pointed out above, the specification does not describe determining an optimal vehicle using a second amount of computational resources lesser than the first amount of computational resources (i.e., a smaller allocation of processing power, memory, or other computing assets than a first allocation of processing power, memory, or other computing assets). Examiner respectfully notes that applicant has not provided sufficient description of the above limitation in the specification, original claims, and/or drawings. As a result, this limitation is considered new matter. Examiner respectfully notes that Dependent Claims 2-5, 7-12, 14-17, and 19-24 are also rejected based on their dependency from Independent Claims 1, 6, 13, and 18.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 2A Prong 1: Independent Claims 1, 6, 13, and 18 recite an entity that is able to determine container types and specifications of vehicles, after which the entity selects a vehicle type and a loading location for the containers and determines a reward for the vehicle transporting the goods. The entity then continues to process the vehicle selection and/or loads until the load is accommodated within the available space of the one or more selected vehicles.
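The "first amount versus second amount of computational resources" limitation disputed above rests on a search-space argument: exhaustive evaluation must score every vehicle/orientation combination, while a reward-guided policy scores only a shortlist of candidates at each step. A minimal back-of-the-envelope sketch of that comparison follows; the function names and counts are illustrative assumptions, not drawn from the application or the record.

```python
def exhaustive_evaluations(n_vehicles: int, n_enclosures: int, n_orientations: int) -> int:
    """'First amount': brute force scores every pairing of a vehicle with every
    orientation assignment of every enclosure in the load."""
    return n_vehicles * n_orientations ** n_enclosures

def policy_guided_evaluations(n_steps: int, candidates_per_step: int) -> int:
    """'Second amount': a reward-guided policy scores only the shortlisted
    candidate vehicles at each selection step, pruning the rest."""
    return n_steps * candidates_per_step

# Hypothetical problem size: 5 vehicle types, 10 enclosures, 6 orientations each.
first = exhaustive_evaluations(n_vehicles=5, n_enclosures=10, n_orientations=6)   # 302,330,880
second = policy_guided_evaluations(n_steps=10, candidates_per_step=5)             # 50
assert second < first
```

The claim dispute is whether the original disclosure conveys this quantitative contrast, not whether the contrast itself is plausible.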
Independent Claims 1, 6, 13, and 18, as a whole, recite limitations that are directed to an abstract idea of certain methods of organizing human activity: fundamental economic practices or principles, commercial or legal interactions (e.g., business relations), and/or managing personal behavior or relationships or interactions between people (e.g., including social activities and/or following rules or instructions), and/or mathematical concepts (e.g., mathematical relationships and/or mathematical calculations).

Independent Claims 1 and 6 recite "obtaining a specification of a load comprising a plurality of enclosures of a plurality of enclosure types," "obtaining specifications of a plurality of vehicles of a plurality of vehicle types," "training a model using a set of training data in an environment for simulating vehicle loading, wherein the set of training data is generated from a state of the environment, an observation of the environment, and a reward received from the environment, the state includes numbers and types of available vehicles in the environment, the observation includes numbers and types of enclosures remaining to be loaded, and the reward is related to a cost of the selected vehicle, and the reward is used to guide the model to learn an action policy," "selecting from the plurality of vehicles, a vehicle for transporting or storing the load using the model," "determining available space of the plurality of vehicles, each available space including one or more discontiguous volumes and associated weight capacity," "determining one or more candidate vehicles from (i) matching three-dimensional (3D) coordinates of available space of the plurality of vehicles to sizes and orientations of each object in a portion of the load, and (ii) matching other specifications between the portion of the load and each candidate vehicle, wherein the candidate vehicle has space available to accommodate at least the portion of the load, and matching the 3D coordinates includes identifying and removing any matching but inaccessible space," "wherein determining an optimal candidate vehicle requires a first amount of computational resources to computationally evaluate all possible combinations of each vehicle, sizes and orientations of the load, and other specifications associated with the load and each vehicle," "identifying an immediate reward and a long-term consequence of taking each candidate vehicle, the long-term consequence including expected rewards represented by future expected states and expected vehicle selections," "selecting the vehicle from the one or more candidate vehicles using the action policy based on the immediate reward and long-term consequence of taking each candidate vehicle," "wherein the selecting prioritizes the vehicle and the expected vehicle selections while penalizing other suboptimal vehicle selection combinations," "wherein the model, through penalizing the other suboptimal vehicle selection combinations using reward-guided learning, requires evaluation of fewer than all possible combinations of each vehicle, sizes and orientations of the load, and other specifications associated with the load and each vehicle, and consequently determines an optimal or near-optimal vehicle using a second amount of computational resources lesser than the first amount of computational resources," "in response to selecting the vehicle, updating the set of training data by updating the reward, the state and the observation of the environment based on a remaining load," "using the selected vehicle in model training of space allocation to allocate space of the selected vehicle to the load," and "repeating steps (c)-(f) to simultaneously (i) maximize a first cumulative reward to optimize vehicle selection and (ii) maximize a second cumulative reward to optimize space allocation, wherein the first and second maximized cumulative reward indicates an optimal vehicle selection or space allocation achieved in the environment with arbitrary states, observations, and rewards." These steps/functions are merely certain methods of organizing human activity: fundamental economic practices or principles, commercial or legal interactions (e.g., business relations), and/or managing personal behavior or relationships or interactions between people (e.g., including social activities and/or following rules or instructions), and/or mathematical concepts (e.g., mathematical relationships and/or mathematical calculations).

Independent Claims 13 and 18 recite "obtaining a specification of a load comprising a plurality of enclosures of a plurality of enclosure types," "obtaining a specification comprising a set of dimensions representing a plurality of spaces within the vehicle that are available for on-loading," "training a model using a set of training data in an environment for simulating loading of the vehicle, wherein the set of training data is generated from a state of the environment, an observation of the environment, a reward received from the environment, and the state includes dimensions of a chosen enclosure, a current filled state of the vehicle and available space in the vehicle, the observation includes numbers and types of enclosures remaining to be loaded, and the reward is related to a change of placement spaces within the vehicle, and the reward is used to guide the model to learn an action policy," "selecting a location within the plurality of spaces for placement using the model," "determining available space of the plurality of spaces, each available space including one or more discontiguous volumes and associated weight capacity," "determining one or more candidate location selections from (i) tracking the location within the plurality of spaces in the vehicle, (ii) identifying and removing any matching but inaccessible space, and (iii) constructing three-dimensional coordinates of remaining available space of the vehicle for placement of remaining enclosures based on one or more of position, orientation, and alignment of the chosen enclosure," "wherein determining an optimal candidate location requires a first amount of computational resources to computationally evaluate all possible combinations of each location, sizes, and orientations of the load, and other specifications associated with the load and each location," "identifying an immediate reward and a long-term consequence of selecting each candidate location, the long-term consequence including expected rewards represented by future expected states and expected location selections," "selecting the location from the one or more candidate locations using the action policy based on the immediate reward and long-term consequence of selecting each candidate location," "wherein the selecting prioritizes the location and the expected location selections while penalizing other suboptimal location selection combinations," "wherein the model, through penalizing the other suboptimal location selection combinations using reward-guided learning, requires evaluation of fewer than all possible combinations of each location, sizes and orientations of the load, and other specifications associated with the load and each location, and consequently determines an optimal or near-optimal location using a second amount of computational resources lesser than the first amount of computational resources," "in response to selecting the location, updating the set of training data by updating the reward, the state, and the observation of the environment based on determining one or more position, orientation, and alignment of enclosures remaining to be loaded," "using the selected location in model training of vehicle allocation to allocate the vehicle," and "repeating the steps (c)(A)-(c)(D) to simultaneously (i) maximize a first cumulative reward to optimize space selection and (ii) maximize a second cumulative reward to optimize vehicle selection, wherein the first or second maximized cumulative reward indicates an optimal space allocation achieved in the environment with arbitrary states, observations, and rewards." These steps/functions are merely certain methods of organizing human activity: fundamental economic practices or principles, commercial or legal interactions (e.g., business relations), and/or managing personal behavior or relationships or interactions between people (e.g., including social activities and/or following rules or instructions), and/or mathematical concepts (e.g., mathematical relationships and/or mathematical calculations).

Furthermore, as explained in the MPEP and the October 2019 Update, where a series of steps recites judicial exceptions, examiners should combine all recited judicial exceptions and treat the claim as containing a single judicial exception for purposes of further eligibility analysis. See MPEP §§ 2106.04 and 2106.05(II) and the October 2019 Update at Section I.B. For instance, Independent Claims 1, 6, 13, and 18 are similar to an entity that matches vehicle types to container types and/or containers to locations within a vehicle, after which the entity loads the vehicle with the containers for a reward. The mere recitation of generic computer components (Claims 1 and 6: a computing system and a machine learning model; Claims 13 and 18: a processor, a memory, an agent module, and a machine learning model) does not take the claims out of these groupings. Therefore, Independent Claims 1, 6, 13, and 18 recite the above abstract idea(s).

Step 2A Prong 2: This judicial exception is not integrated into a practical application because the claims as a whole describe how to generally "apply" the concepts of "obtaining," "obtaining," "training," "generating," "learning," "selecting," "determining," "determining," "evaluating," "matching," "matching," "identifying," "selecting," "prioritizing," "penalizing," "determining," "updating," and "repeating" information in a computer environment.
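The training loop recited in the claims (a state, an observation, a reward from the environment, and an action policy that weighs an immediate reward against a long-term consequence) follows a standard reinforcement-learning pattern. A minimal sketch, assuming a tabular Q-learning formulation, is shown below; all names and values are illustrative and are not drawn from the application.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

def select_vehicle(q_table, state, candidates):
    """Action policy: usually pick the candidate with the best learned Q-value
    (which combines immediate reward and long-term consequence), occasionally
    exploring at random."""
    if random.random() < EPSILON:
        return random.choice(candidates)
    return max(candidates, key=lambda v: q_table[(state, v)])

def update(q_table, state, vehicle, reward, next_state, next_candidates):
    """Reward-guided learning: move Q(state, vehicle) toward the immediate reward
    plus the discounted best future value. Selections whose updates lag behind
    are, in effect, the 'penalized' suboptimal combinations."""
    future = max((q_table[(next_state, v)] for v in next_candidates), default=0.0)
    q_table[(state, vehicle)] += ALPHA * (reward + GAMMA * future - q_table[(state, vehicle)])

# One step of the loop with hypothetical states and a cost-based (negative) reward.
q = defaultdict(float)
state = ("10 enclosures remaining",)          # observation: load left to place
candidates = ["van", "box_truck", "trailer"]  # vehicles with space available
choice = select_vehicle(q, state, candidates)
update(q, state, choice, reward=-1.0,
       next_state=("7 enclosures remaining",), next_candidates=candidates)
```

Repeating the select/update step across episodes is what "maximize a cumulative reward" denotes in the claim language.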
The limitations that amount to "apply it" are as follows (Claims 1 and 6: a computing system and a machine learning model; Claims 13 and 18: a processor, a memory, an agent module, and a machine learning model). Examiner notes that the computing system, machine learning model, processor, memory, and agent module, respectively, are recited so generically that they represent no more than mere instructions to apply the judicial exception on a computer. Similar to Affinity Labs v. DirecTV, the court has held that additional elements are not integrated into a practical application and do not provide significantly more when they merely use a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply add a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation); such elements do no more than invoke computers or machinery as a tool to perform an existing process, which amounts to no more than "applying" the judicial exception. Here, the above additional elements are not integrated into a practical application and do not provide significantly more when they merely perform the obtaining, selecting, receiving, training, constructing, tracking, and repeating steps for loading containers onto a delivery vehicle, which is no more than invoking computers or machinery as a tool to perform an existing process (e.g., determining vehicles to load containers) and thus merely "applying" the judicial exception.
Also, a recitation of claim limitations that attempts to cover any solution to an identified problem, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more, because this type of recitation is equivalent to the words "apply it." See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015).

Furthermore, similar to Intellectual Ventures I LLC v. Capital One Bank, the court has provided that merely "claiming the improved speed or efficiency inherent with applying the abstract idea on a computer" does not integrate a judicial exception into a practical application or provide an inventive concept. In this case, the judicial exception is not integrated into a practical application merely because model and computational efficiency is improved by training a model to learn to solve vehicle selection and space allocation (see applicant's specification paragraphs 0002, 0006, and 0022), since appending generic computer functionality that merely lends speed or efficiency to the performance of an abstract concept does not meaningfully limit the claims. As a whole, applicant's limitations merely describe how to generally "apply" an existing process of determining and selecting available vehicle spaces for loading packages and thus, at best, are mere instructions to apply the exception. Each of the above limitations simply implements the abstract idea with no more than mere instructions to apply the exception using a generic computer component, which is not a practical application of the abstract idea.
Therefore, when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claims are directed to the above abstract idea(s).

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as noted previously, the claims as a whole merely describe a field of use, general linking, and how to generally "apply" the abstract idea in a computer environment. Thus, even when viewed as a whole, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are ineligible.

Claims 2-3, 7-11, 14-15, and 19-23: The various metrics of Dependent Claims 2-3, 7-11, 14-15, and 19-23 merely narrow the previously recited abstract idea limitations. For the reasons described above with respect to Independent Claims 1, 6, 13, and 18, these judicial exceptions are not meaningfully integrated into a practical application, nor are they significantly more than an abstract idea.

Claims 4 and 16: The additional limitations of "constructing," "choosing," "selecting," and "selecting" are further directed to a certain method of organizing human activity, as described for Claims 1 and 13. The recitations of "constructing a set of actions for a plurality of states of the environment," "wherein the set of actions includes at least one of choosing different types of vehicles," and "selecting two or more vehicles of the particular type, and using a combination of the different types of vehicles" are steps/functions that fall within the enumerated grouping of certain methods of organizing human activity. Similar to Affinity Labs v. DirecTV, the court has held that tasks to receive, store, or transmit data are additional elements that amount to no more than "applying" the judicial exception (MPEP § 2106.05(f)).
Here, the above additional elements merely construct, choose, and select information, which is no more than "applying" the judicial exception. Therefore, for the reasons described above with respect to Claims 4 and 16, the judicial exception is not meaningfully integrated into a practical application, nor is it significantly more than the abstract idea.

Claims 5, 12, 17, and 24: The additional limitations of "training," "selecting," and "updating" are further directed to a certain method of organizing human activity, as described for Claims 1, 6, 13, and 18. The Q-learning module and artificial neural network (ANN), respectively, are recited so generically that they represent no more than mere instructions to apply the judicial exception on a computer. The recitation of "at least training, selecting, and updating steps in (c)-(e) are implemented through a module or network" is a step/function that falls within the enumerated grouping of certain methods of organizing human activity. Similar to Affinity Labs v. DirecTV, the court has held that tasks to receive, store, or transmit data are additional elements that amount to no more than "applying" the judicial exception (MPEP § 2106.05(f)). Here, the above additional elements merely train, select, and update information, which is no more than "applying" the judicial exception. See also a commonplace business method or mathematical algorithm being applied on a general-purpose computer, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 223, 110 USPQ2d 1976, 1983 (2014); Gottschalk v. Benson, 409 U.S. 63, 64, 175 USPQ 673, 674 (1972); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); and the use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more, Affinity Labs v. DirecTV, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016); MPEP § 2106.05(f). Therefore, for the reasons described above with respect to Claims 5, 12, 17, and 24, the judicial exception is not meaningfully integrated into a practical application, nor is it significantly more than the abstract idea.

Dependent Claims 2-5, 7-12, 14-17, and 19-24 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements in the dependent claims are no more than mere instructions to apply the exception using generic computer components, which does not provide an inventive concept. Therefore, Claims 1-24 are not patent eligible.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Kim (US 2025/0061387 A1). Kim teaches that the system can determine the length, width, and height of the package, as well as the size of the storage space (such as its length, width, and height). The system can also determine the weather that surrounds the storage space. The system can then determine an optimal place to store the package in the vehicle based on comparing the size of the loading space and the size of the package. However, Kim doesn't explicitly teach selecting a vehicle of a particular type from a plurality of vehicles using a machine learning model in an environment.
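The "immediate reward and long-term consequence" language addressed in the Q-learning dependent claims maps onto the standard tabular Q-learning update rule, reproduced below for reference (this is the textbook formulation, not language quoted from the application):

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \bigl[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \bigr]
```

Here the term \(r_{t+1}\) is the immediate reward, while the discounted \(\gamma \max_{a'} Q(s_{t+1}, a')\) term carries the expected long-term consequence; actions whose values repeatedly fall behind are, in effect, the penalized suboptimal selections.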
Kim also doesn't explicitly teach: that the machine learning model is trained based on an observation of the environment; that the observation includes numbers and types of enclosures remaining to be loaded and the reward is related to a cost of the selected vehicle; receiving a current reward from the environment and updating the state, the observation of the environment, and the additional constraints in response to selecting the vehicle, and then training the machine learning model based on the current reward from the environment, the updated state, the observation state after selecting, and the additional constraints; repeating the steps of selecting the vehicle, receiving the current reward, updating the state and the observation, and training the machine learning model based on the current reward, the updated state, the observation, and the additional constraints until the load is accommodated in the space of the vehicle; matching three-dimensional coordinates of available space of the plurality of vehicles to sizes and orientations of each object and removing any matching but inaccessible space; or determining an optimal or near-optimal vehicle using a second amount of computational resources lesser than the first amount of computational resources. Examiner respectfully notes that this reference's prior art date fails to predate applicant's priority date.

Bhat et al. (US 2021/0192429 A1). Bhat et al. teaches a vehicle that includes a cargo space. The vehicle's cargo space includes infrared sensors for determining points on the surfaces of items. The system is able to determine an optimal arrangement of the items in the cargo space of the vehicle based on the infrared sensors detecting fixed spatial item positions within the vehicle's cargo space.
However, Bhat et al. doesn't explicitly teach: selecting a vehicle of a particular type from a plurality of vehicles using a machine learning model in an environment; that the machine learning model is trained based on an observation of the environment; that the observation includes numbers and types of enclosures remaining to be loaded and the reward is related to a cost of the selected vehicle; receiving a current reward from the environment and updating the state, the observation of the environment, and the additional constraints in response to selecting the vehicle, and then training the machine learning model based on the current reward from the environment, the updated state, the observation state after selecting, and the additional constraints; repeating the steps of selecting the vehicle, receiving the current reward, updating the state and the observation, and training the machine learning model based on the current reward, the updated state, the observation, and the additional constraints until the load is accommodated in the space of the vehicle; matching three-dimensional coordinates of available space of the plurality of vehicles to sizes and orientations of each object and removing any matching but inaccessible space; or determining an optimal or near-optimal vehicle using a second amount of computational resources lesser than the first amount of computational resources.

Grob et al. (US 2023/0214951 A1). Grob et al. teaches selecting an optimal configuration for an asset inside of a vehicle. The system can take into account location (i.e., three-dimensional coordinates along a surface), size, and orientation information of the asset.
The system can match the dimensions of the packages to the dimensions of the loading area in each of the configurations. However, Grob et al. doesn't explicitly teach: selecting a vehicle of a particular type from a plurality of vehicles using a machine learning model in an environment; that the machine learning model is trained based on an observation of the environment; that the observation includes numbers and types of enclosures remaining to be loaded and the reward is related to a cost of the selected vehicle; receiving a current reward from the environment and updating the state, the observation of the environment, and the additional constraints in response to selecting the vehicle, and then training the machine learning model based on the current reward from the environment, the updated state, the observation state after selecting, and the additional constraints; repeating the steps of selecting the vehicle, receiving the current reward, updating the state and the observation, and training the machine learning model based on the current reward, the updated state, the observation, and the additional constraints until the load is accommodated in the space of the vehicle; matching three-dimensional coordinates of available space of the plurality of vehicles to sizes and orientations of each object and removing any matching but inaccessible space; or determining an optimal or near-optimal vehicle using a second amount of computational resources lesser than the first amount of computational resources.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN A HEFLIN whose telephone number is (571)272-3524. The examiner can normally be reached 7:30 - 5:00 M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeff Zimmerman can be reached at (571) 272-4602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /B.A.H./Examiner, Art Unit 3628 /MICHAEL P HARRINGTON/Primary Examiner, Art Unit 3628

Prosecution Timeline

Oct 26, 2020
Application Filed
Dec 15, 2022
Non-Final Rejection — §101, §112
Feb 27, 2023
Response Filed
Jun 01, 2023
Final Rejection — §101, §112
Sep 05, 2023
Request for Continued Examination
Sep 06, 2023
Response after Non-Final Action
Sep 12, 2023
Non-Final Rejection — §101, §112
Nov 06, 2023
Examiner Interview Summary
Nov 06, 2023
Applicant Interview (Telephonic)
Dec 19, 2023
Response Filed
Dec 29, 2023
Final Rejection — §101, §112
Jun 07, 2024
Request for Continued Examination
Jun 10, 2024
Response after Non-Final Action
Jun 27, 2024
Non-Final Rejection — §101, §112
Oct 02, 2024
Response Filed
Oct 19, 2024
Final Rejection — §101, §112
Jan 27, 2025
Applicant Interview (Telephonic)
Jan 28, 2025
Examiner Interview Summary
Feb 26, 2025
Applicant Interview (Telephonic)
Feb 28, 2025
Examiner Interview Summary
Mar 11, 2025
Request for Continued Examination
Mar 12, 2025
Response after Non-Final Action
Apr 17, 2025
Non-Final Rejection — §101, §112
Sep 23, 2025
Response Filed
Oct 04, 2025
Final Rejection — §101, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586066
FREIGHT MANAGEMENT SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12567007
AUTOMATED REMOTE TRANSACTIONS BETWEEN A VEHICLE AND A LODGING SYSTEM
2y 5m to grant Granted Mar 03, 2026
Patent 12567019
Container Device And Delivery Systems For Using The Same
2y 5m to grant Granted Mar 03, 2026
Patent 12547971
DISPENSING AND TRACKING SYSTEM
2y 5m to grant Granted Feb 10, 2026
Patent 12505404
DIVIDE-AND-CONQUER FRAMEWORK AND MODULARIZED ALGORITHMIC SCHEME FOR LARGE-SCALE OPTIMIZATION
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
41%
Grant Probability
74%
With Interview (+33.4%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 205 resolved cases by this examiner. Grant probability derived from career allow rate.
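The projections above follow directly from the examiner's career figures: 84 grants out of 205 resolved cases yields the 41% baseline, and adding the observed +33.4-point interview lift produces the 74% with-interview probability. A minimal sketch of that arithmetic (the function name is hypothetical; the dashboard's actual model may weight cases differently):

```python
def grant_probability(granted: int, resolved: int,
                      interview_lift_pts: float = 0.0) -> float:
    """Career allow rate (percent) plus an additive interview lift, capped at 100."""
    base = 100.0 * granted / resolved
    return min(base + interview_lift_pts, 100.0)

print(round(grant_probability(84, 205), 1))        # baseline allow rate, ~41.0
print(round(grant_probability(84, 205, 33.4), 1))  # with interview, ~74.4
```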
