Prosecution Insights
Last updated: April 19, 2026
Application No. 18/276,952

PREDICTION AND PLANNING FOR MOBILE ROBOTS

Final Rejection — §101, §103
Filed: Aug 11, 2023
Examiner: PATEL, MANGLESH M
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Five AI Limited
OA Round: 2 (Final)
Grant Probability: 74% — Favorable
Expected OA Rounds: 3-4
Time to Grant: 3y 11m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 74% — above average (513 granted / 691 resolved; +22.2% vs TC avg)
Interview Lift: +18.3% — strong (resolved cases with interview)
Typical Timeline: 3y 11m avg prosecution; 31 currently pending
Career History: 722 total applications across all art units

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 38.4% (-1.6% vs TC avg)
§102: 25.4% (-14.6% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Deltas are relative to the Tech Center average estimate • Based on career data from 691 resolved cases
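As a quick consistency check, the four statute deltas above all imply the same Tech Center baseline. A minimal sketch (variable names are illustrative, not from the tool):

```python
# Hypothetical sanity check, derived only from the figures shown above:
# each statute's Tech Center average can be recovered as
# examiner_rate - delta_vs_tc_avg.
rates = {            # statute: (examiner rate %, delta vs TC avg %)
    "101": (15.7, -24.3),
    "103": (38.4, -1.6),
    "102": (25.4, -14.6),
    "112": (10.5, -29.5),
}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% TC average estimate
```

That every delta resolves to the same 40.0% suggests the tool compares all statutes against a single Tech Center estimate rather than per-statute averages.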

Office Action — §101, §103
DETAILED ACTION

This FINAL action is responsive to the amendment filed 12/1/2025. Claims 1-20 remain pending. Claims 1, 14-15 and 18-20 are the independent claims. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Withdrawn Rejections

4. The 35 U.S.C. 101 abstract idea rejection of claims 1-17 has been withdrawn in light of the amendment.

5. The 35 U.S.C. 103 rejection of claims 18-20 has been withdrawn in light of the persuasive arguments.

Claim Rejections - 35 USC § 101

6. 35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

7. Claims 18-20 remain rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more. The determination of whether a claim recites patent-ineligible subject matter is a two-step inquiry:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), see MPEP 2106.03; or

STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis (see MPEP 2106.04):

STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon? See MPEP 2106.04(II)(A)(1).

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? See MPEP 2106.04(II)(A)(2) and 2106.05(a) through (d).

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
See MPEP 2106.05.

101 Analysis – Step 1

Claim 18 is directed to "A method…" (process). Claim 19 is directed to "A computer device…" (machine). Claim 20 is directed to "A computer program product…" (composition of matter). Therefore, the claims are within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis, the claims are analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. See MPEP 2106(A)(II)(1) and MPEP 2106.04(a)-(c).

Independent claim 18 includes limitations that recite an abstract idea (emphasized below, with the category of abstract idea in brackets). Independent claims 19-20 recite similar subject matter and are rejected under the same rationale.

Claim 18: A method of training a computer implemented behaviour model for predicting actions of an actor vehicle agent in a vehicular scenario, wherein the behaviour model is configured to recognise very low probability events occurring in the vehicular scenario, wherein the training includes: applying input training data to a computer implemented machine learning system [MPEP 2106.05(f) Mere Instructions to Apply an Exception], the training data being sourced from a data set collected in a context in which such very low probability events are the only source of collected data of the data set [MPEP 2106.05(g) Insignificant Extra-Solution Activity, data gathering, pre-solution activity], wherein the computer implemented machine learning system is configured as a classifier [mathematical concept], whereby the trained model recognises such low probability events in the vehicular scene [MPEP 2106.05(f) Mere Instructions to Apply an Exception].

The Examiner submits that the foregoing bolded limitations constitute a "mathematical concept".
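Read as engineering rather than law, the claim-18 limitations describe fitting a classifier on data drawn exclusively from very-low-probability events, so that the trained model recognises such events later. A minimal, hypothetical sketch of that reading (the names and the nearest-centroid approach are illustrative assumptions, not the applicant's disclosed implementation):

```python
# Hypothetical sketch only: a classifier trained solely on rare-event data
# recognises such events at inference time. Not the applicant's code.
def train_rare_event_classifier(rare_event_features):
    # "training data sourced from a data set in which very low probability
    # events are the only source": learn a per-feature reference profile.
    n = len(rare_event_features)
    dims = len(rare_event_features[0])
    centroid = [sum(x[d] for x in rare_event_features) / n for d in range(dims)]

    def classify(x, threshold=1.0):
        # flag as a recognised low-probability event when the input lies
        # close to the learned rare-event profile
        dist = sum((a - b) ** 2 for a, b in zip(x, centroid)) ** 0.5
        return dist <= threshold

    return classify

clf = train_rare_event_classifier([[0.0, 1.0], [0.2, 0.8]])
print(clf([0.1, 0.9]))  # -> True (resembles the rare-event training data)
print(clf([5.0, 5.0]))  # -> False
```

The sketch makes the examiner's characterization concrete: the trained artifact is a mathematical decision rule over feature vectors, which is the sense in which "probability classification" is being called a mathematical concept.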
The claims recite using a plurality of agent models, applying a weighting function, and configuring the machine learning system as a classifier, which fall under mathematical concepts that use probabilistic reasoning involving mathematical and statistical operations. The claims define a mathematical model (a classifier) that is defined by its function of recognizing probability-based events. Probability classification is mathematical modeling and falls under the mathematical concepts grouping. Accordingly, the claim recites at least one abstract idea.

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis, the claims are analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. See MPEP 2106.04(II)(A)(2) and MPEP 2106.04(d)(2). It must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a practical application. In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the "additional limitations", while the bolded portions continue to represent the "abstract idea"). For the following reasons, the Examiner submits that the above-identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitation of "training data being sourced from a data set", the Examiner submits that this limitation is an insignificant extra-solution activity that merely collects data for use in training a model and comprises data gathering (pre-solution activity). Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.

Further, looking at the additional limitations as an ordered combination or as a whole, the limitations add nothing that is not already present when looking at the elements taken individually. For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field; apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; implement or use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim; effect a transformation or reduction of a particular article to a different state or thing; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is no more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
101 Analysis – Step 2B

Regarding Step 2B of the Revised Guidance, the claims do not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claims do not integrate the abstract idea into a practical application. As discussed above, the additional elements of "applying input training data to a computer" and "whereby the trained model recognizes" amount to nothing more than mere instructions to apply the exception using a generic computer component. For example, applying training data and model recognition falls under generic application of machine learning using a computer. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept, and as discussed above these limitations are insignificant extra-solution activities. See MPEP 2106.05(d)(II) and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 610 (Fed. Cir. 2016); and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015); in addition to collecting information, analyzing it, and displaying certain results of the collection and analysis (Electric Power Group), and collecting data, recognizing certain data within the collected data set, and storing the recognized data in memory (Content Extraction).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

8. Claims 1-17 remain rejected under 35 U.S.C. 103 as being unpatentable over Sapp (U.S. Pub 2019/0302767, filed Mar. 28, 2018) in view of Rosman (U.S. Pub 2020/0086863, filed Sep. 13, 2018).

Regarding independent claims 1, 14 and 15, Sapp discloses a method implemented by an ego agent in a vehicular scenario of predicting actions of one or more actor agents in the scenario, the method comprising: for each actor agent, using a plurality of agent models to generate a set of candidate futures, each candidate future providing an expected action of the actor agent (see paragraphs 17-34 and abstract, disclosing use of a temporal prediction model to determine semantic intents of that agent corresponding to candidate trajectories, and further describing in paragraph 63 the use of multiple machine-learned models to generate the intents corresponding to a candidate trajectory.
Further disclosing multiple mechanisms [hard-coded rules, heatmaps, ML models] to generate multiple candidate trajectories, each corresponding to an expected agent intent/action); applying a weighting function to each candidate future to indicate its relevance in the scenario (see abstract and paragraphs 20, 27 and 35, disclosing that the candidate trajectories are associated with weights representing the likelihood of performing an intent); selecting for each actor agent a group of candidate futures based on the indicated relevance, wherein the plurality of agent models comprises a first model representing a rational goal-directed behaviour inferable from the vehicular scenario, and at least one second model representing an alternate behaviour not inferable from the vehicular scenario (see paragraphs 21-29, disclosing selecting different trajectories for the agent based on the determined weight; further disclosing in paragraph 19 that the models used to determine intents include multiple models, including a first model representing goal-directed behaviour inferred from a scene, such as crosswalk-related semantic intents; intents are generated from observed agent/environment attributes via position, velocity, and road-geometry classifications, which are all grounded in observable scenario data); planning a driving decision based on the selected group of candidate futures for at least one actor agent (see paragraphs 21-22 and 28-31, disclosing planning system 326 using the selected candidate trajectories with weights to determine a vehicle trajectory via driving decisions); generating one or more control signals based on the driving decision (see paragraph 65, disclosing that system controllers 328 generate signals controlling steering, propulsion, braking, etc.); and controlling the behaviour of the ego agent based on the one or more control signals (see paragraph 65, disclosing that vehicle 302 is navigated along a trajectory via system controllers commanding drive modules).
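The claim-1 steps mapped above (agent models → candidate futures → weighting function → relevance-based selection) can be sketched in outline. Everything below is a hypothetical illustration of the claimed pipeline, not code from Sapp, Rosman, or the application:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CandidateFuture:
    action: str        # expected action of the actor agent
    relevance: float   # weight assigned by the weighting function

def predict_and_select(
    agent_models: List[Callable[[Dict], str]],  # first: goal-directed; others: alternate behaviour
    scenario: Dict,
    weight: Callable[[str, Dict], float],
    top_k: int = 2,
) -> List[CandidateFuture]:
    # one candidate future per agent model
    candidates = [CandidateFuture(m(scenario), 0.0) for m in agent_models]
    # apply the weighting function to indicate each candidate's relevance
    for c in candidates:
        c.relevance = weight(c.action, scenario)
    # select a group of candidate futures based on the indicated relevance
    return sorted(candidates, key=lambda c: c.relevance, reverse=True)[:top_k]

# toy usage with two hypothetical agent models
rational = lambda s: "follow_lane"   # behaviour inferable from the scenario
erratic = lambda s: "sudden_brake"   # alternate, non-inferable behaviour
w = lambda action, s: 0.9 if action == "follow_lane" else 0.1
selected = predict_and_select([rational, erratic], {}, w, top_k=1)
print(selected[0].action)  # -> follow_lane
```

The dispute in the rejection maps onto the `agent_models` list: Sapp is cited for the plurality of models and weighting, while Rosman is cited for the second, latent/non-inferable behaviour model.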
Sapp supports use of multiple machine learning models to determine the intent of agents for different trajectories, but fails to explicitly disclose a model comprising non-inferable behaviour of agents. Rosman discloses detecting road agent behaviours that are unobserved (latent aspects) via trained models (see paragraphs 16-17), and further describes in paragraphs 79-80 accounting for latent/unseen influences in the environment; he therefore teaches latent agent conditioning that addresses non-inferable influences. Further, in paragraph 55 he describes a rational/observable agent model that is a separate layer from the latent/non-inferable model of paragraphs 52, 56 and 57, and implies in paragraph 52 sampling of latent variables separately from the state/action trajectory updates, thus suggesting operationally distinct components within the overall model. It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to have applied models that account for unobserved/latent aspects of an agent. One motivation, as outlined by Rosman in paragraph 74, is to improve predictions in an environment.

Regarding dependent claim 2, depending from claim 1, Sapp discloses wherein the step of generating each candidate future is carried out by a prediction component of the ego agent which provides each expected action at a prediction time step (see paragraph 37, including the explanation provided for the independent claim).

Regarding dependent claim 3, depending from claim 1, Sapp discloses transmitting the candidate futures to a planner of the ego agent (see paragraph 12, including the explanation provided for the independent claim).

Regarding dependent claim 4, depending from claim 1, Sapp discloses wherein the candidate futures are generated by a joint planner/prediction exploration method (see paragraph 12, including the explanation provided for the independent claim).
Regarding dependent claim 5, depending from claim 1, Sapp discloses wherein the step of using the agent models to generate the candidate futures comprises supplying to each agent model a current state of all actor agents in the scenario (see paragraph 22, including the explanation provided for the independent claim).

Regarding dependent claim 6, depending from claim 1, Sapp discloses supplying a history of one or more actor agents in the scenario to each agent model prior to generating the candidate futures (see paragraph 35, including the explanation provided for the independent claim).

Regarding dependent claim 7, depending from claim 1, Sapp discloses supplying sensor-derived data of the current scenario to each agent model prior to generating the candidate futures (see abstract, including the explanation provided for the independent claim).

Regarding dependent claim 8, depending from claim 2, Sapp discloses wherein the prediction time step is a predetermined time ahead of the current time when the candidate futures are generated (see paragraph 37, including the explanation provided for the independent claim).

Regarding dependent claim 9, depending from claim 1, Sapp discloses wherein the step of generating the candidate futures comprises generating the candidate futures in a given time window (see paragraph 37, including the explanation provided for the independent claim).

Regarding dependent claim 10, depending from claim 1, Sapp fails to explicitly disclose a model comprising non-inferable behaviour of agents. Rosman discloses wherein the at least one second model is selected from at least one of the following agent model types: an agent model type which represents a rational goal-directed behaviour based on inadequate or incorrect information about the scenario; an agent model type which represents unexpected actions of an actor agent; or an agent model type which models known or observed driver errors (see paragraphs 16-17).
He further describes in paragraphs 79-80 accounting for latent/unseen influences in the environment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the application to have applied models that account for unobserved/latent aspects of an agent. One motivation, as outlined by Rosman in paragraph 74, is to improve predictions in an environment.

Regarding dependent claim 11, depending from claim 1, Sapp discloses wherein each candidate future is defined as one or more trajectories for the actor agent (see abstract, including the explanation provided for the independent claim).

Regarding dependent claim 12, depending from claim 10, Sapp supports application of a probability model to smooth weights between a set of candidate trajectories (see paragraph 70). He fails to explicitly teach wherein each candidate future is defined as a raster probability density function. However, it would have been obvious to one of ordinary skill in the art to have applied a variety of probability functions, including a raster function, to improve agent classification.

Regarding dependent claim 13, depending from claim 1, Sapp discloses wherein the step of selecting candidate futures comprises using at least one of a probability score indicating the likelihood of events occurring and a significance factor indicating the significance to the ego agent of resulting outcomes (see abstract, including the explanation provided for the independent claim).

Regarding dependent claim 16, depending from claim 14, Sapp discloses when embodied in an on-board computer system of an autonomous vehicle, the autonomous vehicle comprising an on-board sensor system for capturing data comprising information about the environment of the scenario and the state of the actor agents in the environment (see paragraph 53, including the explanation provided for the independent claim).
Regarding dependent claim 17, depending from claim 16, Sapp discloses a data processing component configured to implement at least one of localisation, object detection and object tracking to provide a representation of the environment of the scenario (see paragraph 9, including the explanation provided for the independent claim).

It is noted that any citations to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See MPEP 2123.

Response to Arguments

9. Applicant's arguments filed 12/1/2025 have been considered but are not persuasive regarding the remaining rejections.

Applicant argues: "Thus, the language 'wherein the computer implemented machine learning system is configured as a classifier,' as recited in claim 18 does not recite a judicial exception in the mathematical concept group." (see pg. 10)

The Examiner respectfully disagrees: Even without mentioning gradient descent, the claims define a mathematical model (a classifier) that is defined by its function of recognizing probability-based events. Probability classification is mathematical modeling and falls under the mathematical concepts grouping even absent any recited equations.

Applicant argues: "Rosman's description of a model that makes inferences based on a scene does not disclose 'at least one second model representing an alternate behavior not inferable from the vehicle scenario' as recited in claim 1." (see pg. 12)

The Examiner respectfully disagrees: Rosman in paragraph 73 provides an example (a ball and a child) that directly demonstrates behaviour not inferable from the scene. No amount of analysis of the observable scene via road geometry and/or vehicle sensors would yield the child's behaviour.
Rosman's model is therefore equivalent to the second model, which requires representing behaviour that is not inferable from the vehicular scenario. In addition, paragraph 52 suggests that the latent component operates as a functionally separable second modelling layer alongside the rational first layer, thus teaching the second model. It is not necessary that the references actually suggest, expressly or in so many words, the changes or improvements that applicant has made. The test for combining references is what the references as a whole would have suggested to one of ordinary skill in the art. In re Sheckler, 168 USPQ 716 (CCPA 1971); In re McLaughlin, 170 USPQ 209 (CCPA 1971); In re Young, 159 USPQ 725 (CCPA 1968).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MANGLESH M PATEL, whose telephone number is (571) 272-5937. The examiner can normally be reached M-F from 10:30 am to 7:30 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Erin D. Bishop, can be reached at 571-270-3713.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center: status information for published applications is available to all users, while status information for unpublished applications is available to authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/Manglesh M Patel/
Primary Examiner, Art Unit 3665
3/3/2026

Prosecution Timeline

Aug 11, 2023
Application Filed
Jul 26, 2025
Non-Final Rejection — §101, §103
Dec 01, 2025
Response Filed
Mar 04, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599062
YARD MAINTENANCE VEHICLE WITH ADVANCED TILT MONITORING CAPABILITIES
2y 5m to grant · Granted Apr 14, 2026
Patent 12589752
VEHICLE SENSOR DATA PROCESSING METHOD AND SYSTEM
2y 5m to grant · Granted Mar 31, 2026
Patent 12589852
SHIP STEERING CONTROL DEVICE
2y 5m to grant · Granted Mar 31, 2026
Patent 12565123
VEHICLE SYSTEMS AND CABIN RADAR CALIBRATION METHODS
2y 5m to grant · Granted Mar 03, 2026
Patent 12555422
AUTONOMOUS DRIVING SYSTEM
2y 5m to grant · Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 92% (+18.3%)
Median Time to Grant: 3y 11m
PTA Risk: Moderate

Based on 691 resolved cases by this examiner. Grant probability derived from career allow rate.
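The headline figures can be spot-checked from the stated counts. A small sketch (the tool's exact formulas are not published, so this is only a plausibility check with illustrative variable names):

```python
# Hypothetical recomputation from the counts stated above:
# 513 granted out of 691 resolved cases.
granted, resolved = 513, 691
allow_rate_pct = 100 * granted / resolved
print(round(allow_rate_pct, 1))  # 74.2 -> shown as the 74% grant probability
# The 92% "with interview" figure is consistent with roughly a +18.3-point
# lift over that ~74% baseline; the tool's exact basis is not stated.
```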

Free tier: 3 strategy analyses per month