Prosecution Insights
Last updated: April 19, 2026
Application No. 18/462,970

METHOD FOR USE IN PLANNING OF TASKS TO BE PERFORMED BY A VEHICLE

Final Rejection: §101, §103
Filed: Sep 07, 2023
Examiner: GILLS, KURTIS
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Volvo Truck Corporation
OA Round: 2 (Final)
Grant Probability: 57% (Moderate)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 57% of resolved cases (307 granted / 536 resolved; +5.3% vs TC avg)
Interview Lift: strong, +29.4% among resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 44 applications currently pending
Career History: 580 total applications across all art units
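The career figures above are simple ratios; as a quick sanity check, the headline allow rate follows directly from the two counts shown on this page:

```python
# Career allow rate as displayed in the dashboard:
# 307 granted out of 536 resolved cases.
granted, resolved = 307, 536
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 57.3%, displayed as 57%
```

The 87% with-interview figure is the dashboard's own model output and is not derivable from these counts alone.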

Statute-Specific Performance

§101: 37.5% (-2.5% vs TC avg)
§103: 42.7% (+2.7% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 536 resolved cases.
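The "vs TC avg" deltas above are internally consistent with a single Tech Center baseline per statute; recovering the implied averages from the figures shown:

```python
# Examiner's statute-specific rates and their deltas vs the Tech Center
# average, exactly as listed above (in percent).
examiner = {"§101": 37.5, "§103": 42.7, "§102": 6.5, "§112": 6.7}
delta = {"§101": -2.5, "§103": 2.7, "§102": -33.5, "§112": -33.3}

# Implied Tech Center average for each statute: examiner rate minus delta.
tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(tc_avg)  # every implied baseline works out to 40.0
```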

Office Action

Grounds: §101, §103
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Notice to Applicant

In response to the communication received on 12/05/2025, the following is a Final Office Action for Application No. 18/462,970.

Status of Claims

Claims 1-12 and 14-15 are pending. Claim 13 is cancelled.

Priority

As required by M.P.E.P. 201.14(c), acknowledgement is made of applicant's claim for priority: Application No. 18/462,970, filed 09/07/2023, claims foreign priority to 22195161.9, filed 09/12/2022.

Response to Amendments

Applicant's amendments have been fully considered. Applicant's amendments to the claims overcome the 35 U.S.C. 112 rejection, and hence the 35 U.S.C. 112 rejection has been withdrawn. Applicant's amendments to the claims also overcome the previous Claim Interpretation, and hence the Claim Interpretation has been withdrawn.

Response to Arguments

Applicant's arguments with respect to the claims have been considered but are moot in light of the updated grounds of rejection, as necessitated by amendment. Arguments that are not moot are addressed as follows.

Regarding the prior art rejection, Applicant argues that the combination of Ebrahimi Afrouzi in view of Cohen is improper. The Examiner respectfully disagrees. In particular, Ebrahimi Afrouzi's invention relates to a processor that iteratively completes a full map of the environment based on new sensor data captured by sensors as the wheeled device performs work within the environment and new areas become visible to the sensors, and further executing by the wheeled device a movement path to a second position. See at least the abstract of Ebrahimi Afrouzi. Further, Cohen, in at least the Abstract, states a system for robotic device control and data acquisition, comprising a robotic device control system adapted to receive sensor-based data comprising physical object information, the sensor-based data being received from a plurality of sources.
Here, Cohen teaches in line with Ebrahimi Afrouzi, since both relate to robotics controls. Cohen enhances the system of Ebrahimi Afrouzi by obtaining site data from at least a dynamic building information model. Thus, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify the processor of Ebrahimi Afrouzi, which iteratively completes a full map of the environment based on new sensor data captured by sensors as the wheeled device performs work within the environment, to include the use of building information management (BIM) systems as taught by Cohen. Hence, the combination of Ebrahimi Afrouzi modified in view of Cohen is proper.

The Supreme Court in KSR International Co. v. Teleflex Inc. identified a number of rationales to support a conclusion of obviousness which are consistent with the proper "functional approach" to the determination of obviousness as laid down in Graham. Exemplary rationales that may support a conclusion of obviousness include:
(A) Combining prior art elements according to known methods to yield predictable results;
(B) Simple substitution of one known element for another to obtain predictable results;
(C) Use of known technique to improve similar devices (methods, or products) in the same way;
(D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results;
(E) "Obvious to try" – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success;
(F) Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art;
(G) Some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention.
Note that the list of rationales provided is not intended to be an all-inclusive list. Other rationales to support a conclusion of obviousness may be relied upon by Office personnel. See MPEP §2143 for examples of the basic requirements of a prima facie case of obviousness. Although a suggestion or motivation from the prior art is indeed one of the rationales that can be used to support a conclusion of obviousness (rationale G), it is not the sole rationale that can be applied, nor a requirement; as listed above, additional rationales may be used to support an examiner's conclusion of obviousness. For the reasons detailed above, the Examiner is not persuaded that the claims are patentably distinguishable over the Ebrahimi Afrouzi in view of Cohen disclosure. Rather, the Examiner maintains that the Ebrahimi Afrouzi in view of Cohen combination renders obvious the claimed invention. Accordingly, the previous prior art rejection is maintained.

As to the §101 rejection, Applicant argues that the claims are eligible under Prong One of Step 2A; however, the Examiner respectfully disagrees. Per Prong One of Step 2A, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of: Mathematical Concepts, Mental Processes, or Certain Methods of Organizing Human Activity. Particularly, the identified recitation falls within the Mental Processes grouping, including concepts performed in the human mind (including an observation, evaluation, judgment, opinion), and/or Certain Methods of Organizing Human Activity, including managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
Since the recitation of the claims falls into at least one of the above Groupings, there is a basis for providing further analysis with regard to Prong Two of Step 2A to determine whether the claims are directed to an abstract idea. Thus, the rejection is maintained.

Applicant argues that the claims are eligible under Prong Two of Step 2A; however, the Examiner respectfully disagrees. Per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The non-transitory computer readable medium, vehicle, computer, and/or electronic control unit is recited at a high level of generality, i.e., as a generic processor performing a generic computer function of processing/transmitting data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component. Further, using the non-transitory computer readable medium, vehicle, computer, and/or electronic control unit to, inter alia, perform the function of generating site update data for updating the dynamic building information model based on the user input received via the at least one user interface does not impose meaningful limits on practicing the abstract idea. In other words, the present claims use a generic processing device and memory medium to, inter alia, perform that function.

Applicant argues that the claims are eligible under Step 2B; however, the Examiner respectfully disagrees. Therein, the additional elements and combinations therewith are examined in the claims to determine whether the claims as a whole amount to significantly more than the judicial exception.
It is noted here that the additional elements are to be considered both individually and as an ordered combination. In this case, the claims each at most comprise additional elements of: non-transitory computer readable medium, vehicle, computer, and/or electronic control unit. Taken individually, the additional limitations are each generically recited and thus do not add significantly more to the respective limitations. Further, using the non-transitory computer readable medium, vehicle, computer, and/or electronic control unit to, inter alia, perform the function of generating site update data for updating the dynamic building information model based on the user input received via the at least one user interface falls within the limitations referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, including the non-limiting and non-exclusive examples of MPEP § 2106.05. Thus, the rejection is maintained.

In an effort to further expedite prosecution, see Appendix 1 to the October 2019 Update: Subject Matter Eligibility, Life Sciences & Data Processing Examples (October 2019), Example 46, Livestock Management. Per claim 1 of Example 46, the memory, display, and processor are recited so generically (no details whatsoever are provided other than that they are a memory, display, and processor) that they represent no more than mere instructions to apply the judicial exception on a computer. These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer. As an exemplary direction for similar claim limitations to be eligible, see claims 2-4 of Example 46.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 and 14-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims fall within the statutory classes of process, machine, or manufacture; hence, the claims satisfy the statutory category inquiry of Step 1. Step 2 is the two-part analysis from Alice Corp. (also called the Mayo test). The 2019 PEG sets forth a new procedure for Step 2A (called "revised Step 2A") under which a claim is not "directed to" a judicial exception unless the claim satisfies a two-prong inquiry. The two-prong inquiry is as follows. Prong One: evaluate whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon). If the claim recites an exception, then Prong Two: evaluate whether the claim recites additional elements that integrate the exception into a practical application of the exception.
The claim(s) recite(s) the following abstract idea indicated by non-boldface font and additional limitations indicated by boldface font:

A computer-implemented method for use in planning of tasks to be performed by one or more vehicles within an evolving site, the method comprising:
- obtaining a request relating to at least one task to be performed by the one or more vehicles, wherein the at least one task involves at least travelling from a starting location to a target location within the evolving site,
- obtaining geographic location data, wherein the geographic location data comprises at least the starting location and the target location of the one or more vehicles,
- obtaining site data from at least a dynamic building information model, BIM, of the evolving site,
- planning of the at least one task by using at least the request, the geographic location data, and the site data as input data to a prediction model, wherein the planning of the at least one task comprises at least planning of a travel path within the evolving site from the starting location to the target location,
- providing information relating to the planned at least one task to be presented via at least one user interface,
- receiving, via the at least one user interface and in connection with the performance and/or completion of the planned at least one task, user input indicating a real-world change, status, or outcome related to execution of the task,
- generating site update data for updating the dynamic building information model based on the user input received via the at least one user interface in connection with the performance and/or completion of the planned at least one task, wherein the site update data comprises information reflecting the actual status, outcome, or deviation of the performed task.
[or] A non-transitory computer readable medium carrying a computer program comprising program code for, when said program code is run on a computer, performing:
- obtaining a request relating to at least one task to be performed by the one or more vehicles, wherein the at least one task involves at least travelling from a starting location to a target location within the evolving site,
- obtaining geographic location data, wherein the geographic location data comprises at least the starting location and the target location of the one or more vehicles,
- obtaining site data from at least a dynamic building information model, BIM, of the evolving site,
- planning of the at least one task by using at least the request, the geographic location data, and the site data as input data to a prediction model, wherein the planning of the at least one task comprises at least planning of a travel path within the evolving site from the starting location to the target location,
- providing information relating to the planned at least one task to be presented via at least one user interface,
- receiving, via the at least one user interface and in connection with the performance and/or completion of the planned at least one task, user input indicating a real-world change, status, or outcome related to execution of the task,
- generating site update data for updating the dynamic building information model based on the user input received via the at least one user interface in connection with the performance and/or completion of the planned at least one task, wherein the site update data comprises information reflecting the actual status, outcome, or deviation of the performed task.
[or] A control system for planning of tasks to be performed by one or more vehicles within an evolving site, the control system comprising one or more electronic control units; at least one user interface; at least one memory; wherein the one or more electronic control units are operatively coupled to the memory and the at least one user interface and are configured to:
- obtain a request relating to at least one task to be performed by the one or more vehicles, wherein the at least one task involves at least travelling from a starting location to a target location within the evolving site;
- obtain geographic location data, wherein the geographic location data comprises at least the starting location and the target location of the one or more vehicles;
- obtain site data from at least a dynamic building information model (BIM) of the evolving site;
- plan the at least one task by using at least the request, the geographic location data, and the site data as input data to a prediction model, wherein the planning of the at least one task comprises at least planning of a travel path within the evolving site from the starting location to the target location;
- provide information relating to the planned at least one task to be presented via the at least one user interface;
- receive, via the at least one user interface and in connection with the performance and/or completion of the planned at least one task, user input indicating a real-world change, status, or outcome related to execution of the task;
- generate site update data for updating the dynamic building information model based on the user input received in connection with the performance and/or completion of the planned at least one task, wherein the site update data comprises information reflecting the actual status, outcome, or deviation of the performed task.
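For orientation only, the method steps recited above can be sketched as a short, purely illustrative procedure. Every name below (BIMModel, predict_path, plan_task, complete_task, and all data values) is hypothetical and appears nowhere in the application or the cited art; the "prediction model" is reduced to a trivial placeholder.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the claimed loop: obtain request, geographic
# location data, and BIM site data; plan a path; then feed user-reported
# outcomes back into the dynamic BIM as site update data.

@dataclass
class BIMModel:
    """Stand-in for the dynamic building information model of the evolving site."""
    site_data: dict = field(default_factory=dict)

    def update(self, site_update_data: dict) -> None:
        # Generated site update data is merged back into the dynamic BIM.
        self.site_data.update(site_update_data)

def predict_path(request: dict, geo: dict, site_data: dict) -> list:
    # Placeholder "prediction model": a direct start-to-target path.
    return [geo["start"], geo["target"]]

def plan_task(request: dict, geo: dict, bim: BIMModel) -> dict:
    # Planning uses the request, the geographic location data, and the
    # BIM site data as inputs to the prediction model.
    path = predict_path(request, geo, bim.site_data)
    return {"task": request["task"], "path": path}

def complete_task(plan: dict, user_input: dict, bim: BIMModel) -> dict:
    # User input received on completion becomes site update data that
    # reflects the actual status, outcome, or deviation of the task.
    site_update = {"last_task": plan["task"], "outcome": user_input["outcome"]}
    bim.update(site_update)
    return site_update

bim = BIMModel(site_data={"zone_a": "open"})
plan = plan_task({"task": "haul"}, {"start": "A", "target": "B"}, bim)
update = complete_task(plan, {"outcome": "completed with detour"}, bim)
print(plan["path"], bim.site_data["last_task"])
```

The sketch only mirrors the claim's data flow (inputs, planning, user feedback, BIM update); it takes no position on how the applicant's prediction model or BIM are actually implemented.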
The claim(s) recite(s) the following summarization of the abstract idea, which includes planning of tasks to be performed by one or more vehicles within an evolving site, executed by the additional element(s) of non-transitory computer readable medium, vehicle, computer, and electronic control unit. This falls into at least the Abstract Idea Grouping of Mental Processes, since the information can be analyzed by an abstract evaluation/judgment process. Thus, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of: Mathematical Concepts, Mental Processes, or Certain Methods of Organizing Human Activity, since the identified recitation falls within the Mental Processes grouping, including concepts performed in the human mind (including an observation, evaluation, judgment, opinion).

Per Prong One of Step 2A, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of: Mathematical Concepts, Mental Processes, or Certain Methods of Organizing Human Activity.

Per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The non-transitory computer readable medium, vehicle, computer, and/or electronic control unit is recited at a high level of generality, i.e., as a generic processor performing a generic computer function of processing/transmitting data. This generic non-transitory computer readable medium, vehicle, computer, and/or electronic control unit limitation is no more than mere instructions to apply the exception using a generic computer component.
Further, generating site update data by a non-transitory computer readable medium, vehicle, computer, and/or electronic control unit is a mere instruction to apply an exception using a generic computer component, which cannot integrate a judicial exception into a practical application. Accordingly, this/these additional element(s) does/do not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. Thus, since the claims are directed to the determined judicial exception in view of the two prongs of Step 2A, the 2019 PEG flowchart proceeds to Step 2B.

Per Step 2B, the additional elements and combinations therewith are examined in the claims to determine whether the claims as a whole amount to significantly more than the judicial exception. It is noted here that the additional elements are to be considered both individually and as an ordered combination. In this case, the claims each at most comprise additional elements of: non-transitory computer readable medium, vehicle, computer, and electronic control unit. Taken individually, the additional limitations are each generically recited and thus do not add significantly more to the respective limitations. Further, generating site update data by a non-transitory computer readable medium, vehicle, computer, and/or electronic control unit is a mere instruction to apply an exception using a generic computer component, which cannot provide an inventive concept in Step 2B (or, looking back to Step 2A, cannot integrate a judicial exception into a practical application).
For further support, the Applicant's specification supports the claims being directed to the use of a generic computer/memory type structure at Page 2, wherein "implemented through a processor or one or more processors, such as a processor 460 of a processing circuitry in the control system 130."

Taken as an ordered combination, the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the limitations are directed to limitations referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, including, as non-limiting and non-exclusive examples:
i. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f));
ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 134 S. Ct. at 2359-60, 110 USPQ2d at 1984 (see MPEP § 2106.05(d));
iii. Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or
v. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook.

The courts have recognized the following computer functions, inter alia, to be well-understood, routine, and conventional functions when they are claimed in a merely generic manner: performing repetitive calculations; receiving, processing, and storing data (e.g., the present claims); electronically scanning or extracting data; electronic recordkeeping; automating mental tasks (e.g., process/machine/manufacture for performing the present claims); and receiving or transmitting data (e.g., the present claims).

The dependent claims do not cure the above stated deficiencies; in particular, the dependent claims further narrow the abstract idea without reciting additional elements that integrate the exception into a practical application of the exception or providing significantly more than the abstract idea. Since there are no elements or ordered combination of elements that amount to significantly more than the judicial exception, the claims are not eligible subject matter under 35 USC §101. Thus, viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-12 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi Afrouzi et al. (US 20220187841 A1), hereinafter referred to as Ebrahimi Afrouzi, in view of Cohen et al. (US 20150273693 A1), hereinafter referred to as Cohen.

Ebrahimi Afrouzi teaches:

Claim 1. A computer-implemented method for use in planning of tasks to be performed by one or more vehicles within an evolving site, the method comprising: - obtaining a request relating to at least one task to be performed by the one or more vehicles, wherein the at least one task involves at least travelling from a starting location to a target location within the evolving site (¶1009 In embodiments, a first collaborative SLAM robot may observe the environment from a starting time and has a map from time zero to time n that provides partial visibility to the environment.
The first robot may not observe a world of a second robot that has a different geographic area and a different starting time that may not necessarily be simultaneous with the world of the for robot. Once the collaboration starts between the two robots, the processor of each robot deals with two sets of reference frames of space and time, their own and that of their collaborator. To track relations between these universes, a fifth dimension is required ¶1130 As the robot navigates within the environment, the processor creates a map based on confirmed spaces. The robot may follow the perimeters of the obstacles it encounters and other geometries to find and cover spaces that may have possibly been missed. When finished coverage, the robot may go back to the starting point. This process is illustrated in the flow chart of FIG. 253. ¶1250 In some embodiments, via the user interface or automatically without user input, a starting and an ending point for a path to be traversed by the robot may be indicated on the user interface of the application executing on the communication device ¶1300 the system of the robot may examine the application layer of the Open Systems Interconnection (OSI) model to search for signatures or anomalies. In some embodiments, the system of the robot may filter based on source address and destination address. In some embodiments, the system of the robot may use a simpler approach, such as packet filtering, state filtering, and such.), - obtaining geographic location data, wherein the geographic location data comprises at least the starting location and the target location of the one or more vehicles (¶1250 In some embodiments, via the user interface or automatically without user input, a starting and an ending point for a path to be traversed by the robot may be indicated on the user interface of the application executing on the communication device ¶1441 In some embodiments, the robot may encounter stains on the floor during a working session. 
In some embodiments, different stains (e.g., material composition of stain, size of stain, etc.) on the floor may require varying levels of cleaning intensity to remove the stain from, for example, a hardwood floor. In some embodiments, the robot may encounter debris on floors. In some embodiments, debris may be different for each encounter (e.g., type of debris, amount of debris, etc.). In some embodiments, these encounters may be divided into categories (e.g., by amount of debris accumulation encountered or by size of stain encountered or by type of debris or stain encountered). In some embodiments, each category may occur at different frequencies in different locations within the environment. For example, the robot may encounter a large amount of debris accumulation at a high frequency in a particular area of the environment. In some embodiments, the processor of the robot may record such frequencies for different areas of the environment during various work sessions and determine patterns related to stains and debris accumulation based on the different encounters.), - obtaining site data from at least a dynamic building information model, BIM, of the evolving site (¶1441 In some embodiments, the robot may encounter stains on the floor during a working session. In some embodiments, different stains (e.g., material composition of stain, size of stain, etc.) on the floor may require varying levels of cleaning intensity to remove the stain from, for example, a hardwood floor. In some embodiments, the robot may encounter debris on floors. In some embodiments, debris may be different for each encounter (e.g., type of debris, amount of debris, etc.). In some embodiments, these encounters may be divided into categories (e.g., by amount of debris accumulation encountered or by size of stain encountered or by type of debris or stain encountered). In some embodiments, each category may occur at different frequencies in different locations within the environment. 
For example, the robot may encounter a large amount of debris accumulation at a high frequency in a particular area of the environment. In some embodiments, the processor of the robot may record such frequencies for different areas of the environment during various work sessions and determine patterns related to stains and debris accumulation based on the different encounters. ¶1370 With combined SLAM and AR, location of objects observed by the camera may be saved within the map generated using SLAM techniques. This may be helpful in situations where areas may be off-limits, such as in construction sites. For example, a user may insert an off-limit area in a live video feed using an application displaying the live video feed. The off-limit area may then be saved to a map of the environment such that its position is known. In another example, a civil engineer may remotely insert notes associated with different areas of the environment as they are shown on the live video feed ¶1443 In some embodiments, the processor executes a decision-making loop comprising of observation and actuation in response to the observation. Examples of actuations include decreasing speed, decreasing the distance between parallel lines in the robot path to increase overlap coverage by the robot, repeating treatment or cleaning of an area or entire environment, scheduling a next cleaning earlier, creating a detour to provide extra attention to an area, or a combination of some or all of these actuations. FIG. 569 illustrates a decision-making loop 56900 and examples of actuations 56901. 
Given an area is observed include higher levels of debris accumulation, the processor may actuate the robot to drive slower or may decrease the distance between parallel lines of a boustrophedon path of the robot.), - planning of the at least one task by using at least the request, the geographic location data, and the site data as input data to a prediction model, wherein the planning of the at least one task comprises at least planning of a travel path within the evolving site from the starting location to the target location (¶1439 the system of the robot determines whether the obstacle may move a little if bumped by the robot (e.g., a chair with caster wheels) and if the movement of the obstacle results in one or more other obstacles moving, such as a fixed table 56200 next to a wheeled chair 56201 illustrated in FIG. 562. In embodiments, the system of the robot determines if a trap-like area exists, such as the area 56300 under chair 56301 illustrated in FIG. 563. The robot may avoid such trap-like areas in future runs. In some embodiments, the system of the robot may inflate obstacles that have previously or are predicted to cause the robot to become stuck. In embodiments, the system may increase or decrease an inflation rate of the obstacle in following runs ¶1441 In some embodiments, the robot may encounter stains on the floor during a working session. In some embodiments, different stains (e.g., material composition of stain, size of stain, etc.) on the floor may require varying levels of cleaning intensity to remove the stain from, for example, a hardwood floor. In some embodiments, the robot may encounter debris on floors. In some embodiments, debris may be different for each encounter (e.g., type of debris, amount of debris, etc.). ¶1443 In some embodiments, the processor executes a decision-making loop comprising of observation and actuation in response to the observation. 
Examples of actuations include decreasing speed, decreasing the distance between parallel lines in the robot path to increase overlap coverage by the robot, repeating treatment or cleaning of an area or entire environment, scheduling a next cleaning earlier, creating a detour to provide extra attention to an area, or a combination of some or all of these actuations. FIG. 569 illustrates a decision-making loop 56900 and examples of actuations 56901. Given an area is observed to include higher levels of debris accumulation, the processor may actuate the robot to drive slower or may decrease the distance between parallel lines of a boustrophedon path of the robot.), - providing information relating to the planned at least one task to be presented via at least one user interface (¶1444 In some embodiments, observations captured by sensors of the robot may be visualized by a user using an application of a communication device. For instance, a stain observed by sensors of the robot at a particular location may be displayed in a map of the environment at the particular location it was observed. In some embodiments, stains observed in previous work sessions are displayed in a lighter shade and stains observed during a current work session are displayed in a darker shade. This allows a user to visualize areas in which stains are often observed and currently observed. FIG. 570 illustrates an observation and visualization loop 57000 and an application 57001 of a communication device 57002 displaying a stain 57003 in a map 57004 observed at different times.), - receiving, via the at least one user interface and in connection with the performance and/or completion of the planned at least one task, user input indicating a real-world change, status, or outcome related to execution of the task (¶1133 the processor of the robot uses room detection to perform work in one room at a time. 
In some embodiments, the processor determines a logical segmentation of rooms based on any of sensor data and user input received by the application designating rooms in the map. In some embodiments, rooms segmented by the processor or the user using the application are different shapes and sizes and are not limited to being a rectangular shape); - generating site update data for updating the dynamic building information model based on the user input received via the at least one user interface in connection with the at least one task, wherein the site update data comprises information reflecting the actual status, outcome, or deviation of the performed task (¶1133 the processor of the robot uses room detection to perform work in one room at a time. In some embodiments, the processor determines a logical segmentation of rooms based on any of sensor data and user input received by the application designating rooms in the map. In some embodiments, rooms segmented by the processor or the user using the application are different shapes and sizes and are not limited to being a rectangular shape ¶1094 When the processor updates a single state x in the chain to x′ the processor obtains P^(t+1)(x′) = Σ_x P^(t)(x)T(x′|x), wherein P is the distribution over possible outcomes. The chain definition may allow the processor to compute derivatives and Jacobians and at a same time take advantage of sparsification. ¶1443 In some embodiments, the processor executes a decision-making loop comprising of observation and actuation in response to the observation. Examples of actuations include decreasing speed, decreasing the distance between parallel lines in the robot path to increase overlap coverage by the robot, repeating treatment or cleaning of an area or entire environment, scheduling a next cleaning earlier, creating a detour to provide extra attention to an area, or a combination of some or all of these actuations. FIG. 
569 illustrates a decision-making loop 56900 and examples of actuations 56901. Given an area is observed to include higher levels of debris accumulation, the processor may actuate the robot to drive slower or may decrease the distance between parallel lines of a boustrophedon path of the robot.). Although not explicitly taught by Ebrahimi Afrouzi, Cohen teaches in the analogous art of enhanced system for control of robotic devices: obtaining site data from at least a dynamic building information model, BIM (¶0059 FIG. 8 is an architectural diagram of an exemplary system 800, according to an embodiment of the invention. Architectural layer 801 is the System Desktop, containing modules 802a-n, including Desktop Mission Planning, Data Products, Analytics & Tools, Data Storage & Management, Sensor & Rover Optimization, and I/O Data Tools. Within exemplary Internet cloud 810 could reside any and multiple types of computer-aided design (CAD) and/or building information management (BIM) systems. ¶0060 FIG. 9 shows an exemplary process 900 for preparation for a mission, according to an embodiment of the invention. In this process typically parameters for each mission are defined, such as starting position, ending position, average distance from structure, average speed, pass separation, distance between passes for longer-range sensors, type of vehicle, maximum altitude, movement range and time, acceleration vectors, and braking vectors. In step 901 external data is loaded from, for example, a CAD or BIM system. In some cases, typically when no useable BIM or CAD data exists, photometric data, taken from snap shots of an object and used to create a rough two-dimensional or three-dimensional map of the object may be used as the primary data for the mission ¶0070 FIG. 15 shows an overview of the association of data types 1500, including BIM/CAD models, sensor data, and LiDAR points, for rendering according to an embodiment of the invention). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the enhanced system for control of robotic devices of Cohen with the system for lightweight simultaneous localization and mapping performed on a real-time computing and battery operated wheeled device of Ebrahimi Afrouzi for the following reasons: (1) a finding that there was some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine reference teachings, e.g. Ebrahimi Afrouzi ¶0004 teaches that it is important for robotic devices to use real time platforms as their functionalities are not equivalent to PCs; (2) a finding that there was reasonable expectation of success since the only difference between the claimed invention and the prior art is the lack of actual combination of the elements in a single prior art reference, e.g. Ebrahimi Afrouzi Abstract teaches that the processor iteratively completes a full map of the environment based on new sensor data captured by sensors as the wheeled device performs work within the environment and new areas become visible to the sensors, and Cohen ¶0059 teaches use of building information management (BIM) systems; and (3) whatever additional findings based on the Graham factual inquiries may be necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness, e.g. Ebrahimi Afrouzi at least the above cited paragraphs, and Cohen at least the inclusively cited paragraphs. Therefore, it would be obvious to one skilled in the art at the time of the invention to combine the enhanced system for control of robotic devices of Cohen with the system for lightweight simultaneous localization and mapping performed on a real-time computing and battery operated wheeled device of Ebrahimi Afrouzi. 
The rationale to support a conclusion that the claim would have been obvious is that "a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and whether there would have been a reasonable expectation of success in doing so." DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006). See MPEP 2143(G). Ebrahimi Afrouzi teaches: Claim 2. The method according to claim 1, wherein the at least one task comprises charging of an electric energy storage system, ESS, of the one or more vehicles, and wherein the method further comprises: - obtaining ESS data comprising information relating to a state-of-charge of the ESS, - obtaining charging station data comprising information relating to a location and a status of at least one charging station within the evolving site, wherein the input data used for planning of the at least one task comprises the obtained ESS data and charging station data (¶0894 In some embodiments, the processor may add different types of information to the map of the environment. For example, FIG. 71 illustrates four different types of information that may be added to the map, including an identified object such as a sock 7100, an identified obstacle such as a glass wall 7101, an identified cliff such as a staircase 7102, and a charging station of the robot 7103. The processor may identify an object by using a camera to capture an image of the object and matching the captured image of the object against a library of different types of objects. The processor may detect an obstacle, such as the glass wall 7101, using data from a TOF sensor or bumper. The processor may detect a cliff, such as staircase 7102, by using data from a camera, TOF, or other sensor positioned underneath the robot in a downwards facing orientation. 
The processor may identify the charging station 7103 by detecting IR signals emitted from the charging station 7103). Ebrahimi Afrouzi teaches: Claim 3. The method according to claim 1, wherein the at least one task comprises loading and/or unloading of goods (¶1345 a robot may autonomously deliver items purchased by a user, such as food, groceries, clothing, electronics, sports equipment, etc., to the curbside of a store, a particular parking spot, a collection point, or a location specified by the user. In some cases, the user may use an application of a communication device to order and pay for an item and request pick-up (e.g., curbside) or delivery of the item (e.g., to a home of the user). In some cases, the user may choose the time and day of pick-up or delivery using the application. In the case of groceries, the robot may be a smart shopping cart and the shopping cart may autonomously navigate to a vehicle of the user for loading into their vehicle. Or, an autonomous robot may connect to a shopping cart through a connector, such that the robot may drive the shopping cart to a vehicle of a customer or a storage location). Ebrahimi Afrouzi teaches: Claim 4. The method according to claim 3, wherein the method further comprises: - obtaining vehicle data relating to at least a vehicle weight and/or a vehicle payload capacity of the one or more vehicles, wherein the vehicle data is used as input data for the planning of the loading and/or unloading of goods (¶1345 a robot may autonomously deliver items purchased by a user, such as food, groceries, clothing, electronics, sports equipment, etc., to the curbside of a store, a particular parking spot, a collection point, or a location specified by the user. In some cases, the user may use an application of a communication device to order and pay for an item and request pick-up (e.g., curbside) or delivery of the item (e.g., to a home of the user). 
In some cases, the user may choose the time and day of pick-up or delivery using the application. In the case of groceries, the robot may be a smart shopping cart and the shopping cart may autonomously navigate to a vehicle of the user for loading into their vehicle. Or, an autonomous robot may connect to a shopping cart through a connector, such that the robot may drive the shopping cart to a vehicle of a customer or a storage location). Ebrahimi Afrouzi teaches: Claim 5. The method according to claim 3, wherein the planning of the loading and/or unloading of goods comprises determining a loading and/or unloading location as the target location based on the site data (¶1345 a robot may autonomously deliver items purchased by a user, such as food, groceries, clothing, electronics, sports equipment, etc., to the curbside of a store, a particular parking spot, a collection point, or a location specified by the user. In some cases, the user may use an application of a communication device to order and pay for an item and request pick-up (e.g., curbside) or delivery of the item (e.g., to a home of the user). In some cases, the user may choose the time and day of pick-up or delivery using the application. In the case of groceries, the robot may be a smart shopping cart and the shopping cart may autonomously navigate to a vehicle of the user for loading into their vehicle. Or, an autonomous robot may connect to a shopping cart through a connector, such that the robot may drive the shopping cart to a vehicle of a customer or a storage location). Ebrahimi Afrouzi teaches: Claim 6. 
The method according to claim 1, wherein the site data obtained from the BIM comprises data relating to one or more of the following: - boundaries of the evolving site, - coordinates and dimensions of features/structures within the evolving site, - material stocks within the evolving site, - materials used for construction within the evolving site, - available storage space within the evolving site, - status of equipment located within the evolving site, - manpower within the evolving site, - a time log of the evolving site. (¶1441 In some embodiments, the robot may encounter stains on the floor during a working session. In some embodiments, different stains (e.g., material composition of stain, size of stain, etc.) on the floor may require varying levels of cleaning intensity to remove the stain from, for example, a hardwood floor. In some embodiments, the robot may encounter debris on floors. In some embodiments, debris may be different for each encounter (e.g., type of debris, amount of debris, etc.). In some embodiments, these encounters may be divided into categories (e.g., by amount of debris accumulation encountered or by size of stain encountered or by type of debris or stain encountered). In some embodiments, each category may occur at different frequencies in different locations within the environment. For example, the robot may encounter a large amount of debris accumulation at a high frequency in a particular area of the environment. In some embodiments, the processor of the robot may record such frequencies for different areas of the environment during various work sessions and determine patterns related to stains and debris accumulation based on the different encounters.) Although not explicitly taught by Ebrahimi Afrouzi, Cohen teaches in the analogous art of enhanced system for control of robotic devices: site data obtained from the BIM (¶0059 FIG. 8 is an architectural diagram of an exemplary system 800, according to an embodiment of the invention. 
Architectural layer 801 is the System Desktop, containing modules 802a-n, including Desktop Mission Planning, Data Products, Analytics & Tools, Data Storage & Management, Sensor & Rover Optimization, and I/O Data Tools. Within exemplary Internet cloud 810 could reside any and multiple types of computer-aided design (CAD) and/or building information management (BIM) systems. ¶0060 FIG. 9 shows an exemplary process 900 for preparation for a mission, according to an embodiment of the invention. In this process typically parameters for each mission are defined, such as starting position, ending position, average distance from structure, average speed, pass separation, distance between passes for longer-range sensors, type of vehicle, maximum altitude, movement range and time, acceleration vectors, and braking vectors. In step 901 external data is loaded from, for example, a CAD or BIM system. In some cases, typically when no useable BIM or CAD data exists, photometric data, taken from snap shots of an object and used to create a rough two-dimensional or three-dimensional map of the object may be used as the primary data for the mission ¶0070 FIG. 15 shows an overview of the association of data types 1500, including BIM/CAD models, sensor data, and LiDAR points, for rendering according to an embodiment of the invention). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the enhanced system for control of robotic devices of Cohen with the system for lightweight simultaneous localization and mapping performed on a real-time computing and battery operated wheeled device of Ebrahimi Afrouzi for the following reasons: (1) a finding that there was some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine reference teachings, e.g. 
Ebrahimi Afrouzi ¶0004 teaches that it is important for robotic devices to use real time platforms as their functionalities are not equivalent to PCs; (2) a finding that there was reasonable expectation of success since the only difference between the claimed invention and the prior art is the lack of actual combination of the elements in a single prior art reference, e.g. Ebrahimi Afrouzi Abstract teaches that the processor iteratively completes a full map of the environment based on new sensor data captured by sensors as the wheeled device performs work within the environment and new areas become visible to the sensors, and Cohen ¶0059 teaches use of building information management (BIM) systems; and (3) whatever additional findings based on the Graham factual inquiries may be necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness, e.g. Ebrahimi Afrouzi at least the above cited paragraphs, and Cohen at least the inclusively cited paragraphs. Therefore, it would be obvious to one skilled in the art at the time of the invention to combine the enhanced system for control of robotic devices of Cohen with the system for lightweight simultaneous localization and mapping performed on a real-time computing and battery operated wheeled device of Ebrahimi Afrouzi. The rationale to support a conclusion that the claim would have been obvious is that "a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and whether there would have been a reasonable expectation of success in doing so." DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006). See MPEP 2143(G). Ebrahimi Afrouzi teaches: Claim 7. 
The method according to claim 1, further comprising: - based on the obtained request, selecting a subset of the site data to be used as input data for the planning of the at least one task, preferably wherein selecting the subset of the site data is performed based on a previously established correlation between the at least one task and the selected subset of the site data (¶1443 In some embodiments, the processor executes a decision-making loop comprising of observation and actuation in response to the observation. Examples of actuations include decreasing speed, decreasing the distance between parallel lines in the robot path to increase overlap coverage by the robot, repeating treatment or cleaning of an area or entire environment, scheduling a next cleaning earlier, creating a detour to provide extra attention to an area, or a combination of some or all of these actuations. FIG. 569 illustrates a decision-making loop 56900 and examples of actuations 56901. Given an area is observed to include higher levels of debris accumulation, the processor may actuate the robot to drive slower or may decrease the distance between parallel lines of a boustrophedon path of the robot. ¶1444 In some embodiments, observations captured by sensors of the robot may be visualized by a user using an application of a communication device. For instance, a stain observed by sensors of the robot at a particular location may be displayed in a map of the environment at the particular location it was observed. In some embodiments, stains observed in previous work sessions are displayed in a lighter shade and stain observed during a current work session are displayed in a darker shade. This allows a user to visualize areas in which stains are often observed and currently observed. FIG. 570 illustrates an observation and visualization loop 57000 and an application 57001 of a communication device 57002 displaying a stain 57003 in a map 57004 observed at different times.). 
Ebrahimi Afrouzi teaches: Claim 8. The method according to claim 1, wherein obtaining the request relating to the at least one task comprises receiving a user request via the at least one user interface (¶1443 In some embodiments, the processor executes a decision-making loop comprising of observation and actuation in response to the observation. Examples of actuations include decreasing speed, decreasing the distance between parallel lines in the robot path to increase overlap coverage by the robot, repeating treatment or cleaning of an area or entire environment, scheduling a next cleaning earlier, creating a detour to provide extra attention to an area, or a combination of some or all of these actuations. FIG. 569 illustrates a decision-making loop 56900 and examples of actuations 56901. Given an area is observed to include higher levels of debris accumulation, the processor may actuate the robot to drive slower or may decrease the distance between parallel lines of a boustrophedon path of the robot. ¶1444 In some embodiments, observations captured by sensors of the robot may be visualized by a user using an application of a communication device. For instance, a stain observed by sensors of the robot at a particular location may be displayed in a map of the environment at the particular location it was observed. In some embodiments, stains observed in previous work sessions are displayed in a lighter shade and stains observed during a current work session are displayed in a darker shade. This allows a user to visualize areas in which stains are often observed and currently observed. FIG. 570 illustrates an observation and visualization loop 57000 and an application 57001 of a communication device 57002 displaying a stain 57003 in a map 57004 observed at different times.). Ebrahimi Afrouzi teaches: Claim 9. 
The method according to claim 1, wherein the planning of the at least one task comprises predicting an evolvement of the evolving site based on at least historical site data, and optionally on user input received via the at least one user interface (¶1008 In embodiments, the robot, a headset of a player, and a standalone observing camera, may each have a local frame of reference in which they perceive the environment. In such a case, six dimensions may account for space and one dimension may account for time for each of the devices. Internally, each device may have a set of coordinates, such as epipolar, to resolve intrinsic geometric relations and perceptions of their sensors. When the perceptions captured from these frames of reference of the three devices are integrated, the loop is closed and all errors accounted for, a global map emerges. The global map may theoretically be a spatial model (e.g., including time, motion, events, etc.) of the real world. In embodiments, the six dimensions are ignored and three dimensions of space are assigned to each of the devices in addition to time to show how the data evolves over a sequence of positions of the device. FIGS. 159A and 159B illustrate two tennis courts 15900 in two different time zones with proxy robots 15900 and 15901 facilitating a remote tennis game against human players 15902 and 15903. Each robot may move in three dimensions of space (x, y, z) and has one dimension of time. Once the robots collaborate to facilitate the remote tennis game, each robot must process or understand two frames of reference of space and time.). Ebrahimi Afrouzi teaches: Claim 10. The method according to claim 1, wherein generating site update data for updating the dynamic building information model is further performed based on geographic location data collected by the one or more vehicles during performance of the at least one task (¶0932 In some embodiments, the processor trains a camera based system. 
For example, a robot may include a camera bundled with one or more of an OTS, encoder, IMU, gyro, one point narrow range TOF sensor, etc., and a three- or two-dimension LIDAR for measuring distances as the robot moves. FIG. 111A illustrates a robot 11100 including a camera 11101, LIDAR 11102, and one or more of an OTS, encoder, IMU, gyro, and one point narrow range TOF sensor. 11103 is a database of LIDAR readings which represent ground truth. 11104 is a database of sensor readings taken by the one or more of OTS, encoder, IMU, gyro, and one point narrow range TOF sensor. The processor of the robot 11100 may associate the readings of database 11103 and 11104 to obtain associated data 11105 and derive a calibration. In some embodiments, the processor compares the resulting calibration with the bundled camera data and sensor data (taken by the one or more of OTS, encoder, IMU, gyro, and one point narrow range TOF sensor) 11106 after training and during runtime until convergence and patterns emerge. Using two or more cameras or one camera and a point measurement may improve results.). Ebrahimi Afrouzi teaches: Claim 11. The method according to claim 1, further comprising: obtaining information relating to a user category of a user, wherein the information relating to the planned at least one task is selectively provided for presentation via the at least one user interface, depending on the user category (¶1069 In some embodiments, the processor uses Bayesian methods in classifying objects. In some embodiments, the processor defines a state space including all possible categories an object could possibly belong to, each state of the state space corresponding with a category. In reality, an object may be classified into many categories, however, in some embodiments, only certain classes may be defined. In some embodiments, a class may be expanded to include an “other” state. 
In some embodiments, the processor may assign an identified feature to one of the defined states or an “other” state of a state.). Ebrahimi Afrouzi teaches: Claim 12. The method according to claim 1, wherein, when the user is a vehicle operator, providing information relating to the planned at least one task comprises providing information that enables presentation of at least the planned travel path to the vehicle operator (¶1027 FIGS. 171B and 171C illustrate the object 17100 installed at the end of aisles 17101 and a path 17102 of the robot during a first few runs. The robot is pushed by a human operator along a path 17102 during which sensors of the robot observe the environment, including landmark objects 17100, such that they may learn the path 17102 and execute it autonomously in later work sessions. In future work sessions, the processor may understand a location of the robot and determine a next move of the robot upon sensing the presence of the object 17100. In FIGS. 172A and 172B, the human operator may alternatively use an application of a communication device 17103 to draw the path of the robot 17102 in a displayed map 17104.). As per claims 14, 15, the computer program, non-transitory computer readable medium, control system tracks the method of claims 1, 1, respectively, resulting in substantially similar limitations. The same cited prior art and rationale of claims 1, 1 are applied to claims 14, 15, respectively. Ebrahimi Afrouzi discloses that the embodiment may be found as a computer program, non-transitory computer readable medium, control system (Figs. 1-4 and ¶1456). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). 
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to KURTIS GILLS whose telephone number is (571)270-3315. The examiner can normally be reached on M-F 8-5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor can be reached on 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KURTIS GILLS/Primary Examiner, Art Unit 3624

Prosecution Timeline

Sep 07, 2023
Application Filed
Sep 04, 2025
Non-Final Rejection — §101, §103
Dec 05, 2025
Response Filed
Feb 10, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602664
INTELLIGENT MEETING TIMESLOT ANALYSIS AND RECOMMENDATION
2y 5m to grant · Granted Apr 14, 2026
Patent 12572864
AVOIDING PROHIBITED SEQUENCES OF MATERIALS PROCESSING AT A CRUSHER USING PREDICTIVE ANALYTICS
2y 5m to grant · Granted Mar 10, 2026
Patent 12572872
Mine Management System
2y 5m to grant · Granted Mar 10, 2026
Patent 12567013
METHOD AND SYSTEM FOR SOLVING SUBSET SUM MATCHING PROBLEM USING DYNAMIC PROGRAMMING APPROACH
2y 5m to grant · Granted Mar 03, 2026
Patent 12561703
SYSTEM AND METHOD FOR PERSONA GENERATION
2y 5m to grant · Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
57%
Grant Probability
87%
With Interview (+29.4%)
3y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 536 resolved cases by this examiner. Grant probability derived from career allow rate.
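
The headline projections above can be reproduced from the career counts shown on this page. The sketch below assumes the tool simply uses granted/resolved cases as the base grant probability and adds the interview lift on top; that formula is an inference from the displayed numbers, not the tool's documented methodology.

```python
# Sketch: reproducing the dashboard's headline figures from the career
# counts on this page (307 granted / 536 resolved, +29.4% interview lift).
# The additive formula is an assumption, not the tool's documented method.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, interview_lift: float) -> float:
    """Projected grant probability after applying the interview lift."""
    return base_rate + interview_lift

base = allow_rate(307, 536)            # ~57.3%, displayed as 57%
projected = with_interview(base, 29.4)  # ~86.7%, displayed as 87%

print(f"Career allow rate: {base:.0f}%")       # prints 57%
print(f"With interview:    {projected:.0f}%")  # prints 87%
```

Note that the "+29.4% interview lift" is itself a historical average over this examiner's interviewed cases, so the 87% figure is a projection, not a guarantee for any individual application.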
