DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/13/2026 has been entered.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 1/13/2026 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
Applicant’s amendment filed on 1/13/2026 has been entered.
Claims 1 – 20 are pending and have been examined.
Claims 1, 13, and 20 are amended.
Response to Arguments
The claim rejections under 35 U.S.C. 112 have been withdrawn in light of the applicant’s remarks filed on 1/13/2026.
Applicant’s arguments, see pages 11 – 15, with respect to the rejection(s) of claim(s) 1 – 20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Yang et al., CN113534662.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1 – 7 and 11 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Fang, et al., (hereinafter Fang) CN112235419, in view of Yang et al., (hereinafter Yang) CN113534662, further in view of Colledanchise et al., (hereinafter Colledanchise), “Behavior Trees in Robotics and AI”.
Regarding Claim 1, Fang discloses: A system comprising:
one or more processing units (translation page 2, “cloud computing, in particular to a robot cloud platform execution engine”; the cloud computing platform has processing units) to:
identify, from a template library and based at least on a mission request, a mission template that defines a baseline task sequence (translation 0013 – 0015, “interaction layer receives a request (mission request), the session management module starts a session … calling the algorithm knowledge base (template library) service provided by the cloud platform, the algorithm knowledge base generates a corresponding task plan (mission template) according to the context in the session”) that comprises a plurality of local tasks performed by a plurality of task agents, the plurality of task agents being determined based at least on the mission template (translation 0013, “according to the requested scenario, the scenario configuration module loads pre-made (pre-defined) configuration parameters, including the type of robot required, the number of robots … a behavior tree is generated according to the scenario configuration, and finally the control of the robot is realized by cyclically executing the behavior tree”; i.e., for tasks that require multiple robots, the system generates task sequences (local tasks) for each participating robot (task agent));
generate a mission behavior logic framework based at least on the plurality of local tasks of the baseline task sequence, the mission behavior logic framework including at least a first task sequence associated with a first task agent of the plurality of task agents and a second task sequence associated with a second task agent of the plurality of task agents (refer to the mapping above; the framework generates an associated sequence/tree of tasks for each participating robot);
correlate the plurality of local tasks of the baseline task sequence with a plurality of pre-defined modular behavior models from a task library to identify one or more baseline behavior models to attach to individual local tasks of the mission behavior logic framework (refer to the mapping above, & translation 0016, “Each child node represents a sub-function module (local tasks), and each leaf node represents a condition or a behavior … subtree structure can be modularized and reused”; i.e., the generated overall behavior tree (baseline task sequence), with child nodes (local tasks) that represent sub-functions, is a behavior model logic framework. The corresponding reusable subtrees are pre-defined modular behavior models from the library and are attached to these child nodes);
operate one or more mobile autonomous machine agents of the plurality of task agents by distributing one or more segments of the behavior model logic framework (fig. 1, translation 0010, “the instruction distribution module is used to process the communication between the server application and the robot”; i.e., operates the robots by distributing the task instructions to each robot).
Fang does not explicitly teach:
the mission behavior logic framework including one or more synchronization tasks to coordinate performance of one or more local tasks of the first task sequence with one or more local tasks of the second task sequence;
augment the plurality of pre-defined modular behavior models to generate a plurality of custom behavior models based at least on one or more task customization parameters;
Yang, in the same field of endeavor, explicitly teaches:
the mission behavior logic framework including one or more synchronization tasks to coordinate performance of one or more local tasks of the first task sequence with one or more local tasks of the second task sequence (Yang, Fig. 6 & translation pages 12 – 13, “the invention provides an event-driven behavior tree implementation”, “a synchronization mechanism between a behavior tree engine and action nodes … can effectively address … inter-platform (between multiple task agents) communication for unmanned systems.”, “Within the framework of the invention, a new event is generated when an action node updates its status from running to success/failure or writes a key on the blackboard”; i.e., the action node of a behavior tree performs a synchronization task that raises an event to communicate with another platform/agent);
Fang and Yang both teach using behavior trees to define/coordinate collaborative tasks among multiple robot agents and are analogous. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to further include the detailed implementation of task synchronization between robot agents taught by Yang in the system of Fang to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification to increase performance and the capability to synchronize complex multi-tree/multi-platform applications (Yang, translation page 6).
The combination of Fang and Yang does not explicitly teach:
augment the plurality of pre-defined modular behavior models to generate a plurality of custom behavior models based at least on one or more task customization parameters;
Colledanchise, in the same field of endeavor, explicitly teaches:
augment the plurality of pre-defined modular behavior models to generate a plurality of custom behavior models based at least on one or more task customization parameters (Colledanchise, sec. 7.1.2, “the action templates, which contains the descriptive model of an action … An action template (pre-defined modular behavior models) is mapped online into an action primitive (custom behavior models), which contains the operational model of an action and is executable”; Fig. 7.4, “Action primitive created from the Template in (a), where the object is given as i = cube”; Pick(i) is the correlated action template. Pick(cube), a custom behavior model, is created from Pick(i), a pre-defined modular behavior model, based on a parameter (cube) of the task sequence created by the mission request);
Fang (in view of Yang) and Colledanchise both teach robotic programming using behavior trees and are analogous. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to further include the action templates of Colledanchise’s teaching in the system of Fang (in view of Yang) to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification in order to modularize actions, which improves reactivity, safety, and fault tolerance (Colledanchise, sec. 7.1).
Regarding Claim 2, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 1. The combination further teaches: execute a proxy representation of a first custom behavior model of the plurality of custom behavior models to represent an estimated state of a first task agent of the plurality of task agents that does not receive the first custom behavior model (Colledanchise, section 7.1.8, “By fault tolerant we mean the capability of operating properly in the event of Failures. The robots we consider usually have more than a single way of carrying out a high level task. With multiple options, we can define a priority measure for these options. In PA-BT, the actions that achieve a given goal are collected in a Fallback composition (Algorithm 6 Line 9). The BT execution is such that if, at execution time, one option fails (e.g. a motor breaks) the next option is chosen without replanning or extending the BT.”; Alg. 6 & Remark 7.1, “Note that the conditions of an action template can contain a disjunction of propositions. This can be encoded by a Fallback composition of the corresponding Condition nodes”; i.e., the condition node performs/executes an action to obtain the status of the corresponding agent to check for faults. For a sequential behavior tree node, if the robot fails to perform an action, the condition node keeps checking the status of the robot and the system does not send the following action (first custom behavior model) to the robot).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to further include the fault tolerance of Colledanchise’s teaching in the system of Fang (in view of Yang) to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification in order to handle the uncertainty of the system (Colledanchise sec. 7.1.8).
Regarding Claim 3, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 1. The combination further teaches: operate the first task agent based on the first custom behavior model (refer to the mapping in Claim 2: once the system detects that a fault exists, it performs a fallback and operates the robot using the next-priority option in order to move the robot through the following action (first custom behavior model)) using commands communicated to an application programming interface (API) (Fang, 0001, “the server application generates robot instructions, which are sent to the robot through the middleware (API), and the robot parses the generated robot instructions to generate behaviors, and feeds back the robot instruction execution results to the server application through the middleware.”).
Regarding Claim 4, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 1. The combination further teaches: generate one or more updated custom behavior models based on information received from the one or more mobile autonomous machine agents; and re-assemble the behavior model logic framework based at least on the one or more updated custom behavior models (Colledanchise, Fig. 7.1 – 7.2 & 7.5, sec. 7.1.2, “Example 7.2. Here we show a more complex example highlighting two main properties of PA-BT: the livelock freedom and the continual deliberative plan and act cycle. This example is an extension of Example 7.1 where, due to the dynamic environment, the robot needs to replan. Consider the execution of the final BT in Figure 7.2e of Example 7.1, where the robot is carrying the desired object to the goal location. Suddenly, as in Figure 7.3 (b), an object s obstructs the (only possible) path. Then the condition τ ⊂ CollFree returns Failure and Algorithm 5 expands the tree accordingly (Line 10) as in Figure 7.5a”; Colledanchise teaches dynamically replanning the required actions based on information generated by the robot agent (at condition nodes). The algorithm expands/re-assembles the behavior tree (behavior model logic framework)).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to further include the dynamic replanning of Colledanchise’s teaching in the system of Claim 1 to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification in order to respond to the dynamic environment (Colledanchise sec. 7.1.2).
Regarding Claim 5, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 1. The combination further teaches: determine the one or more task customization parameters based at least on the mission request (refer to the mapping in Claim 1, & Colledanchise sec. 7.1.2 & Fig. 7.4; the cube (parameter) is requested in the mission to be picked up).
Regarding Claim 6, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 1. The combination further teaches: determine the one or more task customization parameters based on at least one navigation route for the one or more mobile autonomous machine agents derived based at least on the mission request (Colledanchise, sec. 7.1, “Example 7.1. The robot in Figure 7.3 is given the task to move the green cube into the rectangle marked GOAL”; fig. 7.3, “the nominal plan is to MoveTo(c) -> Pick(c) -> MoveTo(g) -> Drop() … the extended plan is to MoveTo(s) -> Push(s) -> MoveTo(c) -> Pick(c) -> MoveTo(g) -> Drop()”; Colledanchise teaches that the mission request is at a high level and the navigation route is derived based on the mission and the conditions. The distances/locations of the moving actions are the parameters of the actions and are determined based on the route).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to further apply the behavior tree technique to robot applications involving navigation in a defined space, as disclosed by Colledanchise, to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to apply this technique because “BTs are a very efficient way of creating complex systems that are both modular and reactive.” (Colledanchise, chapter 1)
Regarding Claim 7, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 1. The combination further teaches: determine the one or more task customization parameters based at least on a facility map (refer to the mapping in Claim 6 & sec. 7.1, fig. 7.3; Colledanchise teaches applying the behavior tree method to robot applications within a known space/environment; sec. 7.2.1, “The working memory is a container for any information the agent must access during execution (e.g. unit’s position on the map)”; the mission of moving the green cube into the rectangle marked GOAL can be represented by positions on a map of the space).
The reason for combination is the same as for Claim 6.
Regarding Claim 11, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 1. The combination further teaches: the plurality of task agents comprise at least one of: an autonomous robot, an autonomous mobile machine, an ego vehicle, an ego machine, an automated door, an elevator, a transport platform, a facility management system, a mechanical tool, an electrical tool, a sensor device, a container, or a composition of matter (Fang, 0002, “technical field of intelligent robots”).
Regarding Claim 12, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 1. The combination further teaches: the system is comprised in at least one of:
a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for three-dimensional assets;
a system for generating or presenting at least one of virtual reality content, augmented reality content, or mixed reality content;
a system for performing deep learning operations;
a system for performing real-time streaming;
a system implemented using an edge device;
a system implemented using a robot; a system for performing conversational AI operations; a system implementing one or more language models; a system for generating synthetic data;
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center; or
a system implemented at least partially using cloud computing resources.
(Fang, 0007, “a robot cloud platform execution engine (control system for autonomous robot/machine) based on behavior tree”)
Regarding Claims 13 and 16 – 19, these are the processor claims corresponding to Claims 1, 4 – 6, and 12. These claims are rejected for the same reasons.
Regarding Claim 14, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 13. The combination further teaches: select the plurality of task agents based at least on the mission template (refer to the mapping in Claim 1 & Fang translation 0013; the mission template determines the number of robots required).
Regarding Claim 15, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 13. The combination further teaches: the mission template defines the baseline task sequence using a data format that comprises one or more of a JavaScript Object Notation (JSON) or a natural language description (Fang, translation page 3, “behavior tree schema based on json … loading and storing the global parameters during operation through json file configuration”).
Regarding Claim 20, Claim 20 is the method claim corresponding to Claim 1. The recited limitation of “mission framework” corresponds to the “behavior model logic framework” in Claim 1. Claim 20 is rejected for the same reasons.
Claim(s) 8 – 10 are rejected under 35 U.S.C. 103 as being unpatentable over Fang et al., (hereinafter Fang) CN112235419, Yang et al., (hereinafter Yang) CN113534662, and Colledanchise et al., (hereinafter Colledanchise), “Behavior Trees in Robotics and AI” as applied to claim 1 above, and further in view of Pack et al., (hereinafter Pack), US20120010772.
Regarding Claim 8, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 1. The combination does not explicitly teach: update the task library of the plurality of pre-defined modular behavior models based at least in part on one or more of the plurality of custom behavior models.
Pack, in the same field of endeavor, explicitly teaches:
update the task library of the plurality of pre-defined modular behavior models based at least in part on one or more of the plurality of custom behavior models (Pack, 0060, “Behaviors are composable (custom), reusable components that implement the core behavior interface and protocol.”; 0201, “The behavior library is a built-in collection of behaviors and conditions (in addition to the core compound behaviors) that form a growing set of reusable parts for users of the advanced behavior engine in applications.”; i.e., users can build custom behavior models and reuse them by adding them to the behavior library).
Fang (in view of Yang and Colledanchise) and Pack both teach robotic programming using behavior trees and are analogous. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to further include the updating of the behavior template/library taught by Pack in the system of Fang (in view of Yang and Colledanchise) to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification because “the associated libraries of reusable behaviors and conditions, the simpler and more powerful the new behavior architecture becomes” (Pack, 0186).
Regarding Claim 9, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 8. The combination does not explicitly teach: update one or more mission templates of the template library based at least on the update to the task library.
Pack, in the same field of endeavor, explicitly teaches:
update one or more mission templates of the template library based at least on the update to the task library (Pack, 0133, “collections of behavior nodes built into a tree for a specific capability is ‘assemblage,’ which covers the basic idea that any particular outward ‘capability’ of a remote vehicle is actually a collection of cooperating behaviors”; 0145, “Building Behavioral Applications: One consequence of having more structured and modularized behavior trees is the need to instantiate a sequence of configured behavior trees (i.e., assemblages) over time to accomplish specific goals with the remote vehicle. The concept of using assemblages and assemblage libraries is shown in FIG. 19.”; Refer to the mapping in Claim 8: Pack teaches reusing a newly created/updated behavior in the behavior library (update to the task library), assembling behaviors into an assemblage and storing the assemblage in the assemblage library (mission template), and creating applications using assemblages from the assemblage library. It would be obvious to one skilled in the art that, in order to use the newly created/updated behavior, the assemblage library is updated to include an assemblage that has the newly created/updated behavior).
The reason for combination is the same as for Claim 8.
Regarding Claim 10, the Fang, Yang, and Colledanchise combination renders obvious all the limitations of Claim 1. The combination does not explicitly teach: update one or more mission templates of the template library based at least on a change to a class of one or more tasks agents that define the plurality of task agents.
Pack, in the same field of endeavor, explicitly teaches:
update one or more mission templates of the template library based at least on a change to a class of one or more tasks agents that define the plurality of task agents (Pack, 0167 – 0174, “provide an assemblage library (mission template) class that uses naming conventions to automatically load available assemblages from various directories of the remote vehicle (task agent) and make them available at runtime ... The assemblage is very similar to the concept of a ‘class object’ or a ‘meta-class object’”; Pack teaches that assemblages are agent dependent; different agents require different assemblages for control. Changing the agent also changes the assemblage (class object) for the agent).
Fang (in view of Yang and Colledanchise) and Pack both teach robotic programming using behavior trees and are analogous. It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable likelihood of success, to implement the system of Fang (in view of Yang and Colledanchise) with the details of the classes taught by Pack to achieve the claimed teaching. One of ordinary skill in the art would have been motivated to make this modification in order to implement the behavior tree.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: Paxton, “Instructing collaborative robots with behavior trees and vision”, which teaches using behavior trees to design and perform collaborative robot tasks with a human agent.
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIEN MING CHOU whose telephone number is (571)272-9354. The examiner can normally be reached Monday- Friday 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HITESH PATEL can be reached on 571-270-5442. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIEN MING CHOU/Examiner, Art Unit 3667
/Hitesh Patel/Supervisory Patent Examiner, Art Unit 3667
3/16/26