Prosecution Insights
Last updated: April 19, 2026
Application No. 17/210,150

ANALYZING MACHINE LEARNING CURVES OF SOFTWARE ROBOTS

Non-Final OA (§102, §103)
Filed: Mar 23, 2021
Examiner: TRAN, AMY NMN
Art Unit: 2126
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 5 (Non-Final)
Grant Probability: 36% (At Risk)
OA Rounds: 5-6
To Grant: 5y 2m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 36% (grants only 36% of cases; 10 granted / 28 resolved; -19.3% vs TC avg)
Interview Lift: +47.9% (strong lift; resolved cases with interview)
Avg Prosecution: 5y 2m (typical timeline); 24 currently pending
Total Applications: 52 (career history; across all art units)

Statute-Specific Performance

§101: 32.5% (-7.5% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 6.0% (-34.0% vs TC avg)
§112: 15.6% (-24.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 28 resolved cases.

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12-09-2025 has been entered.

Response to Amendment
Applicant's submission filed 12-09-2025 has been entered. The status of the claims is as follows: Claims 1-20 remain pending in the application. Claims 1, 9, and 16 are amended.

Response to Arguments
Examiner Interview Summary: Examiner notes that the Interview Summary included in the Remarks, pg. 9, does not correspond to the Instant Application. Applicant is requested to review and submit a corrected interview summary pertinent to the present application.

In reference to the rejections under 35 U.S.C. 103: Applicant's arguments, see Remarks pgs. 10-14, filed 12-09-2025, with respect to the rejection(s) of claim(s) under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Chen et al. ("Predicting explorative motor learning using decision-making and motor noise").

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 8-11, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Haynes et al. (US 2019/0025841 A1) (hereafter referred to as “Haynes”) in view of May (US 2019/0147260 A1) and further in view of Gupta et al. (US 2021/0004711 A1) (hereafter referred to as “Gupta”) and Chen et al.
(“Predicting explorative motor learning using decision-making and motor noise”) (hereafter referred to as “Chen”).

Regarding Claim 1, Haynes explicitly discloses:

generating, by the computing device, a best probable learning curve, wherein the best probable learning curve is predictive of future learning by the primary CogBot for the subject; and (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”, [0065]: “in some implementations, the ballistics motion model can perform a forward integration of this Kalman filter model to generate the predicted trajectory based on the current and/or past state of the object.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory(ies) for each object. Stated differently, given predictions about the future locations of proximate objects, the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations.”) [best probable learning curve, i.e., the predicted trajectories of the autonomous vehicle (i.e., the primary CogBot) that best navigates the vehicle]

generating, by the computing device, information regarding a current status of the learning of the primary CogBot on the subject, the current status determined based on deviations of the current learning curve from the primary CogBot with respect to the best probable learning curve. (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system.
The computer system includes one or more processors.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory(ies) for each object… the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations”, [0074]: “As one example, in some implementations, the motion planning system can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle based at least in part on the predicted future locations of the objects. For example, the cost function can describe a cost (e.g., over time) of adhering to a particular candidate motion plan. For example, the cost described by a cost function can increase when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway).”) [the primary CogBot, i.e., the autonomous vehicle; the current status of the learning of the primary CogBot, i.e., when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway); the information regarding a current status of the learning of the primary CogBot, i.e., the cost function; best probable learning curve, i.e., the motion plan generated from predicted trajectories that best navigate the autonomous vehicle relative to other objects]

Haynes fails to disclose:

generating, by a computing device, a graph of historic learning curves based on historic learning data over time for a subject obtained from a primary cognitive software robot (CogBot) and based on historic learning data over time for the subject obtained from at least one secondary CogBot;

determining, by the computing device, a change in topical data related to the subject based on the historic learning data over time for the subject obtained from the at least one secondary CogBot;

recalibrating, by the computing device, the primary CogBot by sensitizing the primary CogBot over the change in topical data related to the subject by adjusting an energy gain at the primary CogBot to adjust for the change in topical data related to the subject;

determining, for multiple time periods, a least distance between a reference point on a current learning curve of the primary CogBot and points on the historic learning curves; and

instructing, by the computing device, a device to display the information regarding the current status of the learning of the primary CogBot on a user interface.

However, Gupta explicitly discloses:

generating, by a computing device, a graph of historic learning curves based on historic learning data over time for a subject obtained from a primary cognitive software robot (CogBot) and based on historic learning data over time for the subject obtained from at least one secondary CogBot; (Gupta, ¶[0077]: “The knowledge graph generator 115 automatically maps the process flow execution results against input variables, values obtained during process flow, the values of "internal factors" and the values of "external factors". For the mapping, the process-specific knowledge graph 120 is enhanced with "entity source", "states", "conditions" and "actions" by analysing the static process definitions (workflow and rules for decisions); and by analysing the historical data (using machine learning algorithms) generated by process execution engines.
Based on the historical data, all the paths traversed by the process engine are analysed and flow pattern is captured.”)

determining, by the computing device, a change in topical data related to the subject based on the historic learning data over time for the subject obtained from the at least one secondary CogBot; (Gupta, ¶[0087]: “Based on the entities identified in the process and depending upon the input variables as required by the decision tree 130, the AI conversation generator 135, such as, Watson Assistant API, is invoked to create intents and entities”, ¶[0084]: “Referring back to the RPA system 100 in FIG. 3, the knowledge graph 120 that is generated is used by a decision tree maker 125 to generate a process execution model 130 (decision tree).”, ¶[0074]: “Further, the process-specific knowledge graph 105 is further enhanced with "internal factors" and "external factors" that influence the outcomes. In one or more embodiments of the present invention, such factors are determined by analyzing historic process data, events and logs, various systems-of-records (databases and documents) like policies, regulation, streaming events, feeds etc.”)
[Examiner’s note: “primary CogBot” is “the AI conversation generator, such as Watson Assistant API”; “secondary CogBot” is the “knowledge graph generated by the RPA system”; “a change in topical data related to the subject” is being interpreted as the “internal and external factors that influence the outcomes”]

recalibrating, by the computing device, the primary CogBot by sensitizing the primary CogBot over the change in topical data related to the subject by adjusting an energy gain at the primary CogBot to adjust for the change in topical data related to the subject; (Gupta, ¶[0087]: “Based on the entities identified in the process and depending upon the input variables as required by the decision tree 130, the AI conversation generator 135, such as, Watson Assistant API, is invoked to create intents and entities”, ¶[0089]: “According to one or more embodiments of the present invention, an adapter is configured to monitor the changes in the process-representation 105. The adapter notifies the knowledge graph generator 115 and the decision tree generator 125 whenever the process representation 105 is modified. Depending upon the change, the decision tree 130 is updated. For instance, if the travel freeze is set to current quarter, all travel unless it is for a strategic customer or strategic deal is rejected. Accordingly, the relevant questions generated by the conversation generator 130 are to determine if the travel is for strategic customer or strategic deal.”, ¶[0090]: “From an execution standpoint the travel freeze factor (identified as external factor herein) is provided the highest weightage in the example scenario described herein. Accordingly, execution of the travel request approval begins based on the travel freeze factor.
Only if the requestor 101 provides responses that identify that the travel is for strategic customer/deal, does the flow proceed with further questions to determine further approval/disapproval factors, else a notification indicating the travel approval rejected is provided.”) [Examiner’s note: “adjusting an energy gain to adjust for the changes in topical data” simply means giving more weight to the new topical data to make the model more reactive and adaptive. Here, Gupta discloses giving the “travel freeze factor” the “highest weightage”, which aligns with adjusting the energy gain at the primary CogBot (the automated conversation generator in this context) to adjust for the changes in topical data. “The change in the topical data related to the subject” is being interpreted as the “travel freeze factor”; “sensitizing the primary CogBot over the change in topical data” is being interpreted as the conversation generator only approving the travel request based on the travel freeze factor.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes and Gupta. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects among the potential goals, and develops multiple trajectories based on the selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. One of ordinary skill would have had motivation to combine Haynes and Gupta because knowledge graphs allow cognitive robotic automation systems to adapt more intelligently and quickly to topic changes by providing a flexible, semantically rich framework for understanding and acting on evolving knowledge.
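For context, the "weightage" reading in the Examiner's note above (energy gain as increased weight on changed topical data, so the bot evaluates that factor first) can be illustrated with a minimal sketch. This is hypothetical illustration only: the names (TopicalFactor, CogBot, recalibrate, next_factor) appear in neither the claims nor the cited references.

```python
# Hypothetical sketch of the examiner's "energy gain as weightage" reading of
# Gupta ¶[0090]: raising one factor's weight so the bot reacts to it first.
# All class and method names here are illustrative, not from the references.
from dataclasses import dataclass, field


@dataclass
class TopicalFactor:
    name: str
    weight: float  # relative influence on the bot's decision flow


@dataclass
class CogBot:
    factors: dict = field(default_factory=dict)

    def recalibrate(self, changed: str, gain: float) -> None:
        """Sensitize the bot to a changed factor by boosting its weight."""
        self.factors[changed].weight *= gain

    def next_factor(self) -> str:
        """Evaluate the highest-weighted factor first (cf. Gupta's travel
        freeze factor receiving the 'highest weightage')."""
        return max(self.factors.values(), key=lambda f: f.weight).name


bot = CogBot({
    "travel_freeze": TopicalFactor("travel_freeze", 1.0),
    "budget": TopicalFactor("budget", 2.0),
})
# Topical data changed (a travel freeze was announced): boost its gain so the
# decision flow begins with that factor, as in Gupta's example scenario.
bot.recalibrate("travel_freeze", gain=5.0)
print(bot.next_factor())  # travel_freeze
```

Under this reading, Gupta's travel-freeze example corresponds to boosting one factor's weight until it is evaluated first, which is how the note equates "highest weightage" with an adjusted energy gain.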
However, Chen explicitly discloses:

determining, for multiple time periods, a least distance between a reference point on a current learning curve of the primary CogBot and points on the historic learning curves (Chen, Pg. 6: “we expected to see participants make larger action changes after receiving lower points and smaller action changes after receiving higher points. Using the α and β parameters from each action, we determined action change, ∇A, between two successive actions:… as Euclidean distance between two points, i.e., [equation image not reproduced]”, Pg. 16, ¶2-3: “Here, we examine the degree of action change on attempt t + 1 after receiving a certain score on attempt t. As mentioned (page 6), the action change was defined as the Euclidean distance between two actions… we considered whether gains and losses are better measured relative to a reference point. Prospect theory suggests that gains and losses are measured relative to a reference point that may shift with recent experience… score that was better than the reference point can be defined as a gain, and a score worse than this reference point a loss.”) [Examiner’s note: a current learning is being interpreted as the action on attempt t+1; a historic learning is being interpreted as the action on attempt t.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes and Chen. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects among the potential goals, and develops multiple trajectories based on the selected goals. Chen teaches predicting explorative motor learning using decision making and motor noise.
One of ordinary skill would have had motivation to combine Haynes and Chen because determining a least distance between a reference point in the current learning and points in the historic learning provides a predictable way to measure similarity or deviation between present and prior learned states. Such a distance-based comparison improves the system’s ability to identify the closest historical pattern, evaluate whether the current learning aligns with prior behavior, and support more accurate updating, classification, or decision-making.

However, May explicitly discloses:

instructing, by the computing device, a device to display the information regarding the current status of the learning of the primary CogBot on a user interface (May, ¶[0004]: “the present technology may be embodied as an application (i.e., an "app") where the application can project the potential current and future paths and locations of objects and notify individuals and other systems that may be impacted, or that are interested in obtaining this information, as detailed further herein.”, ¶[0039]: “FIG. 27 illustrates an example embodiment of a screen for displaying status information about a moving object.”) [Examiner’s note: “the primary CogBot” is being interpreted as “a moving object”; information about the current status, i.e., information about potential current paths of the moving object.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes and May. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects among the potential goals, and develops multiple trajectories based on the selected goals. May teaches systems and methods for moving object predictive locating, reporting and alerting.
One of ordinary skill would have had motivation to combine Haynes and May because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “obvious to try”: choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; and (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.

Regarding Claim 2, the combination of Haynes, Gupta, May, and Chen discloses all the limitations of Claim 1 (as shown in the rejection above). Haynes in view of May and Gupta further discloses:

wherein the computing device utilizes a processor executed efficient recursive filter algorithm to generate the best probable learning curve. (Haynes, [0065]: “in some implementations, the ballistics motion model can perform a forward integration of this Kalman filter model to generate the predicted trajectory based on the current and/or past state of the object.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory(ies) for each object.
Stated differently, given predictions about the future locations of proximate objects, the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations.”) [Examiner’s note: “recursive filter algorithm” is being interpreted as the “Kalman filter” (see Instant Specification ¶[0098]); the best probable learning curve, i.e., the predicted trajectories which generate the best motion plan for the autonomous vehicle]

Regarding Claim 3, the combination of Haynes, Gupta, May, and Chen discloses all the limitations of Claim 1 (as shown in the rejection above). Haynes in view of May and Gupta further discloses:

obtaining, by the computing device, the historic learning data from the primary CogBot and the at least one secondary CogBot; and (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”, [0006]: “The operations include obtaining state data descriptive of at least one current or past state of an object that is perceived by an autonomous vehicle.”, [0024]: “For example, as autonomous vehicles observe, detect, and track objects (e.g., other humanly-operated vehicles) in their environment over time, a significant amount of data can be collected that describes the behavior of such vehicles at various particular locations over time.”) [primary CogBot, i.e., an autonomous vehicle; secondary CogBot, i.e., objects which are other vehicles]

obtaining, by the computing device, current learning data from the primary CogBot, wherein the current status of the learning of the primary CogBot is based on comparing the current learning data from the primary CogBot with the best probable learning curve. (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system.
The computer system includes one or more processors.”, [0074]: “the cost described by a cost function can increase when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway).”) [When the vehicle deviates from a preferred pathway (i.e., the best probable learning curve), it means that the vehicle’s current pathway (i.e., the current learning data) is being compared to the optimal or preferred pathway. The current learning data, i.e., the cost described by a cost function.]

Regarding Claim 8, the combination of Haynes, Gupta, May, and Chen discloses all the limitations of Claim 1 (as shown in the rejection above). Haynes in view of May and Gupta further discloses:

wherein the computing device includes software provided as a service in a cloud environment. (Gupta, [0037]: “One or more embodiments of the present invention can be implemented using a cloud-based computing system.”)

Regarding Claim 9, Haynes explicitly discloses:

A computer program product comprising one or more computer readable storage media having program instructions collectively stored on the one or more computer readable storage media, the program instructions executable to: (Haynes, [0006]: “The autonomous vehicle includes one or more processors and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the one or more processors to perform operations”, [0097]: “The processor 305 is a hardware device for executing hardware instructions or software, particularly those stored in memory 310.
The processor 305 may be a custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the system 300, a semiconductor based microprocessor (in the form of a microchip or chip set), a macroprocessor, or other device for executing instructions”)

obtain historic learning curve data over time for a subject from a primary cognitive software robot (CogBot); (Haynes, [0006]: “The operations include obtaining state data descriptive of at least one current or past state of an object that is perceived by an autonomous vehicle.”) [primary CogBot, i.e., an autonomous vehicle]

obtain historic learning curve data over time for the subject from at least one secondary CogBot; (Haynes, [0006]: “The operations include obtaining state data descriptive of at least one current or past state of an object that is perceived by an autonomous vehicle.”) [secondary CogBot, i.e., objects that are perceived by an autonomous vehicle]

wherein historic learning curves of the graph represent different learning paths taken by the primary CogBot and the at least one secondary CogBot for the subject over time; and (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”, [0024]: “the nominal pathway data can have been generated based on a plurality of historical observations of vehicles or other objects over a period of time. For example, as autonomous vehicles observe, detect, and track objects (e.g., other humanly-operated vehicles) in their environment over time, a significant amount of data can be collected that describes the behavior of such vehicles at various particular locations over time.”, Figure 6A [figure not reproduced], and [0128]: “FIGS. 6A and 6B depict graphical diagrams of an example goal-based prediction process according to example embodiments of the present disclosure. In particular, FIG. 6A depicts an autonomous vehicle 602 on a roadway. The autonomous vehicle 602 perceives additional objects 604 and 608… For example, nominal pathways 612 and 614 can be identified based on map data descriptive of lane information, for example as illustrated at 610.”) [Examiner interprets the learning curves as the pathway data 612, 614 of the graph in FIG. 6A; primary CogBot, i.e., the autonomous vehicle; secondary CogBot, i.e., objects (e.g., other humanly-operated vehicles) in its environment]

generate a best probable learning curve, wherein the best probable learning curve is predictive of future learning by the primary CogBot for the subject. (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”, [0065]: “in some implementations, the ballistics motion model can perform a forward integration of this Kalman filter model to generate the predicted trajectory based on the current and/or past state of the object.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory(ies) for each object.
Stated differently, given predictions about the future locations of proximate objects, the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations.”) [best probable learning curve, i.e., the predicted trajectories of the autonomous vehicle (i.e., the primary CogBot) that best navigates the vehicle]

the current status determined based on deviations of the current learning curve from the primary CogBot with respect to the best probable learning curve (Haynes, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory(ies) for each object… the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations”, [0074]: “As one example, in some implementations, the motion planning system can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle based at least in part on the predicted future locations of the objects. For example, the cost function can describe a cost (e.g., over time) of adhering to a particular candidate motion plan. For example, the cost described by a cost function can increase when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway).”) [the primary CogBot, i.e., the autonomous vehicle; the current status of the learning of the primary CogBot, i.e., when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway); the information regarding a current status of the learning of the primary CogBot, i.e., the cost function; best probable learning curve, i.e., the motion plan generated from predicted trajectories that best navigate the autonomous vehicle relative to other objects]

Haynes fails to disclose:

generating, by a computing device, a graph of historic learning curves based on historic learning data over time for a subject obtained from a primary cognitive software robot (CogBot) and based on historic learning data over time for the subject obtained from at least one secondary CogBot;

determining, by the computing device, a change in topical data related to the subject based on the historic learning data over time for the subject obtained from the at least one secondary CogBot;

recalibrating, by the computing device, the primary CogBot by sensitizing the primary CogBot over the change in topical data related to the subject by adjusting an energy gain at the primary CogBot to adjust for the change in topical data related to the subject;

determining, for multiple time periods, a least distance between a reference point on a current learning curve of the primary CogBot and points on the historic learning curves; and

instructing, by the computing device, a device to display the information regarding the current status of the learning of the primary CogBot on a user interface.

However, Gupta explicitly discloses:

generating, by a computing device, a graph of historic learning curves based on historic learning data over time for a subject obtained from a primary cognitive software
robot (CogBot) and based on historic learning data over time for the subject obtained from at least one secondary CogBot; (Gupta, ¶[0077]: “The knowledge graph generator 115 automatically maps the process flow execution results against input variables, values obtained during process flow, the values of "internal factors" and the values of "external factors". For the mapping, the process-specific knowledge graph 120 is enhanced with "entity source", "states", "conditions" and "actions" by analysing the static process definitions (workflow and rules for decisions); and by analysing the historical data (using machine learning algorithms) generated by process execution engines. Based on the historical data, all the paths traversed by the process engine are analysed and flow pattern is captured.”)

determining, by the computing device, a change in topical data related to the subject based on the historic learning data over time for the subject obtained from the at least one secondary CogBot; (Gupta, ¶[0087]: “Based on the entities identified in the process and depending upon the input variables as required by the decision tree 130, the AI conversation generator 135, such as, Watson Assistant API, is invoked to create intents and entities”, ¶[0084]: “Referring back to the RPA system 100 in FIG. 3, the knowledge graph 120 that is generated is used by a decision tree maker 125 to generate a process execution model 130 (decision tree).”, ¶[0074]: “Further, the process-specific knowledge graph 105 is further enhanced with "internal factors" and "external factors" that influence the outcomes. In one or more embodiments of the present invention, such factors are determined by analyzing historic process data, events and logs, various systems-of-records (databases and documents) like policies, regulation, streaming events, feeds etc.”)
[Examiner’s note: “primary CogBot” is “the AI conversation generator, such as Watson Assistant API”; “secondary CogBot” is the “knowledge graph generated by the RPA system”; “a change in topical data related to the subject” is being interpreted as the “internal and external factors that influence the outcomes”]

recalibrating, by the computing device, the primary CogBot by sensitizing the primary CogBot over the change in topical data related to the subject by adjusting an energy gain at the primary CogBot to adjust for the change in topical data related to the subject; (Gupta, ¶[0087]: “Based on the entities identified in the process and depending upon the input variables as required by the decision tree 130, the AI conversation generator 135, such as, Watson Assistant API, is invoked to create intents and entities”, ¶[0089]: “According to one or more embodiments of the present invention, an adapter is configured to monitor the changes in the process-representation 105. The adapter notifies the knowledge graph generator 115 and the decision tree generator 125 whenever the process representation 105 is modified. Depending upon the change, the decision tree 130 is updated. For instance, if the travel freeze is set to current quarter, all travel unless it is for a strategic customer or strategic deal is rejected. Accordingly, the relevant questions generated by the conversation generator 130 are to determine if the travel is for strategic customer or strategic deal.”, ¶[0090]: “From an execution standpoint the travel freeze factor (identified as external factor herein) is provided the highest weightage in the example scenario described herein. Accordingly, execution of the travel request approval begins based on the travel freeze factor.
Only if the requestor 101 provides responses that identify that the travel is for strategic customer/deal, does the flow proceed with further questions to determine further approval/disapproval factors, else a notification indicating the travel approval rejected is provided.”) [Examiner’s note: “adjusting an energy gain to adjust for the changes in topical data” simply means giving more weight to the new topical data to make the model more reactive and adaptive. Here, Gupta discloses giving the “travel freeze factor” the “highest weightage”, which aligns with adjusting the energy gain at the primary CogBot (the automated conversation generator in this context) to adjust for the changes in topical data. “the change in the topical data related to the subject” is being interpreted as the “travel freeze factor”, and “sensitizing the primary CogBot over the change in topical data” is being interpreted as the conversation generator approving the travel request only based on the travel freeze factor] It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes and Gupta. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. One of ordinary skill would have motivation to combine Haynes and Gupta because knowledge graphs allow cognitive robotic automation systems to adapt more intelligently and quickly to topic changes by providing a flexible, semantically rich framework for understanding and acting on evolving knowledge.
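For illustration only, the examiner's reading of "adjusting an energy gain" as giving the changed topical data the highest weightage (as in Gupta's travel-freeze example) can be sketched as follows. Every name and value in this sketch is hypothetical; it appears in neither Gupta nor the claims at issue.

```python
# Hypothetical sketch of the "energy gain" interpretation: boost the weight
# of a changed topical factor so the decision flow evaluates it first.

def recalibrate_weights(weights, changed_factor, gain=10.0):
    """Return a copy of `weights` with the changed factor given the
    highest weightage (boosted above every other factor)."""
    updated = dict(weights)
    updated[changed_factor] = max(weights.values()) * gain
    return updated

def ordered_factors(weights):
    """Factors in evaluation order, highest weight first."""
    return sorted(weights, key=weights.get, reverse=True)

# Example: once a travel freeze is announced, that factor gates the flow.
weights = {"travel_freeze": 1.0, "strategic_customer": 2.0, "budget": 3.0}
updated = recalibrate_weights(weights, "travel_freeze")
print(ordered_factors(updated)[0])  # travel_freeze is now evaluated first
```

Under this sketch, "sensitizing" the primary CogBot simply means the re-weighted factor dominates the evaluation order, mirroring Gupta's statement that execution "begins based on the travel freeze factor."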
However, Chen explicitly discloses: determining, for multiple time periods, a least distance between a reference point on a current learning curve of the primary CogBot and points on the historic learning curves (Chen, Pg. 6: “we expected to see participants make larger action changes after receiving lower points and smaller action changes after receiving higher points. Using the α and β parameters from each action, we determined action change, ∇A, between two successive actions:… as Euclidean distance between two points, i.e., [equation reproduced as an image in the original record]”, Pg. 16, ¶2-3: “Here, we examine the degree of action change on attempt t + 1 after receiving a certain score on attempt t. As mentioned (page 6), the action change was defined as the Euclidean distance between two actions… we considered whether gains and losses are better measured relative to a reference point. Prospect theory suggests that gains and losses are measured relative to a reference point that may shift with recent experience… score that was better than the reference point can be defined as a gain, and a score worse than this reference point a loss.”) [Examiner’s note: the current learning is being interpreted as the action on attempt t+1, and the historic learning is being interpreted as the action on attempt t]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes and Chen. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Chen teaches predicting explorative motor learning using decision-making and motor noise.
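For illustration only, the action-change measure quoted from Chen above (Euclidean distance between successive (α, β) actions, with gains and losses judged against a reference point) can be sketched as follows. The exact equation appears only as an image in the record, so this is an assumed reconstruction, and every name in it is hypothetical rather than Chen's code.

```python
# Assumed reconstruction of Chen's "action change": Euclidean distance
# between two successive (alpha, beta) actions, plus a reference-point
# gain/loss classification in the spirit of prospect theory.
import math

def action_change(action_t, action_t_plus_1):
    """Euclidean distance between two successive (alpha, beta) actions."""
    (a0, b0), (a1, b1) = action_t, action_t_plus_1
    return math.hypot(a1 - a0, b1 - b0)

def classify_score(score, reference_point):
    """A score above the reference point counts as a gain, otherwise a loss."""
    return "gain" if score > reference_point else "loss"

print(action_change((0.0, 0.0), (3.0, 4.0)))   # 5.0
print(classify_score(80, reference_point=70))  # gain
```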
One of ordinary skill would have motivation to combine Haynes and Chen because determining a least distance between a reference point in the current learning and points in the historic learning provides a predictable way to measure similarity or deviation between present and prior learned states. Such a distance-based comparison improves the system’s ability to identify the closest historical pattern, evaluate whether the current learning aligns with prior behavior, and support more accurate updating, classification, or decision-making. However, May explicitly discloses: instructing, by the computing device, a device to display the information regarding the current status of the learning of the primary CogBot on a user interface (May, ¶[0004]: “the present technology may be embodied as an application (i.e., an "app") where the application can project the potential current and future paths and locations of objects and notify individuals and other systems that may be impacted, or that are interested in obtaining this information, as detailed further herein.”, ¶[0039]: “FIG. 27 illustrates an example embodiment of a screen for displaying status information about a moving object.”) [Examiner’s note: “the primary CogBot” is being interpreted as “a moving object”, and the information about the current status is the information about the potential current paths of the moving object.] It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes and May. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. May teaches systems and methods for moving object predictive locating, reporting and alerting.
One of ordinary skill would have motivation to combine Haynes and May because MPEP 2143 sets forth the Supreme Court rationales for obviousness including: (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “Obvious to try”: choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; (F) Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art. Regarding Claim 10, the combination of Haynes, Gupta and May explicitly discloses all the limitations of Claim 9 (as shown in the rejection above). Haynes in view of May and Gupta further discloses: wherein the program instructions are further executable to utilize Kalman filtering to generate the best probable learning curve. (Haynes, [0065]: “in some implementations, the ballistics motion model can perform a forward integration of this Kalman filter model to generate the predicted trajectory based on the current and/or past state of the object.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory (ies) for each object. Stated differently, given predictions about the future locations of proximate objects, the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations.”) Regarding Claim 11, the combination of Haynes, Gupta and May explicitly discloses all the limitations of Claim 9 (as shown in the rejection above).
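As a side note on the Claim 10 Kalman-filtering limitation above, the "forward integration" Haynes describes can be illustrated in a deliberately simplified constant-velocity form. This is a hypothetical sketch, not Haynes's implementation: it omits the measurement-update and covariance machinery of a full Kalman filter and simply propagates the state forward to produce a predicted trajectory.

```python
# Hypothetical sketch: forward integration of a constant-velocity state,
# analogous to extrapolating a Kalman-filter state estimate with no
# further measurement updates.

def forward_integrate(position, velocity, dt, steps):
    """Propagate x_{k+1} = x_k + v * dt for `steps` steps and return the
    resulting predicted trajectory as a list of (x, y) points."""
    trajectory = []
    x, y = position
    vx, vy = velocity
    for _ in range(steps):
        x, y = x + vx * dt, y + vy * dt
        trajectory.append((x, y))
    return trajectory

path = forward_integrate(position=(0.0, 0.0), velocity=(1.0, 2.0), dt=0.5, steps=3)
print(path)  # [(0.5, 1.0), (1.0, 2.0), (1.5, 3.0)]
```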
Haynes in view of May and Gupta further discloses: wherein the program instructions are further executable to: obtain current learning data from the primary CogBot; and (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory (ies) for each object… the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations”, [0074]: “As one example, in some implementations, the motion planning system can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle based at least in part on the predicted future locations of the objects. For example, the cost function can describe a cost ( e.g., over time) of adhering to a particular candidate motion plan. For example, the cost described by a cost function can increase when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway).”) [the primary CogBot i.e., the autonomous vehicle, the current status of the learning of the primary CogBot i.e., when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway)] generate information regarding a current status of the learning of the primary CogBot based on comparing the current learning data from the primary CogBot with the best probable learning curve. (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. 
The computer system includes one or more processors.”, [0074]: “the cost described by a cost function can increase when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway).”) [When the vehicle deviates from a preferred pathway (i.e., the best probable learning curve), it means that the vehicle’s current pathway (i.e., the current learning data) is being compared to the optimal or the preferred pathway. The current learning data i.e., the cost described by a cost function.] Regarding Claim 16, Haynes explicitly discloses: A system comprising: a processor, a computer readable memory, one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions executable to: (Haynes, [0006]: “The autonomous vehicle includes one or more processors and one or more non-transitory computer-readable media that collectively store instructions that, when executed by the one or more processors, cause the one or more processors to perform operations”) obtain historic learning curve data over time for a subject from a primary cognitive software robot (CogBot); (Haynes, [0024]: “the nominal pathway data can have been generated based on a plurality of historical observations of vehicles or other objects over a period of time. For example, as autonomous vehicles observe, detect, and track objects ( e.g., other humanly-operated vehicles) in their environment over time, a significant amount of data can be collected that describes the behavior of such vehicles at various particular locations over time.”, Figure 6A [reproduced as an image in the original record], and [0128]: “FIGS. 6A and 6B depict graphical diagrams of an example goal-based prediction process according to example embodiments of the present disclosure. In particular, FIG. 6A depicts an autonomous vehicle 602 on a roadway.
The autonomous vehicle 602 perceives additional objects 604 and 608… For example, nominal pathways 612 and 614 can be identified based on map data descriptive of lane information, for example as illustrated at 610.”) [Examiner interprets the learning curves as the pathway data 612, 614 of the graph 6A, primary CogBot i.e., the autonomous vehicles, second CogBot i.e., objects ( e.g., other humanly-operated vehicles) in their environment] obtain historic learning curve data over time for the subject from at least one secondary CogBot; (Haynes, [0024]: “the nominal pathway data can have been generated based on a plurality of historical observations of vehicles or other objects over a period of time. For example, as autonomous vehicles observe, detect, and track objects ( e.g., other humanly-operated vehicles) in their environment over time, a significant amount of data can be collected that describes the behavior of such vehicles at various particular locations over time.”, Figure 6A [reproduced as an image in the original record], and [0128]: “FIGS. 6A and 6B depict graphical diagrams of an example goal-based prediction process according to example embodiments of the present disclosure. In particular, FIG. 6A depicts an autonomous vehicle 602 on a roadway.
The autonomous vehicle 602 perceives additional objects 604 and 608… For example, nominal pathways 612 and 614 can be identified based on map data descriptive of lane information, for example as illustrated at 610.”) [Examiner interprets the learning curves as the pathway data 612, 614 of the graph 6A, primary CogBot i.e., the autonomous vehicles, second CogBot i.e., objects ( e.g., other humanly-operated vehicles) in their environment] wherein historic learning curves of the graph represent different learning paths taken by the primary CogBot and the at least one secondary CogBot for the subject over time; (Haynes, [0024]: “the nominal pathway data can have been generated based on a plurality of historical observations of vehicles or other objects over a period of time. For example, as autonomous vehicles observe, detect, and track objects ( e.g., other humanly-operated vehicles) in their environment over time, a significant amount of data can be collected that describes the behavior of such vehicles at various particular locations over time.”, Figure 6A [reproduced as an image in the original record], and [0128]: “FIGS. 6A and 6B depict graphical diagrams of an example goal-based prediction process according to example embodiments of the present disclosure. In particular, FIG. 6A depicts an autonomous vehicle 602 on a roadway.
The autonomous vehicle 602 perceives additional objects 604 and 608… For example, nominal pathways 612 and 614 can be identified based on map data descriptive of lane information, for example as illustrated at 610.”) [Examiner interprets the learning curves as the pathway data 612, 614 of the graph 6A, primary CogBot i.e., the autonomous vehicles, second CogBot i.e., objects ( e.g., other humanly-operated vehicles) in their environment] generate a best probable learning curve, wherein the best probable learning curve is predictive of future learning by the primary CogBot for the subject; (Haynes, [0065]: “in some implementations, the ballistics motion model can perform a forward integration of this Kalman filter model to generate the predicted trajectory based on the current and/or past state of the object.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory (ies) for each object. Stated differently, given predictions about the future locations of proximate objects, the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations.”) [best probable learning curve i.e., the predicted trajectories of the autonomous vehicle (i.e., the primary CogBot) that best navigates the vehicle] obtain current learning data from the primary CogBot; and (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. 
The computer system includes one or more processors.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory (ies) for each object… the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations”, [0074]: “As one example, in some implementations, the motion planning system can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle based at least in part on the predicted future locations of the objects. For example, the cost function can describe a cost ( e.g., over time) of adhering to a particular candidate motion plan. For example, the cost described by a cost function can increase when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway).”) [the primary CogBot i.e., the autonomous vehicle, the current status of the learning of the primary CogBot i.e., when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway)] generate information regarding a current status of the learning of the primary CogBot on the subject, the current status determined based on deviations of the current learning curve from the primary CogBot with respect to the best probable learning curve (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. 
The computer system includes one or more processors.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory (ies) for each object… the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations”, [0074]: “As one example, in some implementations, the motion planning system can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle based at least in part on the predicted future locations of the objects. For example, the cost function can describe a cost ( e.g., over time) of adhering to a particular candidate motion plan. For example, the cost described by a cost function can increase when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway).”) [the primary CogBot i.e., the autonomous vehicle, the current status of the learning of the primary CogBot i.e., when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway), the information regarding a current status of the learning of the primary CogBot i.e., the cost function, best probable learning curve i.e., motion plan generated from predicted trajectories that best navigate the autonomous vehicle relative to other objects] Haynes fails to disclose: generate a graph of historic learning curves based on historic learning data over time for a subject obtained from a primary cognitive software robot (CogBot) and based on historic learning data over time for the subject obtained from at least one secondary CogBot; determine a change in topical data related to the subject based on the historic learning data over time for the subject obtained from the at least one secondary CogBot; recalibrate the primary CogBot by sensitizing the primary CogBot over the change in topical data 
related to the subject by adjusting an energy gain at the primary CogBot to adjust for the change in topical data related to the subject; determining, for multiple time periods, a least distance between a reference point on a current learning curve of the primary CogBot and points on the historic learning curves; instruct a device to display the information regarding the current status of the learning of the primary CogBot on a user interface. However, Gupta explicitly discloses: generate a graph of historic learning curves based on historic learning data over time for a subject obtained from a primary cognitive software robot (CogBot) and based on historic learning data over time for the subject obtained from at least one secondary CogBot; (Gupta, ¶[0077]: “The knowledge graph generator 115 automatically maps the process flow execution results against input variables, values obtained during process flow, the values of "internal factors" and the values of "external factors". For the mapping, the process-specific knowledge graph 120 is enhanced with "entity source", "states", "conditions" and "actions" by analysing the static process definitions (workflow and rules for decisions); and by analysing the historical data (using machine learning algorithms) generated by process execution engines. Based on the historical data, all the paths traversed by the process engine are analysed and flow pattern is captured.”) determine a change in topical data related to the subject based on the historic learning data over time for the subject obtained from the at least one secondary CogBot; (Gupta, ¶[0087]: “Based on the entities identified in the process and depending upon the input variables as required by the decision tree 130, the AI conversation generator 135, such as, Watson Assistant API, is invoked to create intents and entities”, ¶[0084]: “Referring back to the RPA system 100 in FIG.
3, the knowledge graph 120 that is generated is used by a decision tree maker 125 to generate a process execution model 130 (decision tree).”, ¶[0074]: “Further, the process-specific knowledge graph 105 is further enhanced with "internal factors" and "external factors" that influence the outcomes. In one or more embodiments of the present invention, such factors are determined by analyzing historic process data, events and logs, various systems-of-records (databases and documents) like policies, regulation, streaming events, feeds etc.”) [Examiner’s note: “primary CogBot” is “the AI conversation generator, such as Watson Assistant API”, “secondary CogBot” is the “knowledge graph generated by the RPA system”; “a change in topical data related to the subject” is being interpreted as the “internal and external factors that influence the outcomes”] recalibrate the primary CogBot by sensitizing the primary CogBot over the change in topical data related to the subject by adjusting an energy gain at the primary CogBot to adjust for the change in topical data related to the subject; (Gupta, ¶[0087]: “Based on the entities identified in the process and depending upon the input variables as required by the decision tree 130, the AI conversation generator 135, such as, Watson Assistant API, is invoked to create intents and entities”, ¶[0089]: “According to one or more embodiments of the present invention, an adapter is configured to monitor the changes in the process-representation 105. The adapter notifies the knowledge graph generator 115 and the decision tree generator 125 whenever the process representation 105 is modified. Depending upon the change, the decision tree 130 is updated. For instance, if the travel freeze is set to current quarter, all travel unless it is for a strategic customer or strategic deal is rejected.
Accordingly, the relevant questions generated by the conversation generator 130 are to determine if the travel is for strategic customer or strategic deal.”, ¶[0090]: “From an execution standpoint the travel freeze factor (identified as external factor herein) is provided the highest weightage in the example scenario described herein. Accordingly, execution of the travel request approval begins based on the travel freeze factor. Only if the requestor 101 provides responses that identify that the travel is for strategic customer/deal, does the flow proceed with further questions to determine further approval/disapproval factors, else a notification indicating the travel approval rejected is provided.”) [Examiner’s note: “adjusting an energy gain to adjust for the changes in topical data” simply means giving more weight to the new topical data to make the model more reactive and adaptive. Here, Gupta discloses giving the “travel freeze factor” the “highest weightage”, which aligns with adjusting the energy gain at the primary CogBot (the automated conversation generator in this context) to adjust for the changes in topical data. “the change in the topical data related to the subject” is being interpreted as the “travel freeze factor”, and “sensitizing the primary CogBot over the change in topical data” is being interpreted as the conversation generator approving the travel request only based on the travel freeze factor] It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes and Gupta. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation.
One of ordinary skill would have motivation to combine Haynes and Gupta because knowledge graphs allow cognitive robotic automation systems to adapt more intelligently and quickly to topic changes by providing a flexible, semantically rich framework for understanding and acting on evolving knowledge. However, Chen explicitly discloses: determining, for multiple time periods, a least distance between a reference point on a current learning curve of the primary CogBot and points on the historic learning curves (Chen, Pg. 6: “we expected to see participants make larger action changes after receiving lower points and smaller action changes after receiving higher points. Using the α and β parameters from each action, we determined action change, ∇A, between two successive actions:… as Euclidean distance between two points, i.e., [equation reproduced as an image in the original record]”, Pg. 16, ¶2-3: “Here, we examine the degree of action change on attempt t + 1 after receiving a certain score on attempt t. As mentioned (page 6), the action change was defined as the Euclidean distance between two actions… we considered whether gains and losses are better measured relative to a reference point. Prospect theory suggests that gains and losses are measured relative to a reference point that may shift with recent experience… score that was better than the reference point can be defined as a gain, and a score worse than this reference point a loss.”) [Examiner’s note: the current learning is being interpreted as the action on attempt t+1, and the historic learning is being interpreted as the action on attempt t]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes and Chen.
Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Chen teaches predicting explorative motor learning using decision-making and motor noise. One of ordinary skill would have motivation to combine Haynes and Chen because determining a least distance between a reference point in the current learning and points in the historic learning provides a predictable way to measure similarity or deviation between present and prior learned states. Such a distance-based comparison improves the system’s ability to identify the closest historical pattern, evaluate whether the current learning aligns with prior behavior, and support more accurate updating, classification, or decision-making. However, May explicitly discloses: instruct a device to display the information regarding the current status of the learning of the primary CogBot on a user interface (May, ¶[0004]: “the present technology may be embodied as an application (i.e., an "app") where the application can project the potential current and future paths and locations of objects and notify individuals and other systems that may be impacted, or that are interested in obtaining this information, as detailed further herein.”, ¶[0039]: “FIG. 27 illustrates an example embodiment of a screen for displaying status information about a moving object.”) [Examiner’s note: “the primary CogBot” is being interpreted as “a moving object”, and the information about the current status is the information about the potential current paths of the moving object.] It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes and May.
Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. May teaches systems and methods for moving object predictive locating, reporting and alerting. One of ordinary skill would have motivation to combine Haynes and May because MPEP 2143 sets forth the Supreme Court rationales for obviousness including: (D) Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “Obvious to try”: choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; (F) Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art. Regarding Claim 20, the combination of Haynes, Gupta and May discloses all the limitations of Claim 16 (as shown in the rejection above). Haynes in view of May and Gupta further discloses: wherein the best probable learning curve is generated utilizing Kalman filtering. (Haynes, [0065]: “in some implementations, the ballistics motion model can perform a forward integration of this Kalman filter model to generate the predicted trajectory based on the current and/or past state of the object.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory (ies) for each object. Stated differently, given predictions about the future locations of proximate objects, the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations.”) Claim(s) 4-5, 12-14, 17-19 are rejected under 35 U.S.C.
103 as being unpatentable over Haynes in view of May, Gupta and in further view of Paden et al. (“A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles”) (hereafter referred to as “Paden”). Regarding Claim 4, the combination of Haynes, Gupta and May discloses all the limitations of Claim 1 (as shown in the rejection above). Haynes in view of May and Gupta further discloses: by the computing device (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”) Haynes in view of May and Gupta fails to disclose: identifying, by the computing device, an initial set of beeps in the graph, wherein each beep comprises a homogeneous dimension which is a locus of all intersecting points of the historic learning curves; and selecting, by the computing device, a subset of the initial set of beeps by imposing a global constraint, wherein the best probable learning curve is generated based on the subset of the initial set of beeps. However, Paden explicitly discloses: identifying, by the computing device, an initial set of beeps in the graph, wherein each beep comprises a homogeneous dimension which is a locus of all intersecting points of the historic learning curves; and (Paden, Page 9, Col. 1, Section B, ¶[2] [passage reproduced as an image in the original record], Page 11, Section D, Col. 2, ¶[2] [passage reproduced as an image in the original record], and Page 7, Col. 2, Section IV, ¶[2]: “The set of all allowed configurations of the vehicle is called the free configuration space and denoted Xfree”) [Graph G is denoted given vertices V which includes oi as the origin of the edge (i.e., the initial set of points of graph G). The homogeneous dimension i.e., all edges (including the starting points of edges) lie in a uniform free configuration space Xfree.
Time interval t ϵ [0, T] with set of edges (or path segments) discloses the historical learning curves] selecting, by the computing device, a subset of the initial set of beeps by imposing a global constraint, (Paden, Page 11, Col. 2, Section D, ¶[1]: “In this section, we will discuss the class of methods that attempts to mitigate the problem by performing global search in the discretized version of the path space. These so-called graph search methods discretize the configuration space X of the vehicle and represent it in the form of a graph and then search for a minimum cost path on such a graph.”, Page 16, Col. 1, ¶[2]: “The rapid exploration is achieved by taking a random sample xrnd from the free configuration space and extending the tree in the direction of the random sample. In RRT, the vertex selection function select (V) returns the nearest neighbor to the random sample xrnd according to the given distance metric between the two configurations.”, Page 16, Col. 2, Algorithm 4 [reproduced as an image in the original record]) [Algorithm 4 discloses p(x) as the parent of vertex x, wherein the initial vertex xinit is assigned to the set of vertices V of graph G, and the function select (V) then returns the selected vertex, which means the selected vertices are a subset of the initial set of beeps (i.e., vertices or points). The global constraint here is the nearest distance metric between selected vertices (i.e., beeps or points)] wherein the best probable learning curve is generated based on the subset of the initial set of beeps. (Paden, Page 16, Col.
1, ¶[3]: “As shown in Algorithm 4, the RRT* at every iteration considers a set of vertices that lie in the neighborhood of newly added vertex xnew and a) connects xnew to the vertex in the neighborhood that minimizes the cost of path from xinit to xnew and b) rewires any vertex in the neighborhood to xnew if that results in a lower cost path from xinit to that vertex… It is shown that for such a function, the expected number of vertices in the ball is logarithmic in the size of the tree, which is necessary to ensure that the algorithm almost surely converges to an optimal path while maintaining the same asymptotic complexity as the suboptimal RRT.”) [xnew is a subset of the initial vertices (i.e., beeps or points) (as shown above)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, May, Gupta and Paden. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. May teaches systems and methods for moving object predictive locating, reporting and alerting. Paden teaches motion planning and control techniques for self-driving cars. One of ordinary skill would have been motivated to combine Haynes, May, Gupta and Paden to improve the safety and performance of road transport through information sharing and coordination between individual vehicles. For example, connected vehicle technology has the potential to improve throughput at intersections or prevent formation of traffic shock waves (Paden, Page 2, Col. 2, ¶[3]).

Regarding Claim 5, the combination of Haynes, Gupta and May discloses all the limitations of Claim 1 (as shown in the rejection above).
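For orientation, the RRT behavior quoted from Paden (select (V) returning the nearest neighbor to a random sample xrnd, extend () stepping a fixed distance toward it, and each new vertex recording its parent p(x)) can be sketched in Python. This is a minimal illustrative reconstruction, not code from Paden or any cited reference; the 2-D configuration space, Euclidean metric, step size, and sampling bounds are all assumptions:

```python
import math
import random

def nearest(vertices, x_rnd):
    # select(V): return the existing vertex nearest to the random sample
    return min(vertices, key=lambda v: math.dist(v, x_rnd))

def extend(x_near, x_rnd, step=0.5):
    # extend(): move a fixed step from x_near toward x_rnd
    d = math.dist(x_near, x_rnd)
    if d <= step:
        return x_rnd
    t = step / d
    return (x_near[0] + t * (x_rnd[0] - x_near[0]),
            x_near[1] + t * (x_rnd[1] - x_near[1]))

def rrt(x_init, n_samples=200, bounds=(0.0, 10.0), seed=0):
    rng = random.Random(seed)
    vertices = [x_init]
    parent = {x_init: None}        # p(x): parent of vertex x
    for _ in range(n_samples):
        x_rnd = (rng.uniform(*bounds), rng.uniform(*bounds))
        x_near = nearest(vertices, x_rnd)   # nearest-neighbor selection
        x_new = extend(x_near, x_rnd)       # bounded extension toward sample
        vertices.append(x_new)
        parent[x_new] = x_near              # connect x_new to selected vertex
    return vertices, parent

vertices, parent = rrt((5.0, 5.0))
```

Under these assumptions, every new vertex connects to its nearest existing neighbor, so the parent map forms a tree rooted at x_init, which is the graph structure the rejection maps to the claimed "subset of the initial set of beeps."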
Haynes in view of May and Gupta further discloses: obtaining, by the computing device, current learning data from the primary CogBot; (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory (ies) for each object… the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations”, [0074]: “As one example, in some implementations, the motion planning system can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle based at least in part on the predicted future locations of the objects. For example, the cost function can describe a cost ( e.g., over time) of adhering to a particular candidate motion plan. For example, the cost described by a cost function can increase when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway).”) [the primary CogBot, i.e., the autonomous vehicle; the current status of the learning of the primary CogBot, i.e., when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway)] updating, by the computing device, the best probable learning curve based on the current learning data to generate an updated best probable learning curve; and (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”, [0075]: “Thus, given information about the predicted future locations of objects, the motion planning system can determine a cost of adhering to a particular candidate pathway.
The motion planning system can select or determine a motion plan for the autonomous vehicle based at least in part on the cost function(s). For example, the motion plan that minimizes the cost function can be selected or otherwise determined.”, [0168]: “The model trainer 160 can train the machine-learned models 120 and/or 140 using one or more training or learning algorithms. One example training technique is backwards propagation of errors.”) [Determining the cost function and selecting the motion path which has the lowest cost by using the backwards propagation algorithm are steps of the updating process] by the computing device (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”). Haynes in view of May and Gupta fails to disclose: repeating, by the computing device, the obtaining the current learning data from the primary CogBot and wherein the best probable learning curve is updated iteratively to generate a plurality of updated best probable learning curves over time. However, Paden explicitly discloses: repeating, by the computing device, the obtaining the current learning data from the primary CogBot and wherein the best probable learning curve is updated iteratively to generate a plurality of updated best probable learning curves over time. (Paden, Page 4, Col. 2, Section III, ¶[2]: “Modeling begins with the notion of the vehicle configuration, representing its pose or position in the world.”, Page 11, Col. 2, Section D., ¶[2]: “Further, it is assumed that the initial configuration xinit is a vertex of the graph.”, Page 16, Col. 2, ¶[2]: “In RRT, the vertex selection function select (V) returns the nearest neighbor to the random sample xrnd according to the given distance metric between the two configurations.
The extension function extend () then generates a path in the configuration space by applying a control for a fixed time step that minimizes the distance to xrnd.”, Page 16, Col. 2, Algorithm 4: [reproduced figure: media_image6.png]) [Variable x is denoted as the pose or position of the autonomous vehicle, so the parent vertex xpar here denotes the location or pose (i.e., state) of the parent vertex (i.e., the primary CogBot or primary autonomous vehicle). Algorithm 4 discloses an iteration running in a “while-do” loop. In this loop, function extend () is used to extend the path generation (i.e., learning curve generation) in the configuration space under the condition of minimizing the distance to x (i.e., the updated best learning curves over time), and the section //find best parent iteratively updates the current state xpar of the parent vertex (i.e., the current state or location of the primary autonomous vehicle)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, May, Gupta and Paden. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. May teaches systems and methods for moving object predictive locating, reporting and alerting. Paden teaches motion planning and control techniques for self-driving cars. One of ordinary skill would have been motivated to combine Haynes, May, Gupta and Paden to improve the safety and performance of road transport through information sharing and coordination between individual vehicles.
For example, connected vehicle technology has the potential to improve throughput at intersections or prevent formation of traffic shock waves (Paden, Page 2, Col. 2, ¶[3]).

Regarding Claim 12, the combination of Haynes, Gupta and May discloses all the limitations of Claim 9 (as shown in the rejection above). Haynes in view of May and Gupta fails to disclose: wherein generating the best probable learning curve comprises: identifying an initial set of beeps in the graph, wherein each beep comprises a homogeneous dimension which is a locus of all intersecting points of the historic learning curves; and selecting a subset of the initial set of beeps by imposing a global constraint, wherein the best probable learning curve is generated based on the subset of the initial set of beeps. However, Paden explicitly discloses: wherein generating the best probable learning curve comprises: identifying an initial set of beeps in the graph, wherein each beep comprises a homogeneous dimension which is a locus of all intersecting points of the historic learning curves; and (Paden, Page 9, Col. 1, Section B, ¶[2]: [reproduced figure: media_image3.png]; Page 11, Section D, Col. 2, ¶[2]: [reproduced figure: media_image4.png]; and Page 7, Col. 2, Section IV, ¶[2]: “The set of all allowed configurations of the vehicle is called the free configuration space and denoted Xfree”) [Graph G is denoted given vertices V which includes oi as the origin of the edge (i.e., the initial set of points of graph G). The homogeneous dimension, i.e., all edges (including the starting points of edges), lies in a uniform free configuration space Xfree. Time interval t ∈ [0, T] with the set of edges (or path segments) discloses the historical learning curves] selecting a subset of the initial set of beeps by imposing a global constraint, wherein the best probable learning curve is generated based on the subset of the initial set of beeps. (Paden, Page 11, Col.
2, Section D, ¶[1]: “In this section, we will discuss the class of methods that attempts to mitigate the problem by performing global search in the discretized version of the path space. These so-called graph search methods discretize the configuration space X of the vehicle and represent it in the form of a graph and then search for a minimum cost path on such a graph.”, Page 16, Col. 1, ¶[2]: “The rapid exploration is achieved by taking a random sample xrnd from the free configuration space and extending the tree in the direction of the random sample. In RRT, the vertex selection function select (V) returns the nearest neighbor to the random sample xrnd according to the given distance metric between the two configurations.”, Page 16, Col. 2, Algorithm 4: [reproduced figure: media_image5.png]) [Algorithm 4 discloses p(x) as the parent of vertex x, wherein the initial vertex xinit is assigned to the set of vertices V of graph G; then function select (V) is assigned to the selected vertex x, which means xselected is a subset of the initial set of beeps (i.e., vertices or points). The global constraint here is the nearest distance metric between selected vertices (i.e., beeps or points)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, May, Gupta and Paden. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. May teaches systems and methods for moving object predictive locating, reporting and alerting. Paden teaches motion planning and control techniques for self-driving cars.
One of ordinary skill would have been motivated to combine Haynes, May, Gupta and Paden to improve the safety and performance of road transport through information sharing and coordination between individual vehicles. For example, connected vehicle technology has the potential to improve throughput at intersections or prevent formation of traffic shock waves (Paden, Page 2, Col. 2, ¶[3]).

Regarding Claim 13, the combination of Haynes, Gupta and May discloses all the limitations of Claim 9 (as shown in the rejection above). Haynes in view of May and Gupta further discloses: wherein the program instructions are further executable to: obtain current learning data from the primary CogBot; (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”, [0073]: “The motion planning system can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajectory (ies) for each object… the motion planning system can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations”, [0074]: “As one example, in some implementations, the motion planning system can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle based at least in part on the predicted future locations of the objects. For example, the cost function can describe a cost ( e.g., over time) of adhering to a particular candidate motion plan.
For example, the cost described by a cost function can increase when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway).”) [the primary CogBot, i.e., the autonomous vehicle; the current status of the learning of the primary CogBot, i.e., when the autonomous vehicle strikes another object and/or deviates from a preferred pathway (e.g., a nominal pathway)] update the best probable learning curve based on the current learning data to generate an updated best probable learning curve; and (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”, [0075]: “Thus, given information about the predicted future locations of objects, the motion planning system can determine a cost of adhering to a particular candidate pathway. The motion planning system can select or determine a motion plan for the autonomous vehicle based at least in part on the cost function(s). For example, the motion plan that minimizes the cost function can be selected or otherwise determined.”, [0168]: “The model trainer 160 can train the machine-learned models 120 and/or 140 using one or more training or learning algorithms. One example training technique is backwards propagation of errors.”) [Determining the cost function and selecting the motion path which has the lowest cost by using the backwards propagation algorithm are steps of the updating process] Haynes in view of May and Gupta fails to disclose: repeat the obtaining the current learning data from the primary CogBot and the updating the best probable learning curve, iteratively, to generate a plurality of updated best probable learning curves over time. However, Paden explicitly discloses: repeat the obtaining the current learning data from the primary CogBot and the updating the best probable learning curve, iteratively, to generate a plurality of updated best probable learning curves over time.
(Paden, Page 4, Col. 2, Section III, ¶[2]: “Modeling begins with the notion of the vehicle configuration, representing its pose or position in the world.”, Page 11, Col. 2, Section D., ¶[2]: “Further, it is assumed that the initial configuration xinit is a vertex of the graph.”, Page 16, Col. 2, ¶[2]: “In RRT, the vertex selection function select (V) returns the nearest neighbor to the random sample xrnd according to the given distance metric between the two configurations. The extension function extend () then generates a path in the configuration space by applying a control for a fixed time step that minimizes the distance to xrnd.”, Page 16, Col. 2, Algorithm 4: [reproduced figure: media_image6.png]) [Variable x is denoted as the pose or position of the autonomous vehicle, so the parent vertex xpar here denotes the location or pose (i.e., state) of the parent vertex (i.e., the primary CogBot or primary autonomous vehicle). Algorithm 4 discloses an iteration running in a “while-do” loop. In this loop, function extend () is used to extend the path generation (i.e., learning curve generation) in the configuration space under the condition of minimizing the distance to x (i.e., the updated best learning curves over time), and the section //find best parent iteratively updates the current state xpar of the parent vertex (i.e., the current state or location of the primary autonomous vehicle)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, May, Gupta and Paden. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation.
May teaches systems and methods for moving object predictive locating, reporting and alerting. Paden teaches motion planning and control techniques for self-driving cars. One of ordinary skill would have been motivated to combine Haynes, May, Gupta and Paden to improve the safety and performance of road transport through information sharing and coordination between individual vehicles. For example, connected vehicle technology has the potential to improve throughput at intersections or prevent formation of traffic shock waves (Paden, Page 2, Col. 2, ¶[3]).

Regarding Claim 14, the combination of Haynes, May, Gupta and Paden discloses all the limitations of Claim 13 (as shown in the rejection above). Haynes in view of May, Gupta and Paden further discloses: wherein the program instructions are further executable to: recalibrate the primary CogBot by generating a directed acyclic graph (DAG) based on the plurality of updated best probable learning curves over time, thereby producing a recalibrated primary CogBot; and (Paden, Page 16, Col. 1, ¶[2]: “Rapidly-exploring Random Trees (RRT) [101] have been proposed by La Valle as an efficient method for finding feasible trajectories for high-dimensional non-holonomic systems.”, Page 16, Col. 1, ¶[3]: “As shown in Algorithm 4, the RRT* at every iteration considers a set of vertices that lie in the neighborhood of newly added vertex xnew and a) connects xnew to the vertex in the neighborhood that minimizes the cost of path from xinit to xnew and b) rewires any vertex in the neighborhood to xnew if that results in a lower cost path from xinit to that vertex.”) [The directed acyclic graph (DAG) is the rapidly-exploring random tree (RRT); updating learning curves, i.e., rewiring and connecting new vertices, generates the optimal paths (paths with minimum cost)] deploy the recalibrated primary CogBot via a network to answer questions of the one or more users regarding the subject.
(Gupta, [0116]: “Once the user finalizes the sequence and requests final outcome of the decision-making process ( e.g. clicks on user interface "OK"), the intent, entities, and action is populated to the decision tree 130. The outcome of the decision tree 130 is then output to the requestor 101, at 430 (FIG. 6).”, [0011]: “FIG. 4 illustrates a flowchart of a process execution by a robotic process automation (RPA) system according to one or more embodiments of the present invention;”) [a robotic process automation (RPA) system, i.e., the primary CogBot in this context; the final outcome, i.e., the recalibrated or updated version of the primary CogBot]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, May, Gupta and Paden. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. May teaches systems and methods for moving object predictive locating, reporting and alerting. Paden teaches motion planning and control techniques for self-driving cars. One of ordinary skill would have been motivated to combine Haynes, May, Gupta and Paden to improve the safety and performance of road transport through information sharing and coordination between individual vehicles. For example, connected vehicle technology has the potential to improve throughput at intersections or prevent formation of traffic shock waves (Paden, Page 2, Col. 2, ¶[3]).

Regarding Claim 17, the combination of Haynes, Gupta and May discloses all the limitations of Claim 16 (as shown in the rejection above).
Haynes in view of May and Gupta fails to disclose: wherein generating the best probable learning curve comprises: identifying an initial set of beeps in the graph, wherein each beep comprises a homogeneous dimension which is a locus of all intersecting points of the historic learning curves; and selecting a subset of the initial set of beeps by imposing a global constraint, wherein the best probable learning curve is generated based on the subset of the initial set of beeps. However, Paden explicitly discloses: wherein generating the best probable learning curve comprises: identifying an initial set of beeps in the graph, wherein each beep comprises a homogeneous dimension which is a locus of all intersecting points of the historic learning curves; and (Paden, Page 9, Col. 1, Section B, ¶[2]: [reproduced figure: media_image3.png]; Page 11, Section D, Col. 2, ¶[2]: [reproduced figure: media_image4.png]; and Page 7, Col. 2, Section IV, ¶[2]: “The set of all allowed configurations of the vehicle is called the free configuration space and denoted Xfree”) [Graph G is denoted given vertices V which includes oi as the origin of the edge (i.e., the initial set of points of graph G). The homogeneous dimension, i.e., all edges (including the starting points of edges), lies in a uniform free configuration space Xfree. Time interval t ∈ [0, T] with the set of edges (or path segments) discloses the historical learning curves] selecting a subset of the initial set of beeps by imposing a global constraint, wherein the best probable learning curve is generated based on the subset of the initial set of beeps. (Paden, Page 11, Col. 2, Section D, ¶[1]: “In this section, we will discuss the class of methods that attempts to mitigate the problem by performing global search in the discretized version of the path space.
These so-called graph search methods discretize the configuration space X of the vehicle and represent it in the form of a graph and then search for a minimum cost path on such a graph.”, Page 16, Col. 1, ¶[2]: “The rapid exploration is achieved by taking a random sample xrnd from the free configuration space and extending the tree in the direction of the random sample. In RRT, the vertex selection function select (V) returns the nearest neighbor to the random sample xrnd according to the given distance metric between the two configurations.”, Page 16, Col. 2, Algorithm 4: [reproduced figure: media_image5.png]) [Algorithm 4 discloses p(x) as the parent of vertex x, wherein the initial vertex xinit is assigned to the set of vertices V of graph G; then function select (V) is assigned to the selected vertex x, which means xselected is a subset of the initial set of beeps (i.e., vertices or points). The global constraint here is the nearest distance metric between selected vertices (i.e., beeps or points)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, May, Gupta and Paden. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. May teaches systems and methods for moving object predictive locating, reporting and alerting. Paden teaches motion planning and control techniques for self-driving cars. One of ordinary skill would have been motivated to combine Haynes, May, Gupta and Paden to improve the safety and performance of road transport through information sharing and coordination between individual vehicles.
For example, connected vehicle technology has the potential to improve throughput at intersections or prevent formation of traffic shock waves (Paden, Page 2, Col. 2, ¶[3]).

Regarding Claim 18, the combination of Haynes, Gupta and May discloses all the limitations of Claim 17 (as shown in the rejection above). Haynes in view of May and Gupta further discloses: wherein the program instructions are further executable to: update the best probable learning curve based on the current learning data to generate an updated best probable learning curve; and (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system. The computer system includes one or more processors.”, [0075]: “Thus, given information about the predicted future locations of objects, the motion planning system can determine a cost of adhering to a particular candidate pathway. The motion planning system can select or determine a motion plan for the autonomous vehicle based at least in part on the cost function(s). For example, the motion plan that minimizes the cost function can be selected or otherwise determined.”, [0168]: “The model trainer 160 can train the machine-learned models 120 and/or 140 using one or more training or learning algorithms. One example training technique is backwards propagation of errors.”) [Determining the cost function and selecting the motion path which has the lowest cost by using the backwards propagation algorithm are steps of the updating process] Haynes in view of May and Gupta fails to disclose: repeat the obtaining the current learning data from the primary CogBot and the updating the best probable learning curve, iteratively, to generate a plurality of updated best probable learning curves over time. However, Paden explicitly discloses: repeat the obtaining the current learning data from the primary CogBot and the updating the best probable learning curve, iteratively, to generate a plurality of updated best probable learning curves over time.
(Paden, Page 4, Col. 2, Section III, ¶[2]: “Modeling begins with the notion of the vehicle configuration, representing its pose or position in the world.”, Page 11, Col. 2, Section D., ¶[2]: “Further, it is assumed that the initial configuration xinit is a vertex of the graph.”, Page 16, Col. 2, ¶[2]: “In RRT, the vertex selection function select (V) returns the nearest neighbor to the random sample xrnd according to the given distance metric between the two configurations. The extension function extend () then generates a path in the configuration space by applying a control for a fixed time step that minimizes the distance to xrnd.”, Page 16, Col. 2, Algorithm 4: [reproduced figure: media_image6.png]) [Variable x is denoted as the pose or position of the autonomous vehicle, so the parent vertex xpar here denotes the location or pose (i.e., state) of the parent vertex (i.e., the primary CogBot or primary autonomous vehicle). Algorithm 4 discloses an iteration running in a “while-do” loop. In this loop, function extend () is used to extend the path generation (i.e., learning curve generation) in the configuration space under the condition of minimizing the distance to x (i.e., the updated best learning curves over time), and the section //find best parent iteratively updates the current state xpar of the parent vertex (i.e., the current state or location of the primary autonomous vehicle)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, May, Gupta and Paden. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation.
May teaches systems and methods for moving object predictive locating, reporting and alerting. Paden teaches motion planning and control techniques for self-driving cars. One of ordinary skill would have been motivated to combine Haynes, May, Gupta and Paden to improve the safety and performance of road transport through information sharing and coordination between individual vehicles. For example, connected vehicle technology has the potential to improve throughput at intersections or prevent formation of traffic shock waves (Paden, Page 2, Col. 2, ¶[3]).

Regarding Claim 19, the combination of Haynes, May, Gupta and Paden discloses all the limitations of Claim 18 (as shown in the rejection above). Haynes in view of May, Gupta and Paden further discloses: wherein the program instructions are further executable to: recalibrate the primary CogBot by generating a directed acyclic graph (DAG) based on the plurality of updated best probable learning curves over time, thereby producing a recalibrated primary CogBot; and (Paden, Page 16, Col. 1, ¶[2]: “Rapidly-exploring Random Trees (RRT) [101] have been proposed by La Valle as an efficient method for finding feasible trajectories for high-dimensional non-holonomic systems.”, Page 16, Col. 1, ¶[3]: “As shown in Algorithm 4, the RRT* at every iteration considers a set of vertices that lie in the neighborhood of newly added vertex xnew and a) connects xnew to the vertex in the neighborhood that minimizes the cost of path from xinit to xnew and b) rewires any vertex in the neighborhood to xnew if that results in a lower cost path from xinit to that vertex.”) [The directed acyclic graph (DAG) is the rapidly-exploring random tree (RRT); updating learning curves, i.e., rewiring and connecting new vertices, generates the optimal paths (paths with minimum cost)] provide the recalibrated primary CogBot to one or more users via a network to answer inquiries regarding the subject.
(Gupta, [0116]: “Once the user finalizes the sequence and requests final outcome of the decision-making process ( e.g. clicks on user interface "OK"), the intent, entities, and action is populated to the decision tree 130. The outcome of the decision tree 130 is then output to the requestor 101, at 430 (FIG. 6).”, [0011]: “FIG. 4 illustrates a flowchart of a process execution by a robotic process automation (RPA) system according to one or more embodiments of the present invention;”) [a robotic process automation (RPA) system, i.e., the primary CogBot in this context; the final outcome, i.e., the recalibrated or updated version of the primary CogBot]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, May, Gupta and Paden. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. May teaches systems and methods for moving object predictive locating, reporting and alerting. Paden teaches motion planning and control techniques for self-driving cars. One of ordinary skill would have been motivated to combine Haynes, May, Gupta and Paden to improve the safety and performance of road transport through information sharing and coordination between individual vehicles. For example, connected vehicle technology has the potential to improve throughput at intersections or prevent formation of traffic shock waves (Paden, Page 2, Col. 2, ¶[3]).

Claim(s) 6 is rejected under 35 U.S.C. 103 as being unpatentable over Haynes et al. (US 2019/0025841 A1) (hereafter referred to as “Haynes”) in view of May, Paden, Gupta, and further in view of Ghadirzadeh et al.
(“Self-learning and adaptation in a sensorimotor framework”) (hereafter referred to as “Ghadirzadeh”).

Regarding Claim 6, the combination of Haynes, May, Gupta and Paden discloses all the limitations of Claim 5 (as shown in the rejection above). Haynes in view of May, Gupta and Paden further discloses: recalibrating, by the computing device, the primary CogBot by generating a directed acyclic graph (DAG) based on the plurality of updated best probable learning curves over time, thereby producing a recalibrated primary CogBot; and (Paden, Page 16, Col. 1, ¶[2]: “Rapidly-exploring Random Trees (RRT) [101] have been proposed by La Valle as an efficient method for finding feasible trajectories for high-dimensional non-holonomic systems.”, Page 16, Col. 1, ¶[3]: “As shown in Algorithm 4, the RRT* at every iteration considers a set of vertices that lie in the neighborhood of newly added vertex xnew and a) connects xnew to the vertex in the neighborhood that minimizes the cost of path from xinit to xnew and b) rewires any vertex in the neighborhood to xnew if that results in a lower cost path from xinit to that vertex.”) [The directed acyclic graph (DAG) corresponds to the rapidly-exploring random tree (RRT); updating the learning curves, i.e., rewiring and connecting new vertices, generates the optimal paths (paths with minimum cost).] providing, by the computing device, the recalibrated primary CogBot to one or more users via a network to answer inquiries regarding the subject. (Gupta, [0116]: “Once the user finalizes the sequence and requests final outcome of the decision-making process (e.g., clicks on user interface "OK"), the intent, entities, and action is populated to the decision tree 130. The outcome of the decision tree 130 is then output to the requestor 101, at 430 (FIG. 6).”, [0011]: “FIG. 4 illustrates a flowchart of a process execution by a robotic process automation (RPA) system according to one or more embodiments of the present invention;”) [a robotic process automation (RPA) system, i.e., the primary CogBot in this context; the final outcome, i.e., the recalibrated or updated version of the primary CogBot]

Haynes in view of May, Gupta and Paden fails to disclose: training, by the computing device, a tertiary CogBot based on the recalibrated primary CogBot. However, Ghadirzadeh explicitly discloses: training, by the computing device, a tertiary CogBot based on the recalibrated primary CogBot (Ghadirzadeh, Page 3, Col. 2, Section 3: “The forward model is initially trained by a few randomly generated training samples. In each subsequent iteration a new action-observation pair is available and a prediction is performed. A poor prediction could be caused by either the lack of a sufficient amount of training data or by a sudden change in the environment not captured by the current model. In the former case, the model should be updated while in the latter case it needs to be adapted… If the query input is close to a training point but the prediction is poor the model has to adapt and replace the interfering data point with the new data sample. Regardless of which, the current action-observation pair will be added to the training data.”) [Examiner’s note: The highlight discloses the process of recalibrating (i.e., updating or adapting the model) based on new data or changing conditions. This continuous updating and adaptation of the model aligns with the concept of using a recalibrated primary system to train a tertiary system. The “tertiary CogBot” is being interpreted as the model being incrementally updated with new data to improve its predictions.]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, May, Gupta, Paden and Ghadirzadeh.
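The update-versus-adapt rule quoted above from Ghadirzadeh can be sketched as follows. The sketch is purely illustrative, using a hypothetical one-nearest-neighbor forward model (`OneNNForwardModel`, `incorporate_sample`, and the tolerance parameters are not from the cited reference).

```python
class OneNNForwardModel:
    """Minimal 1-nearest-neighbor forward model, for illustration only."""
    def __init__(self):
        self.data = []  # list of (action, observation) pairs

    def nearest(self, x):
        return min(self.data, key=lambda p: abs(p[0] - x))

    def predict(self, x):
        return self.nearest(x)[1]

def incorporate_sample(model, x, y, error_tol=0.5, proximity_tol=0.5):
    """A poor prediction near existing data suggests the environment changed,
    so the interfering point is replaced (adapt); a poor prediction far from
    the data suggests insufficient training data, so the pair is simply added
    (update). Either way the new pair joins the training data."""
    if not model.data:
        model.data.append((x, y))
        return
    nearest = model.nearest(x)
    poor = abs(model.predict(x) - y) > error_tol
    if poor and abs(nearest[0] - x) < proximity_tol:
        model.data.remove(nearest)  # adapt: replace the interfering point
    model.data.append((x, y))       # update: add the new pair
```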
Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. May teaches systems and methods for moving object predictive locating, reporting and alerting. Paden teaches motion planning and control techniques for self-driving cars. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. Ghadirzadeh teaches a general framework to autonomously achieve a task, where autonomy is acquired by learning sensorimotor patterns of a robot while it is interacting with its environment. One of ordinary skill would have been motivated to combine Haynes, Gupta, Paden, May and Ghadirzadeh because recalibrating the primary CogBot with updated data and new observations allows the model to better handle real-world conditions or unexpected changes (Ghadirzadeh, Page 3, Col. 2, Section 3).

Claim(s) 7 is rejected under 35 U.S.C. 103 as being unpatentable over Haynes in view of May, Paden, Gupta, Ghadirzadeh and further in view of Shalev-Shwartz (“Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving”) (hereafter referred to as “Shalev-Shwartz”).

Regarding Claim 7, the combination of Haynes, Paden, May, Ghadirzadeh and Gupta discloses all the limitations of Claim 6 (as shown in the rejection above). Haynes in view of May, Paden, Ghadirzadeh and Gupta further discloses: by the computing device (Haynes, [0005]: “One aspect of the present disclosure is directed to a computer system.
The computer system includes one or more processors”) periodically evaluating, by an analytics server, the status of maturity of the primary CogBot’s learning with respect to the best probable learning curve (Haynes, ¶[0090]: “The motion planning system 105 can determine a motion plan for the autonomous vehicle based at least in part on the predicted trajector(ies) for each object. Stated differently, given predictions about the future locations of proximate objects, the motion planning system 105 can determine a motion plan for the autonomous vehicle that best navigates the vehicle relative to the objects at their future locations.”, ¶[0142]: “Similarly to the goal scoring process, the score generated by the trajectory scoring model 314 for each predicted trajectory can be compared to a threshold score. In some implementations, each trajectory that is found to be satisfactory (e.g., receives a score higher than the threshold score) can be used (e.g., passed on to the motion planning system), as shown at 318. Alternatively, a certain number of the highest scoring trajectories can be used at 318.”) [Examiner’s note: Haynes discloses the motion planning system evaluating different predicted trajectories and assigning scores to them based on how well they allow the vehicle to navigate relative to future object locations. This scoring process parallels the assessment of a learning curve (i.e., the trajectories), as each trajectory’s score reflects its alignment with the “best probable” path, akin to evaluating how well a learning algorithm’s performance matches an ideal curve. Here, the trajectories that score above a threshold are considered “mature”.]
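The threshold-or-top-k trajectory selection Haynes describes in ¶[0142] can be sketched as follows. This is an illustrative sketch only; `select_trajectories` and its parameters are hypothetical names, not code from Haynes.

```python
def select_trajectories(scored, threshold, top_k=3):
    """scored: list of (trajectory, score) pairs.
    Keep every trajectory scoring above the threshold; if none qualifies,
    fall back to a fixed number of the highest-scoring trajectories."""
    passing = [t for t, s in scored if s > threshold]
    if passing:
        return passing
    # Alternatively, use a certain number of the highest scoring trajectories.
    ranked = sorted(scored, key=lambda ts: ts[1], reverse=True)
    return [t for t, _ in ranked[:top_k]]
```

Under the Examiner's mapping, the trajectories returned here are the "mature" ones passed on to the motion planning system.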
recalibrating, by the computing device, the primary CogBot based on the periodically evaluating (Haynes, ¶[0091]: “As one example, in some implementations, the motion planning system 105 can determine a cost function for each of one or more candidate motion plans for the autonomous vehicle 10 based at least in part on the predicted future locations of the objects.”, ¶[0092]: “The motion planning system 105 can select or determine a motion plan for the autonomous vehicle 10 based at least in part on the cost function(s). For example, the motion plan that minimizes the cost function can be selected or otherwise determined. The motion planning system 105 can provide the selected motion plan to a vehicle controller 106 that controls one or more vehicle controls 107 (e.g., actuators that control gas flow, steering, braking, etc.) to execute the selected motion plan.”) [Examiner’s note: The process of the motion planning system selecting the route that minimizes the cost function aligns with recalibrating the autonomous vehicle (i.e., the primary CogBot) based on ongoing evaluations to optimize its route.]

Haynes in view of Paden, May, Ghadirzadeh and Gupta fails to disclose: providing, by the computing device, information regarding a status of maturity of the primary CogBot's learning based on a gradient of the DAG. However, Shalev-Shwartz explicitly discloses: providing, by the computing device, information regarding a status of maturity of the primary CogBot's learning based on a gradient of the DAG. (Shalev-Shwartz, Page 7, ¶[2]: [equation image omitted]; Page 8, ¶[2]: “An options graph represents a hierarchical set of decisions organized as a Directed Acyclic Graph (DAG).
There is a special node called the “root” of the graph.”, Page 7, Section 5, ¶[1]: “We saw that through RL alone a system complying with functional safety will suffer a very high and unwieldy variance on the reward R(s) and this can be fixed by splitting the problem formulation into a mapping from (agnostic) state space to Desires using policy gradient iterations followed by a mapping to the actual trajectory which does not involve learning”) [primary CogBot, i.e., host vehicle; status of maturity, i.e., reaching desired states (e.g., speed or position)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, Paden, May, Gupta, Ghadirzadeh and Shalev-Shwartz. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. May teaches systems and methods for moving object predictive locating, reporting and alerting. Paden teaches motion planning and control techniques for self-driving cars. Ghadirzadeh teaches a general framework to autonomously achieve a task, where autonomy is acquired by learning sensorimotor patterns of a robot while it is interacting with its environment. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. Shalev-Shwartz teaches a safe, multi-agent, reinforcement learning method for autonomous driving vehicles.
One of ordinary skill would have been motivated to combine Haynes, Paden, May, Gupta, Ghadirzadeh and Shalev-Shwartz because the graph provides the immediate benefit of interpretable results, and its decomposable structure can be relied on to reduce the variance of the policy gradient estimator (Shalev-Shwartz, Page 8, ¶[4]).

Claim(s) 15 is rejected under 35 U.S.C. 103 as being unpatentable over Haynes in view of May, Paden, Gupta, and further in view of Shalev-Shwartz (“Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving”) (hereafter referred to as “Shalev-Shwartz”).

Regarding Claim 15, the combination of Haynes, May, Paden and Gupta discloses all the limitations of Claim 14 (as shown in the rejection above). Haynes in view of May, Paden and Gupta fails to disclose: wherein the program instructions are further executable to provide information regarding a status of maturity of the primary CogBot's learning based on a gradient of the DAG. However, Shalev-Shwartz explicitly discloses: wherein the program instructions are further executable to provide information regarding a status of maturity of the primary CogBot's learning based on a gradient of the DAG. (Shalev-Shwartz, Page 7, ¶[2]: [equation image omitted]; Page 8, ¶[2]: “An options graph represents a hierarchical set of decisions organized as a Directed Acyclic Graph (DAG).
There is a special node called the “root” of the graph.”, Page 7, Section 5, ¶[1]: “We saw that through RL alone a system complying with functional safety will suffer a very high and unwieldy variance on the reward R(s) and this can be fixed by splitting the problem formulation into a mapping from (agnostic) state space to Desires using policy gradient iterations followed by a mapping to the actual trajectory which does not involve learning”) [primary CogBot, i.e., host vehicle; status of maturity, i.e., reaching desired states (e.g., speed or position)]

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Haynes, Paden, May, Gupta and Shalev-Shwartz. Haynes teaches an autonomous vehicle which includes a prediction system that, for each object perceived by the autonomous vehicle, generates many potential goals, selects the potential goals and develops multiple trajectories based on selected goals. Paden teaches motion planning and control techniques for self-driving cars. May teaches systems and methods for moving object predictive locating, reporting and alerting. Gupta teaches generating knowledge graphs to adapt to changes in topical data in cognitive robotic process automation. Shalev-Shwartz teaches a safe, multi-agent, reinforcement learning method for autonomous driving vehicles. One of ordinary skill would have been motivated to combine Haynes, Paden, May, Gupta and Shalev-Shwartz because the graph provides the immediate benefit of interpretable results, and its decomposable structure can be relied on to reduce the variance of the policy gradient estimator (Shalev-Shwartz, Page 8, ¶[4]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMY TRAN whose telephone number is (571) 270-0693.
The examiner can normally be reached Monday - Friday, 7:30 am - 5:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMY TRAN/
Examiner, Art Unit 2126

/DAVID YI/
Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

Mar 23, 2021
Application Filed
May 02, 2024
Non-Final Rejection — §102, §103
Jul 25, 2024
Applicant Interview (Telephonic)
Jul 25, 2024
Examiner Interview Summary
Aug 20, 2024
Response Filed
Nov 09, 2024
Final Rejection — §102, §103
Jan 02, 2025
Interview Requested
Jan 16, 2025
Examiner Interview Summary
Jan 16, 2025
Applicant Interview (Telephonic)
Jan 21, 2025
Response after Non-Final Action
Feb 07, 2025
Request for Continued Examination
Feb 11, 2025
Response after Non-Final Action
Mar 22, 2025
Non-Final Rejection — §102, §103
Jun 26, 2025
Applicant Interview (Telephonic)
Jun 26, 2025
Examiner Interview Summary
Jul 01, 2025
Response Filed
Oct 10, 2025
Final Rejection — §102, §103
Nov 25, 2025
Examiner Interview Summary
Nov 25, 2025
Applicant Interview (Telephonic)
Dec 09, 2025
Response after Non-Final Action
Feb 20, 2026
Request for Continued Examination
Mar 04, 2026
Response after Non-Final Action
Mar 09, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602582
DYNAMIC DISTRIBUTED TRAINING OF MACHINE LEARNING MODELS
2y 5m to grant Granted Apr 14, 2026
Patent 12468932
IDENTIFYING RELATED MESSAGES IN A NATURAL LANGUAGE INTERACTION
2y 5m to grant Granted Nov 11, 2025
Patent 12462185
SCENE GRAMMAR BASED REINFORCEMENT LEARNING IN AGENT TRAINING
2y 5m to grant Granted Nov 04, 2025
Patent 12423589
TRAINING DECISION TREE-BASED PREDICTIVE MODELS
2y 5m to grant Granted Sep 23, 2025
Patent 12288074
GENERATING AND PROVIDING PROPOSED DIGITAL ACTIONS IN HIGH-DIMENSIONAL ACTION SPACES USING REINFORCEMENT LEARNING MODELS
2y 5m to grant Granted Apr 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
36%
Grant Probability
84%
With Interview (+47.9%)
5y 2m
Median Time to Grant
High
PTA Risk
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
