Prosecution Insights
Last updated: April 19, 2026
Application No. 18/323,027

CAUSAL GRAPH CHAIN REASONING PREDICTIONS

Non-Final OA: §101, §103, §112
Filed: May 24, 2023
Examiner: CAIADO, ANTONIO J
Art Unit: 2164
Tech Center: 2100 — Computer Architecture & Software
Assignee: Honda Motor Co. Ltd.
OA Round: 5 (Non-Final)

Grant Probability: 69% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (130 granted / 188 resolved; +14.1% vs TC avg; above average)
Interview Lift: +49.9% (strong; from resolved cases with interview)
Typical Timeline: 3y 4m average prosecution; 23 applications currently pending
Career History: 211 total applications across all art units
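The headline examiner metrics above are simple ratios; the sketch below reproduces the arithmetic. The with/without-interview split used to exercise the lift helper is hypothetical (chosen only so the subtotals sum to the reported 130/188), since the dashboard reports only the aggregate lift.

```python
# Career allowance rate: granted / resolved, from the figures above.
granted, resolved = 130, 188
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # Career allow rate: 69.1%

def interview_lift(granted_with, total_with, granted_without, total_without):
    """Allowance rate with interview minus allowance rate without."""
    return granted_with / total_with - granted_without / total_without

# Hypothetical split purely for illustration (45+85 = 130, 50+138 = 188).
print(f"Lift: {interview_lift(45, 50, 85, 138):+.1%}")  # Lift: +28.4%
```

Note that a lift computed this way is sensitive to the (unreported) interview counts, which is why the sketch labels its split as hypothetical rather than deriving the +49.9% figure.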

Statute-Specific Performance

§101: 30.1% (-9.9% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§102: 3.9% (-36.1% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)

Tech Center average is an estimate • Based on career data from 188 resolved cases
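Each statute row pairs an examiner-specific rate with its percentage-point delta against the Tech Center average, so subtracting the delta recovers the implied TC baseline, which works out to 40.0% for every row above. A quick check, assuming the deltas are percentage-point differences as reported:

```python
# Examiner rate per statute and its delta vs. the Tech Center average,
# copied from the panel above.
rows = {
    "§101": (30.1, -9.9),
    "§103": (50.5, +10.5),
    "§102": (3.9, -36.1),
    "§112": (13.0, -27.0),
}

# Implied TC baseline: examiner rate minus delta.
for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}% | implied TC avg {tc_avg:.1f}%")
# Every row implies the same TC baseline: 40.0%.
```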

Office Action

§101 §103 §112
DETAILED ACTION

1. Claims 1-3, 6-8, 10-11, 14-17 and 20 are pending in this application.

Notice of Pre-AIA or AIA Status

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. §102 and §103 (or as subject to pre-AIA 35 U.S.C. §102 and §103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Amendment

3. This office action is in response to applicant’s amendment filed on 01/22/2026 in response to the advisory action mailed on 01/14/2026. Claims 2-3, 7 and 15-16 remain as originally filed. Claims 6, 8, 14 and 20 have been previously presented. Claims 1, 10-11 and 17 have been amended. Claims 4-5, 9, 12-13 and 18-19 have been cancelled. The amendment has been entered.

Response to Arguments

4. Applicant's arguments, filed on 01/22/2026, with respect to the rejection of claims 1-3, 6-8, 10-11, 14-17 and 20 under 35 U.S.C. §103 (Applicant’s arguments, pages 6-13), have been fully considered but are moot. The Examiner no longer relies on the prior art of Fujino et al. (US 20240317264 A1) to teach any claimed limitation. Applicant's arguments, filed on 01/22/2026, with respect to the rejection of claims 1-3, 6-8, 10-11, 14-17 and 20 under 35 U.S.C. §101 as directed to an abstract idea (mental process) (Applicant’s arguments, pages 9-12), have been fully considered but are not persuasive. Respectfully, the examiner disagrees; see the clarification below. The rejection is based on the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG).
Regarding the Applicant’s argument that “The claims specifically require implementing, via a vehicle system and an ego-vehicle, the action generated by the processor, wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment”: as stated, the claims merely require/invoke a vehicle system and an ego-vehicle. This is an example of a claim that invokes computers or other machinery merely as a tool to perform an existing process. See MPEP 2106.05(f)(2). It is also noted that “Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer.” See MPEP 2106.04(III) – Mental Processes. The warning element is merely an instruction used to implement the abstract idea. A generically claimed “warning” exemplifies a claim limitation with broad applicability across many fields of endeavor; such claims often fail to provide the meaningful limitations necessary to integrate a judicial exception into a practical application or to amount to significantly more. See MPEP 2106.05(f)(3).

Regarding the Applicant’s argument that “a human cannot mentally implement an action for an ego vehicle”: the step of the claims reciting “implementing, via the vehicle system and the ego-vehicle, the action generated by the processor” is an example of merely invoking a computer to implement an abstract idea. The claims must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. See MPEP § 2106.05(f)(2). It is also noted that “Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer.” See MPEP 2106.04(III) – Mental Processes. The “ego-vehicle” is being used as a tool to implement the abstract idea. See MPEP § 2106.05(f)(2).
Regarding the Applicant’s argument that “the claims improve the technical field of predictions by enabling generation of predictions using causal graphs indicative of causal relationships between two or more of the ego-vehicle and one or more of the agents”: the generated predictions are too broad and do not contain any detail on how they are produced. They are generic and do not show any improvement in calculating predictions for ego vehicles. See MPEP § 2106.05(f)(1) – “Whether the claim recites only the idea of a solution or outcome i.e., the claim fails to recite details of how a solution to a problem is accomplished.”

The Applicant’s argument regarding Step 2B is very vague. Although the Applicant is making an argument regarding the prior art used in the rejection under 35 U.S.C. § 103 (obviousness), it is noted that the Examiner does not rely on that prior art to reject the claims under 35 U.S.C. § 101 as directed to an abstract idea (mental process). For the above reasons, the applicant's argument is not persuasive. Therefore, the rejection of claims 1-3, 6-8, 10-11, 14-17, and 20 under 35 U.S.C. § 101 as directed to an abstract idea (mental process) is hereby upheld.

Claim Rejections - 35 USC § 112

5. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 17 recites the limitation “generating an action for the ego-vehicle based on the intention prediction or the trajectory prediction for each participant within the operating environment” in lines 15-16.
There is insufficient antecedent basis for the limitation elements “the intention prediction or the trajectory prediction” in the claim. These limitation elements should be recited as “an intention prediction or a trajectory prediction” when they are newly introduced into the claim. Claim 20 is rejected for incorporating the deficiencies of parent claim 17.

Claim Rejections - 35 USC § 101

6. 35 U.S.C. §101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 6-8, 10-11, 14-17, and 20 are rejected under 35 U.S.C. §101 because the claimed invention is directed to an abstract idea (Mental Process) without significantly more. The claims similarly recite steps to manage actions for an ego-vehicle based on the prediction for each participant within an operating environment. The following is an analysis based on the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG).

Step 1, Statutory Category? Claims 1-3, 6-8, 10, 17 and 20 are directed to a system. Claims 11 and 14-16 are directed to a method. Therefore, claims 1-3, 6-8, 10-11, 14-17 and 20 fall into at least one of the four statutory categories.

Step 2A, Prong I: Judicial Exception Recited? The examiner submits that the foregoing claim limitations constitute a “Mental Process”, as the claims cover performance of the limitations in the human mind, given the broadest reasonable interpretation.
As per claims 1, 11 and 17, the claims similarly recite the limitations of:

“generating an acyclic causal graph comprising a plurality of nodes and edges connecting respective two nodes of the plurality of nodes, wherein each node indicates one of participants within an operating environment including the ego-vehicle, one or more agents, and one or more potential obstacles, and each edge is indicative of causal relationships between two or more of the ego-vehicle and one or more of the agents within the operating environment;”

However, a human can mentally analyze information and create an acyclic causal graph. An acyclic causal graph is a type of graph that shows causal relationships between variables without any loop or cycle. Humans can mentally visualize simple graphs with a few nodes and edges. The role of participants herein is simply labeling the nodes of the graph, which is used as a mere element to implement the mental process of creating the graph. The causal relationships herein simply identify the connection between one node and another, which is used as a mere element to implement the mental process of creating the graph. Besides that, a human can observe a graph and define relationships between its elements based on simple observations and judgments. There is nothing so complex in the limitation that could not be done in the human mind.

“generating a prediction for each participant within the operating environment based on a topological sort of the plurality of nodes of the acyclic causal graph;”

A human can also observe data organized in a topological order and use the observed data to mentally calculate a prediction. The topological sort of the plurality of nodes and edges of the acyclic causal graph is a mere element being used to implement the abstract idea herein. There is nothing so complex in the limitation that could not be done in the human mind.
“generating an action for the ego-vehicle based on the prediction for each participant within the operating environment, wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment;”

A human can generate actions based on previously calculated predictions. A human can observe the calculated prediction and make a judgment using their mind, thereby defining an appropriate action. There is nothing so complex in the limitation that could not be done in the human mind.

As per dependent claim 2, the claim recites the limitation of: “wherein one or more of the agents is another vehicle, a bicycle, or a motorcycle.” The one or more of the agents recited above are merely components used for the mental steps recited in claim 1.

As per dependent claim 3, the claim recites the limitation of: “wherein one or more of the potential obstacles is another vehicle, a bicycle, a motorcycle, a parked vehicle, a traffic sign, a pedestrian, an intersection, or a road feature.” The one or more of the potential obstacles recited above are merely components used for the mental steps recited in claim 1.

As per dependent claims 6, 14 and 20, the claims recite the limitation of: “wherein the causal relationship is a leader-follower relationship, a trajectory-dependency relationship, or a collision relationship.” Humans are able to define the relationships between elements in a causal graph through observation and judgment. A human can also calculate relationships between elements in a causal graph by using a pen and paper, see MPEP 2106.04(a)(2)(III). There is nothing so complex in the limitation that could not be done in the human mind.
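The graph-plus-topological-sort pattern at issue in the claims has a compact standard-library expression. The sketch below uses Python's graphlib with made-up participants, edges, and a placeholder "prediction" (all assumptions for illustration, not the claimed implementation), purely to show why an acyclic graph admits the per-participant ordering the claims rely on:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Illustrative acyclic causal graph: each key maps to the set of
# participants it causally depends on. Participants and edges here are
# assumptions for illustration only.
causal_graph = {
    "ego-vehicle": {"lead-vehicle", "pedestrian"},  # ego reacts to both
    "lead-vehicle": {"pedestrian"},                 # leader-follower edge
    "pedestrian": set(),                            # no upstream causes
}

# Topological sort: every participant appears only after everything it
# causally depends on, so upstream predictions exist when needed.
order = list(TopologicalSorter(causal_graph).static_order())

predictions = {}
for participant in order:
    # Placeholder "prediction": just records the upstream predictions
    # that would feed a real intention/trajectory model.
    predictions[participant] = {p: predictions[p] for p in causal_graph[participant]}

print(order)  # ['pedestrian', 'lead-vehicle', 'ego-vehicle']
```

`TopologicalSorter` raises `graphlib.CycleError` when the graph contains a cycle, which is the practical reason the claimed graph must be acyclic: no topological ordering exists otherwise.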
As per dependent claims 7 and 15, the claims recite the limitation of: “wherein the prediction for each participant is an intention prediction or a trajectory prediction.” A human can generate a prediction for each participant illustrated in a causal graph by using a pen and paper, see MPEP 2106.04(a)(2)(III). Humans can also calculate intention predictions or trajectory predictions for a participant represented in a causal graph by observing and making judgments about the information associated with each element. There is nothing so complex in the limitation that could not be done in the human mind.

As per dependent claims 8 and 16, the claims recite the limitation of: “wherein the generating the prediction for each participant within the operating environment is based on a topological sort of the causal graph.” The topological sort of the causal graph recited above is merely a component used for the mental steps recited in claims 1 and 11.

As per dependent claim 10, the claim recites the limitation of: “wherein the action includes a driving maneuver.” A human can decide on an action to take while operating a vehicle based on observations and judgments. Humans can make decisions about vehicle operations by observing and judging the situation. For example, the driving maneuver mentioned herein could be turning around the block. There is nothing so complex in the limitation that could not be done in the human mind.

Accordingly, claims 1-3, 6-11, 14-17 and 20 recite at least one abstract idea.

Step 2A, Prong II: Integrated into a Practical Application? The claims recite the following additional limitations/elements:

As per claims 1, 11 and 17, the claims recite the additional elements of: “an ego-vehicle; and a vehicle system.” This element is an example of mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). The ego-vehicle herein is nothing more than a device.
The vehicle system is nothing more than logical procedures used as a tool in the ego-vehicle, which can be considered a device. Specifically, the additional elements of the limitations invoke computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not provide improvements to the functioning of a computer or to any other technology or technical field, and does not integrate a judicial exception into a practical application.

As per claims 1, 11 and 17, the claims recite the additional limitation of: “implementing, via the vehicle system and the ego-vehicle, the action generated by the processor.” This limitation is an example of mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). Specifically, the additional elements of the limitations invoke computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not provide improvements to the functioning of a computer or to any other technology or technical field, and does not integrate a judicial exception into a practical application.
As per claims 1 and 17, the claims recite the additional elements of: “a memory storing one or more instructions; and a processor executing one or more of the instructions stored on the memory.” This element is an example of mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea (see MPEP § 2106.05(f)). Specifically, the additional elements of the limitations invoke computers or other machinery merely as a tool to perform an existing process. Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general-purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not provide improvements to the functioning of a computer or to any other technology or technical field, and does not integrate a judicial exception into a practical application.

As per claims 1, 11 and 17, the claims recite the additional limitation of: “wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment;” The warning element is merely an instruction used to implement the abstract idea. A generically claimed “warning” exemplifies a claim limitation with broad applicability across many fields of endeavor; such claims often fail to provide the meaningful limitations necessary to integrate a judicial exception into a practical application or to amount to significantly more. See MPEP 2106.05(f)(3).

Therefore, claims 1-3, 6-8, 10-11, 14-17 and 20 do not integrate the recited abstract ideas into a practical application.

Step 2B: Claim provides an Inventive Concept?
When considered individually or in combination, the additional limitations/elements of claims 1-3, 6-8, 10-11, 14-17 and 20 do not amount to significantly more than the judicial exception for the same reasons discussed above as to why they do not make improvements to the functioning of a computer or to any other technology or technical field, and do not integrate a judicial exception into a practical application. The additional limitations/elements outlined in Step 2A perform functions designed merely to accomplish execution of the abstract ideas. Although the conclusion of whether a claim is eligible at Step 2B requires that all relevant considerations be evaluated, most of these considerations were already evaluated in Step 2A Prong Two. Thus, in Step 2B, examiners should:

• Carry over their identification of the additional element(s) in the claim from Step 2A Prong Two;
• Carry over their conclusions from Step 2A Prong Two on the considerations discussed in MPEP §§ 2106.05(a)-(c), (e)-(f) and (h);
• Re-evaluate any additional element or combination of elements that was considered to be insignificant extra-solution activity per MPEP § 2106.05(g), because if such re-evaluation finds that the element is unconventional or otherwise more than what is well-understood, routine, conventional activity in the field, this finding may indicate that the additional element is no longer considered to be insignificant; and
• Evaluate whether any additional element or combination of elements is other than what is well-understood, routine, conventional activity in the field, or simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP § 2106.05(d).
In conclusion, the limitations/elements reciting generic computer components as mere instructions to apply on a computer per MPEP 2106.05(f) are carried over and do not amount to significantly more than the judicial exception. Looking at the limitations in combination and the claim as a whole does not change this conclusion, and the claims are ineligible. Therefore, claims 1-3, 6-8, 10-11, 14-17 and 20 are not patent eligible.

Claim Rejections - 35 USC § 103

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. § 102 and § 103 (or as subject to pre-AIA 35 U.S.C. § 102 and § 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section § 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. § 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

8. Claims 1-3, 6-8, 10-11, 14-17 and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Agarwal et al. (US 20220144303 A1) in view of Nichols (US 20210208591 A1), and further in view of Moustafa et al. (US 20220126878 A1).

As per claim 1, Agarwal teaches a system for causal graph chain reasoning predictions (i.e. “a system for driver behavior risk assessment and pedestrian awareness … a situation predictor generating a prediction of a situation based on the scene representation and the intention of the ego vehicle, and a driver response determiner generating an influenced or non-influenced action determination based on the prediction of the situation and the scene representation.”; para. [0003], [0054]), comprising: an ego-vehicle (i.e. “an ego vehicle approaches an intersection, an ego intention may be fixed, what obstacles are in the path, and what influences from traffic agents are defined.”; para. [0029]) including a vehicle system (i.e. “A “vehicle system”, as used herein, may be any automatic or manual systems that may be used to enhance the vehicle, and driving.”; para. [0026]); a memory storing one or more instructions (i.e. “The memory may store an operating system that controls or allocates resources of a computing device.”; figs. 6-7, para. [0018]-[0020]); and a processor executing one or more of the instructions stored on the memory to perform (i.e. “computer-readable device including processor-executable instructions configured to embody one or more of the provisions set forth herein, according to one aspect.”; figs. 6-7, para. [0014], [0017]): generating a prediction for each participant within the operating environment (i.e.
“generating 508 a prediction of a situation (e.g., a stop sign, a traffic light, a crossing pedestrian, a crossing vehicle, a vehicle blocking ego lane, a congestion, a jaywalking, a vehicle backing into parking space, a vehicle on shoulder open door, or a cut-in) based on the scene representation and the intention of the ego vehicle”; fig. 5, para. [0062]-[0063]; Examiner note: the generating a prediction for each participant within the operating environment is interpreted as the generating the prediction of the situation; where each situation is a participant of the scene representation); generating an action for the ego-vehicle based on the prediction for each participant within the operating environment (i.e. “generating 510 an influenced or non-influenced action determination based on the prediction of the situation and the scene representation.”; fig. 5, para. [0062]-[0063]; Examiner note: the generating an action for the ego-vehicle based on the prediction for each participant within the operating environment is interpreted as the generating the influenced or non-influenced action); and implementing, via the vehicle system and the ego-vehicle, the action generated by the processor (i.e. “The driver response determiner 160 may generate an influenced or non-influenced action determination based on the prediction of the situation and the scene representation and a risk score for one or more of the objects, traffic participants, or environment features based on the influenced or non-influenced action determination, the prediction of the situation, and/or the scene representation.”; fig. 5, para. [0049]-[0050], [0062]-[0063]. Further, i.e. “the claimed subject matter is implemented as a method, apparatus, or article of manufacture using standard programming or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.”; fig. 7, para. [0066]-[0067]).
Agarwal discloses an ego-vehicle, one or more agents and one or more potential obstacles (see Agarwal fig. 4 and para. [0006], [0025], [0029]) and a topological arrangement (see Agarwal fig. 5, para. [0062]-[0063]). However, it is noted that the prior art of Agarwal does not explicitly teach “generating an acyclic causal graph comprising a plurality of nodes and edges connecting respective two nodes of the plurality of nodes, wherein each node indicates one of participants within an operating environment including the ego-vehicle, one or more agents, and one or more potential obstacles, and each edge is indicative of causal relationships between two or more of the ego-vehicle and one or more of the agents within the operating environment, wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment; based on a topological sort of the plurality of nodes and edges of the acyclic causal graph;” On the other hand, in the same field of endeavor, Nichols teaches generating an acyclic causal graph comprising a plurality of nodes and edges connecting respective two nodes of the plurality of nodes (i.e. “The system will order the nodes by depth in a sequence, and it will build a graph-based program specification that includes the nodes in the sequence, along with the connections. The graph-based program specification may correspond to a directed acyclic graph (DAG).”; fig. 1, para. [0004]. Further, “a DAG is a graph 100 with multiple nodes 101 (which also may be called vertices) and connections 102 between pairs of nodes (which connections also may be called edges or arcs).”; fig. 1, para. [0016], [0021]), wherein each node indicates one of participants within an operating environment including the ego-vehicle (i.e. “each node may represent a property of an object that one or more sensors of the autonomous vehicle may encounter in an environment.”; fig. 1, para.
[0006], [0041], [0051]; Examiner notes: the one of the participants is interpreted as the object; the ego-vehicle is disclosed by Agarwal para. [0029], see above), one or more agents (i.e. “Example object properties that the nodes may represent may include, for an object that is a vehicle, a first node indicating whether the vehicle is parked.”; figs. 1-2, para. [0007], [0042]; Examiner note: the one or more agents is interpreted as the vehicle), and one or more potential obstacles (i.e. “for an actor that is a pedestrian”; figs. 1-2, para. [0008], [0041], [0043]; Examiner notes: The one or more potential obstacles is interpreted as the pedestrian), and each edge is indicative of causal relationships between two or more of the ego-vehicle and one or more of the agents within the operating environment (i.e. “AV prediction systems may construct and use a DAG to determine the properties of an object, in proper order, at runtime when the AV's sensors detect the object in the environment. A simple example is shown in FIG. 2, which shows various object properties as nodes, and which also shows relationships between properties as connections.”; figs. 1 & 2, para. [0016]-[0017], [0037], [0043]; Examiner note: the each edge is indicative of causal relationships is interpreted as the relationships between properties as connections. The two or more of the ego-vehicle is interpreted as the AV's sensors. As illustrated in Figure 1, the nodes in the graph follow a relationship of lead and follower); based on a topological sort of the plurality of nodes and edges of the acyclic causal graph (i.e. “Each connection 102 is directed from one node 101 to another, so that a path that follows a sequence of connections away from any node will never lead back to that node. Thus, a directed graph exhibits any number of topological orderings that an AV may use to determine various properties of a detected object.”; figs. 1-3, para.
[0015]-[0017], [0021]); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Nichols that teaches nodes represent object properties such as properties of objects that an autonomous vehicle (AV) detects while moving about an environment into Agarwal that teaches a system for driver behavior risk assessment and pedestrian awareness. Additionally, this can generate an influenced or non-influenced action determination based on the prediction of the situation and the scene representation. The motivation for doing so would be to use an acyclic graph (DAG) to represent how an autonomous vehicle (AV) moves in an environment, because it can facilitate predicting the trajectory or other actions that other detected vehicles or actors may take (Nichols, Abstract, para. [0001]). However, it is noted that the combination of the prior arts of Agarwal and Nichols does not explicitly teach “wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment;” On the other hand, in the same field of endeavor, Moustafa teaches wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment (i.e. “Vehicle Requested Take-over: When the vehicle requests the driver to takeover and pass from autonomous mode to human-driven mode. This may happen, in some cases, when the autonomous vehicle faces a new situation for its perception system, such as when there is some uncertainty of the best decision, or when the vehicle is coming out of a geo-fenced region. The general approach for requesting human takeover is through warning the driver through one or more ways (e.g., messages popping-up in the dash board, beeps, or vibrations in steering wheel).”; para. [0165]. Further, i.e.
“displays of the autonomous vehicle may present warnings or instructions to in-vehicle passengers regarding an upcoming, predicted issue and the possibility of a pull-over and/or remote valet handover.”; para. [0146], [0090]; Examiner note: the warning including the explanation of the causal chain associated with two or more participants within the operating environment is interpreted as the warnings or instructions to in-vehicle passengers regarding an upcoming, predicted issue and the possibility of a pull-over and/or remote valet handover; where warnings or instructions is known to have narratives/explanations); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Moustafa that teaches autonomous vehicles into the combination of the prior arts of Agarwal that teaches a system for driver behavior risk assessment and pedestrian awareness, and Nichols that teaches nodes represent object properties such as properties of objects that an autonomous vehicle (AV) detects while moving about an environment. Additionally, this can generate an influenced or non-influenced action determination based on the prediction of the situation and the scene representation. The motivation for doing so would be to integrate infrastructure-based devices separate from the vehicle’s own sensors and computing systems because it can support and improve autonomous driving performance (Moustafa, para. [0044], [0063], [0067]). As per claim 2, Agarwal, Nichols and Moustafa teach all the limitations as discussed in claim 1 above. Additionally, Agarwal teaches wherein one or more of the agents is another vehicle, a bicycle, or a motorcycle (i.e. “the system 100 or 200 may limit the objects of interest to the following classes: person, bicycle, car, motorcycle, bus, truck, traffic light, and stop sign.”; para. 
[0046], [0054]; Examiner note: the agents are interpreted as the objects of interest herein, see fig. 4 where a car is illustrated as a traffic agent. Further, i.e. “The term “vehicle” includes cars, trucks, vans, minivans, SUVs, motorcycles, scooters, boats, personal watercraft, and aircraft.”; fig. 4, para. [0025]-[0026]). As per claim 3, Agarwal, Nichols and Moustafa teach all the limitations as discussed in claim 1 above. Additionally, Agarwal teaches wherein one or more of the potential obstacles is another vehicle, a bicycle, a motorcycle, a parked vehicle, a traffic sign, a pedestrian, an intersection, or a road feature (i.e. “Examples of different types of situations may include a stop sign, a traffic light, a crossing pedestrian, a crossing vehicle, a vehicle blocking ego lane, a congestion, a jaywalking, a vehicle backing into parking space, a vehicle on shoulder open door, or a cut-in, etc.”; fig. 4, para. [0028], [0037]-[0039], [0055]; Examiner note: the potential obstacles herein are interpreted as the situations). As per claim 6, Agarwal, Nichols and Moustafa teach all the limitations as discussed in claim 1 above. Additionally, Agarwal teaches wherein the causal relationship is a leader-follower relationship, a trajectory-dependency relationship, or a collision relationship (i.e. “a framework for risk object identification may be provided which models the relationship between driver intention (e.g., where does the driver wish to go?), situation (e.g., reasoning, surroundings, position of traffic participants, directions traffic participants are moving, interaction between ego-vehicle and traffic participants, influence based on other traffic participants, etc.), and the driver response (e.g., continue, stop, slow, turn, etc.).”; fig. 3, para. [0032], [0051]; Examiner note: the trajectory-dependency relationship is interpreted as the relationship between driver intention (e.g., where does the driver wish to go?)). 
As per claim 7, Agarwal, Nichols and Moustafa teach all the limitations as discussed in claim 1 above. Additionally, Agarwal teaches wherein the prediction for each participant is an intention prediction or a trajectory prediction (i.e. “The model updater 156 may calculate an updated probability of a successful interaction between the identified traffic participant and the autonomous vehicle based on the intention prediction associated with the identified traffic participant and the intention prediction associated with the autonomous vehicle.”; fig. 1, para. [0047]-[0048], [0052], [0054]; Examiner note: the intention prediction is interpreted as the intention prediction associated with the autonomous vehicle). As per claim 8, Agarwal, Nichols and Moustafa teach all the limitations as discussed in claim 1 above. Additionally, Agarwal teaches wherein the generating the prediction for each participant within the operating environment is based on a topological sort of the causal graph (i.e. “The method for driver behavior risk assessment and pedestrian awareness may include receiving 502 an input stream of images of an environment (e.g., a straight topology, a three-way intersection topology, or a four-way intersection topology) including one or more objects within the environment …”; fig. 5, para. [0028], [0062]; Examiner note: the topological sort of the causal graph is interpreted as the input stream of images of the environment). As per claim 10, Agarwal, Nichols and Moustafa teach all the limitations as discussed in claim 1 above. Additionally, Agarwal teaches wherein the action includes a driving maneuver (i.e. “the system 100 of FIG. 1 or the system 200 of FIG. 2 may model the causal relationship between the driver intention, situation, and decision of driver maneuver.”; para. [0026], [0028], [0047], [0051]; Examiner note: the driving maneuver is interpreted as the decision of driver maneuver). As per claim 11, Agarwal teaches a computer-implemented method (i.e. 
“Non-transitory computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method”; para. [0027]) for causal graph chain reasoning predictions (i.e. “a system for driver behavior risk assessment and pedestrian awareness … a situation predictor generating a prediction of a situation based on the scene representation and the intention of the ego vehicle, and a driver response determiner generating an influenced or non-influenced action determination based on the prediction of the situation and the scene representation.”; para. [0003], [0054]), comprising: generating a prediction for each participant within the operating environment (i.e. “generating 508 a prediction of a situation (e.g., a stop sign, a traffic light, a crossing pedestrian, a crossing vehicle, a vehicle blocking ego lane, a congestion, a jaywalking, a vehicle backing into parking space, a vehicle on shoulder open door, or a cut-in) based on the scene representation and the intention of the ego vehicle”; fig. 5, para. [0062]-[0063]; Examiner note: the generating a prediction for each participant within the operating environment is interpreted as the generating the prediction of the situation; where each situation is a participant of the scene representation); generating an action for the ego-vehicle based on the prediction for each participant within the operating environment (i.e. “generating 510 an influenced or non-influenced action determination based on the prediction of the situation and the scene representation.”; fig. 5, para. [0062]-[0063]; Examiner note: the generating an action for the ego-vehicle based on the prediction for each participant within the operating environment is interpreted as the generating the influenced or non-influenced action); and implementing, via the vehicle system and the ego-vehicle, the action generated (i.e. 
“The driver response determiner 160 may generate an influenced or non-influenced action determination based on the prediction of the situation and the scene representation and a risk score for one or more of the objects, traffic participants, or environment features based on the influenced or non-influenced action determination, the prediction of the situation, and/or the scene representation.”; fig. 5, para. [0049]-[0050], [0062]-[0063]. Further, i.e. “the claimed subject matter is implemented as a method, apparatus, or article of manufacture using standard programming or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.”; fig. 7, para. [0066]-[0067]). Agarwal discloses an ego-vehicle, one or more agents and one or more potential obstacles, see Agarwal fig. 4 and para. [0006], [0025], [0029], and a topological arrangement, see Agarwal fig. 5, para. [0062]-[0063]. However, it is noted that the prior art of Agarwal does not explicitly teach “generating an acyclic causal graph comprising a plurality of nodes and edges connecting respective two nodes of the plurality of nodes, wherein each node indicates one of participants within an operating environment including an ego-vehicle, one or more agents, and one or more potential obstacles, and each edge is indicative of causal relationships between two or more of the ego-vehicle and one or more of the agents within the operating environment, wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment; based on a topological sort of the plurality of nodes and edges of the acyclic causal graph;” On the other hand, in the same field of endeavor, Nichols teaches generating an acyclic causal graph comprising a plurality of nodes and edges connecting respective two nodes of the plurality of nodes (i.e. 
“The system will order the nodes by depth in a sequence, and it will build a graph-based program specification that includes the nodes in the sequence, along with the connections. The graph-based program specification may correspond to a directed acyclic graph (DAG).”; fig.1, para. [0004]. Further, “a DAG is a graph 100 with multiple nodes 101 (which also may be called vertices) and connections 102 between pairs of nodes (which connections also may be called edges or arcs).”; fig. 1, para. [0016], [0021]), wherein each node indicates one of participants within an operating environment including the ego-vehicle (i.e. “each node may represent a property of an object that one or more sensors of the autonomous vehicle may encounter in an environment.”; fig. 1, para. [0006], [0041], [0051]; Examiner notes: the one of the participants is interpreted as the object; the ego-vehicle is disclosed by Agarwal para. [0029], see above), one or more agents (i.e. “Example object properties that the nodes may represent may include, for an object that is a vehicle, a first node indicating whether the vehicle is parked.”; figs. 1-2, para. [0007], [0042]; Examiner note: the one or more agents is interpreted as the vehicle), and one or more potential obstacles (i.e. “for an actor that is a pedestrian”; figs. 1-2, para. [0008], [0041], [0043]; Examiner notes: The one or more potential obstacles is interpreted as the pedestrian), and each edge is indicative of causal relationships between two or more of the ego-vehicle and one or more of the agents within the operating environment (i.e. “AV prediction systems may construct and use a DAG to determine the properties of an object, in proper order, at runtime when the AV's sensors detect the object in the environment. A simple example is shown in FIG. 2, which shows various object properties as nodes, and which also shows relationships between properties as connections.”; fig.1& 2, para. 
[0016]-[0017], [0037], [0043]; Examiner note: the each edge is indicative of causal relationships is interpreted as the relationships between properties as connections. The two or more of the ego-vehicle is interpreted as the AV's sensors. As illustrated in Figure 1, the nodes in the graph follow a relationship of lead and follower); based on a topological sort of the plurality of nodes and edges of the acyclic causal graph (i.e. “Each connection 102 is directed from one node 101 to another, so that a path that follows a sequence of connections away from any node will never lead back to that node. Thus, a directed graph exhibits any number of topological orderings that an AV may use to determine various properties of a detected object.”; figs. 1-3, para. [0015]-[0017], [0021]); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Nichols that teaches nodes represent object properties such as properties of objects that an autonomous vehicle (AV) detects while moving about an environment into Agarwal that teaches a system for driver behavior risk assessment and pedestrian awareness. Additionally, this can generate an influenced or non-influenced action determination based on the prediction of the situation and the scene representation. The motivation for doing so would be to use an acyclic graph (DAG) to represent how an autonomous vehicle (AV) moves in an environment, because it can facilitate predicting the trajectory or other actions that other detected vehicles or actors may take (Nichols, Abstract, para. [0001]). 
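For context on the claim language mapped above, the “topological sort of the plurality of nodes and edges of the acyclic causal graph” is the standard DAG ordering in which every cause is visited before its effects. A minimal sketch using Kahn's algorithm (the node names and edge set below are hypothetical, invented only to mirror the ego-vehicle/agent/obstacle participants discussed in the mapping; they are not taken from Agarwal or Nichols):

```python
from collections import deque

# Hypothetical causal graph: each directed edge points from a cause to the
# participant it influences. Names are illustrative only.
edges = {
    "pedestrian": ["agent_vehicle"],   # crossing pedestrian influences the agent
    "stop_sign": ["ego_vehicle"],      # obstacle influences the ego-vehicle
    "agent_vehicle": ["ego_vehicle"],  # agent's predicted action influences the ego
    "ego_vehicle": [],                 # the ego-vehicle is predicted last
}

def topological_sort(graph):
    """Kahn's algorithm: repeatedly emit nodes whose causes are all resolved."""
    indegree = {node: 0 for node in graph}
    for node in graph:
        for successor in graph[node]:
            indegree[successor] += 1
    queue = deque(node for node, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for successor in graph[node]:
            indegree[successor] -= 1
            if indegree[successor] == 0:
                queue.append(successor)
    if len(order) != len(graph):
        raise ValueError("graph contains a cycle, so it is not a DAG")
    return order

print(topological_sort(edges))
# ['pedestrian', 'stop_sign', 'agent_vehicle', 'ego_vehicle']
```

Generating a prediction for each participant in this order guarantees that, by the time any node is evaluated, predictions for all of its causal parents are already available; the cycle check is what makes the "acyclic" requirement of the claimed graph operative.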
However, it is noted that the combination of the prior art of Agarwal and Nichols does not explicitly teach “wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment;” On the other hand, in the same field of endeavor, Moustafa teaches wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment (i.e. “Vehicle Requested Take-over: When the vehicle requests the driver to takeover and pass from autonomous mode to human-driven mode. This may happen, in some cases, when the autonomous vehicle faces a new situation for its perception system, such as when there is some uncertainty of the best decision, or when the vehicle is coming out of a geo-fenced region. The general approach for requesting human takeover is through warning the driver through one or more ways (e.g., messages popping-up in the dash board, beeps, or vibrations in steering wheel).”; para. [0165]. Further, i.e. “displays of the autonomous vehicle may present warnings or instructions to in-vehicle passengers regarding an upcoming, predicted issue and the possibility of a pull-over and/or remote valet handover.”; para. 
[0146], [0090]; Examiner note: the warning including the explanation of the causal chain associated with two or more participants within the operating environment is interpreted as the warnings or instructions to in-vehicle passengers regarding an upcoming, predicted issue and the possibility of a pull-over and/or remote valet handover; where warnings or instructions are known to include narratives/explanations); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Moustafa that teaches autonomous vehicles into the combination of the prior art of Agarwal that teaches a system for driver behavior risk assessment and pedestrian awareness, and Nichols that teaches nodes represent object properties such as properties of objects that an autonomous vehicle (AV) detects while moving about an environment. Additionally, this can generate an influenced or non-influenced action determination based on the prediction of the situation and the scene representation. The motivation for doing so would be to integrate infrastructure-based devices separate from the vehicle’s own sensors and computing systems because it can support and improve autonomous driving performance (Moustafa, para. [0044], [0063], [0067]). As per claim 14, Agarwal, Nichols and Moustafa teach all the limitations as discussed in claim 11 above. Additionally, Agarwal teaches wherein the causal relationship is a leader-follower relationship, a trajectory-dependency relationship, or a collision relationship (i.e. 
“a framework for risk object identification may be provided which models the relationship between driver intention (e.g., where does the driver wish to go?), situation (e.g., reasoning, surroundings, position of traffic participants, directions traffic participants are moving, interaction between ego-vehicle and traffic participants, influence based on other traffic participants, etc.), and the driver response (e.g., continue, stop, slow, turn, etc.).”; fig. 3, para. [0032], [0051]; Examiner note: the trajectory-dependency relationship is interpreted as the relationship between driver intention (e.g., where does the driver wish to go?)). As per claim 15, Agarwal, Nichols and Moustafa teach all the limitations as discussed in claim 11 above. Additionally, Agarwal teaches wherein the prediction for each participant is an intention prediction or a trajectory prediction (i.e. “The model updater 156 may calculate an updated probability of a successful interaction between the identified traffic participant and the autonomous vehicle based on the intention prediction associated with the identified traffic participant and the intention prediction associated with the autonomous vehicle.”; fig. 1, para. [0047]-[0048], [0052], [0054]; Examiner note: the intention prediction is interpreted as the intention prediction associated with the autonomous vehicle). As per claim 16, Agarwal, Nichols and Moustafa teach all the limitations as discussed in claim 11 above. Additionally, Agarwal teaches wherein the generating the prediction for each participant within the operating environment is based on a topological sort of the causal graph (i.e. “The method for driver behavior risk assessment and pedestrian awareness may include receiving 502 an input stream of images of an environment (e.g., a straight topology, a three-way intersection topology, or a four-way intersection topology) including one or more objects within the environment …”; fig. 5, para. 
[0028], [0062]; Examiner note: the topological sort of the causal graph is interpreted as the input stream of images of the environment). As per claim 17, Agarwal teaches a system for causal graph chain reasoning predictions (i.e. “a system for driver behavior risk assessment and pedestrian awareness … a situation predictor generating a prediction of a situation based on the scene representation and the intention of the ego vehicle, and a driver response determiner generating an influenced or non-influenced action determination based on the prediction of the situation and the scene representation.”; para. [0003], [0054]), comprising: an ego-vehicle (i.e. “an ego vehicle approaches an intersection, an ego intention may be fixed, what obstacles are in the path, and what influences from traffic agents are defined.”; para. [0029]) including a vehicle system (i.e. “A “vehicle system”, as used herein, may be any automatic or manual systems that may be used to enhance the vehicle, and driving.”; para. [0026]); a memory storing one or more instructions (i.e. “The memory may store an operating system that controls or allocates resources of a computing device.”; figs. 6-7, para. [0018]-[0020]); and a processor executing one or more of the instructions stored on the memory to perform (i.e. “computer-readable device including processor-executable instructions configured to embody one or more of the provisions set forth herein, according to one aspect.”; figs. 6-7, para. [0014], [0017]): generating a prediction for each participant within the operating environment (i.e. “generating 508 a prediction of a situation (e.g., a stop sign, a traffic light, a crossing pedestrian, a crossing vehicle, a vehicle blocking ego lane, a congestion, a jaywalking, a vehicle backing into parking space, a vehicle on shoulder open door, or a cut-in) based on the scene representation and the intention of the ego vehicle”; fig. 5, para. 
[0062]-[0063]; Examiner note: the generating a prediction for each participant within the operating environment is interpreted as the generating the prediction of the situation; where each situation is a participant of the scene representation); generating an action for the ego-vehicle based on the prediction for each participant within the operating environment (i.e. “generating 510 an influenced or non-influenced action determination based on the prediction of the situation and the scene representation.”; fig. 5, para. [0062]-[0063]; Examiner note: the generating an action for the ego-vehicle based on the prediction for each participant within the operating environment is interpreted as the generating the influenced or non-influenced action); and implementing, via the vehicle system and the ego-vehicle, the action generated by the processor (i.e. “The driver response determiner 160 may generate an influenced or non-influenced action determination based on the prediction of the situation and the scene representation and a risk score for one or more of the objects, traffic participants, or environment features based on the influenced or non-influenced action determination, the prediction of the situation, and/or the scene representation.”; fig. 5, para. [0049]-[0050], [0062]-[0063]. Further, i.e. “the claimed subject matter is implemented as a method, apparatus, or article of manufacture using standard programming or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.”; fig. 7, para. [0066]-[0067]). Agarwal discloses an ego-vehicle, one or more agents and one or more potential obstacles, see Agarwal fig. 4 and para. [0006], [0025], [0029], and a topological arrangement, see Agarwal fig. 5, para. [0062]-[0063]. 
However, it is noted that the prior art of Agarwal does not explicitly teach “generating an acyclic causal graph comprising a plurality of nodes and edges connecting respective two nodes of the plurality of nodes, wherein each node indicates one of participants within an operating environment including the ego-vehicle, one or more agents, and one or more potential obstacles, and each edge is indicative of causal relationships between two or more of the ego-vehicle and one or more of the agents within the operating environment, wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment; based on a topological sort of the plurality of nodes and edges of the acyclic causal graph;” On the other hand, in the same field of endeavor, Nichols teaches generating an acyclic causal graph comprising a plurality of nodes and edges connecting respective two nodes of the plurality of nodes (i.e. “The system will order the nodes by depth in a sequence, and it will build a graph-based program specification that includes the nodes in the sequence, along with the connections. The graph-based program specification may correspond to a directed acyclic graph (DAG).”; fig.1, para. [0004]. Further, “a DAG is a graph 100 with multiple nodes 101 (which also may be called vertices) and connections 102 between pairs of nodes (which connections also may be called edges or arcs).”; fig. 1, para. [0016], [0021]), wherein each node indicates one of participants within an operating environment including the ego-vehicle (i.e. “each node may represent a property of an object that one or more sensors of the autonomous vehicle may encounter in an environment.”; fig. 1, para. [0006], [0041], [0051]; Examiner notes: the one of the participants is interpreted as the object; the ego-vehicle is disclosed by Agarwal para. [0029], see above), one or more agents (i.e. 
“Example object properties that the nodes may represent may include, for an object that is a vehicle, a first node indicating whether the vehicle is parked.”; figs. 1-2, para. [0007], [0042]; Examiner note: the one or more agents is interpreted as the vehicle), and one or more potential obstacles (i.e. “for an actor that is a pedestrian”; figs. 1-2, para. [0008], [0041], [0043]; Examiner notes: The one or more potential obstacles is interpreted as the pedestrian), and each edge is indicative of causal relationships between two or more of the ego-vehicle and one or more of the agents within the operating environment (i.e. “AV prediction systems may construct and use a DAG to determine the properties of an object, in proper order, at runtime when the AV's sensors detect the object in the environment. A simple example is shown in FIG. 2, which shows various object properties as nodes, and which also shows relationships between properties as connections.”; fig.1& 2, para. [0016]-[0017], [0037], [0043]; Examiner note: the each edge is indicative of causal relationships is interpreted as the relationships between properties as connections. The two or more of the ego-vehicle is interpreted as the AV's sensors. As illustrated in Figure 1, the nodes in the graph follow a relationship of lead and follower); based on a topological sort of the plurality of nodes and edges of the acyclic causal graph (i.e. “Each connection 102 is directed from one node 101 to another, so that a path that follows a sequence of connections away from any node will never lead back to that node. Thus, a directed graph exhibits any number of topological orderings that an AV may use to determine various properties of a detected object.”; figs. 1-3, para. 
[0015]-[0017], [0021]); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Nichols that teaches nodes represent object properties such as properties of objects that an autonomous vehicle (AV) detects while moving about an environment into Agarwal that teaches a system for driver behavior risk assessment and pedestrian awareness. Additionally, this can generate an influenced or non-influenced action determination based on the prediction of the situation and the scene representation. The motivation for doing so would be to use an acyclic graph (DAG) to represent how an autonomous vehicle (AV) moves in an environment, because it can facilitate predicting the trajectory or other actions that other detected vehicles or actors may take (Nichols, Abstract, para. [0001]). However, it is noted that the combination of the prior art of Agarwal and Nichols does not explicitly teach “wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment;” On the other hand, in the same field of endeavor, Moustafa teaches wherein the action includes a warning including an explanation of a causal chain associated with two or more participants within the operating environment (i.e. “Vehicle Requested Take-over: When the vehicle requests the driver to takeover and pass from autonomous mode to human-driven mode. This may happen, in some cases, when the autonomous vehicle faces a new situation for its perception system, such as when there is some uncertainty of the best decision, or when the vehicle is coming out of a geo-fenced region. The general approach for requesting human takeover is through warning the driver through one or more ways (e.g., messages popping-up in the dash board, beeps, or vibrations in steering wheel).”; para. [0165]. Further, i.e. 
“displays of the autonomous vehicle may present warnings or instructions to in-vehicle passengers regarding an upcoming, predicted issue and the possibility of a pull-over and/or remote valet handover.”; para. [0146], [0090]; Examiner note: the warning including the explanation of the causal chain associated with two or more participants within the operating environment is interpreted as the warnings or instructions to in-vehicle passengers regarding an upcoming, predicted issue and the possibility of a pull-over and/or remote valet handover; where warnings or instructions are known to include narratives/explanations); Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Moustafa that teaches autonomous vehicles into the combination of the prior art of Agarwal that teaches a system for driver behavior risk assessment and pedestrian awareness, and Nichols that teaches nodes represent object properties such as properties of objects that an autonomous vehicle (AV) detects while moving about an environment. Additionally, this can generate an influenced or non-influenced action determination based on the prediction of the situation and the scene representation. The motivation for doing so would be to integrate infrastructure-based devices separate from the vehicle’s own sensors and computing systems because it can support and improve autonomous driving performance (Moustafa, para. [0044], [0063], [0067]). As per claim 20, Agarwal, Nichols and Moustafa teach all the limitations as discussed in claim 17 above. Additionally, Agarwal teaches wherein the causal relationship is a leader-follower relationship, a trajectory-dependency relationship, or a collision relationship (i.e. 
“a framework for risk object identification may be provided which models the relationship between driver intention (e.g., where does the driver wish to go?), situation (e.g., reasoning, surroundings, position of traffic participants, directions traffic participants are moving, interaction between ego-vehicle and traffic participants, influence based on other traffic participants, etc.), and the driver response (e.g., continue, stop, slow, turn, etc.).”; fig. 3, para. [0032], [0051]; Examiner note: the trajectory-dependency relationship is interpreted as the relationship between driver intention (e.g., where does the driver wish to go?)). Prior Art of Record 9. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ivascu et al. (US 20240331175 A1), teaches determining vehicle positioning, and in particular relate to determining vehicle following distance. Gupta et al. (US 20240326873 A1), teaches identified and utilized by various computing devices, such as a smartphone or a computer located on and/or off the transport. Merchant et al. (US 20240296700 A1), teaches monitoring, regulating, and automating payments for curbside parking as well as providing parking and traffic data to drivers and third parties. Malla et al. (US 20210129871 A1), teaches providing social-stage spatio-temporal multi-modal future forecasting that include receiving environment data associated with a surrounding environment of an ego vehicle and implementing graph convolutions to obtain attention weights that are respectively associated with agents that are located within the surrounding environment. Horii et al. (US 20190299991 A1), teaches automatically by automated driving. Conclusion 10. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTONIO CAIADO whose telephone number is (469)295-9251. The examiner can normally be reached on Monday - Friday / 06:30 to 16:30. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng, can be reached on (571) 270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANTONIO J CAIADO/ Examiner, Art Unit 2164

Prosecution Timeline

May 24, 2023
Application Filed
Jul 08, 2024
Non-Final Rejection — §101, §103, §112
Aug 29, 2024
Response Filed
Oct 25, 2024
Final Rejection — §101, §103, §112
Jan 06, 2025
Response after Non-Final Action
Feb 04, 2025
Request for Continued Examination
Feb 09, 2025
Response after Non-Final Action
Jul 23, 2025
Non-Final Rejection — §101, §103, §112
Oct 08, 2025
Response Filed
Oct 19, 2025
Final Rejection — §101, §103, §112
Dec 23, 2025
Response after Non-Final Action
Jan 22, 2026
Request for Continued Examination
Jan 29, 2026
Response after Non-Final Action
Feb 09, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597055
IDENTIFYING ITEMS OFFERED BY AN ONLINE CONCIERGE SYSTEM FOR A RECEIVED QUERY BASED ON A GRAPH IDENTIFYING RELATIONSHIPS BETWEEN ITEMS AND ATTRIBUTES OF THE ITEMS
2y 5m to grant Granted Apr 07, 2026
Patent 12579121
MANAGEMENT OF A SECONDARY VERTEX INDEX FOR A GRAPH
2y 5m to grant Granted Mar 17, 2026
Patent 12579129
System and Method for Processing Hierarchical Data
2y 5m to grant Granted Mar 17, 2026
Patent 12579125
SYSTEMS AND METHODS FOR ADMISSION CONTROL INPUT/OUTPUT
2y 5m to grant Granted Mar 17, 2026
Patent 12578842
STRUCTURED SUGGESTIONS
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
69%
Grant Probability
99%
With Interview (+49.9%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 188 resolved cases by this examiner. Grant probability derived from career allow rate.
