Prosecution Insights
Last updated: April 19, 2026
Application No. 18/829,561

METHOD FOR GENERATING A KNOWLEDGE GRAPH FOR TRAFFIC MOTION PREDICTION, METHOD FOR TRAFFIC MOTION PREDICTIONS AND METHOD FOR CONTROLLING AN EGO-VEHICLE

Non-Final OA (§101, §103)
Filed
Sep 10, 2024
Examiner
JIN, SELENA MENG
Art Unit
3667
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Robert Bosch GmbH
OA Round
1 (Non-Final)
Grant Probability: 39% (At Risk)
OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 72%

Examiner Intelligence

Career Allow Rate: 39% (45 granted / 116 resolved; -13.2% vs TC avg)
Interview Lift: +32.8% (strong; resolved cases with interview vs. without)
Avg Prosecution: 3y 7m (typical timeline); 36 applications currently pending
Total Applications: 152 (career history, across all art units)

Statute-Specific Performance

§101: 28.3% (-11.7% vs TC avg)
§103: 59.9% (+19.9% vs TC avg)
§102: 4.6% (-35.4% vs TC avg)
§112: 6.4% (-33.6% vs TC avg)
Based on career data from 116 resolved cases; Tech Center average values are estimates.
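For readers checking the dashboard arithmetic, the headline career allow rate follows directly from the raw counts shown above. The sketch below assumes it is a simple granted/resolved ratio (an assumption; the analytics provider's exact definition of "resolved" may differ). The +32.8% interview lift appears to be a percentage-point gap against a without-interview baseline that the counts shown here do not fully determine, so it is not recomputed.

```python
# Reproduce the headline career allow rate from the raw counts above.
# Variable names are illustrative, not taken from the analytics tool.
granted = 45
resolved = 116

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")   # 38.8%, displayed as 39%
```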

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on September 10th, 2024 and January 21st, 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

101 Analysis – Step 1

Independent claims 1, 14, and 15 are directed to a method, computing unit, and non-transitory computer-readable storage medium, respectively, for generating a knowledge graph. Therefore, claims 1, 14, and 15 are within at least one of the four statutory categories.

101 Analysis – Step 2A, Prong I

Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. The other analogous independent claims, claims 14 and 15, are rejected for the same reasons as the representative claim 1 as discussed here.
Claim 1 recites: A computer-implemented method for generating a knowledge graph for traffic motion prediction, comprising the following steps: receiving environment sensor data of at least one environment sensor of an ego-vehicle, wherein the environment sensor data represent an environment of the ego-vehicle and include information regarding at least one traffic participant located in the environment of the ego-vehicle; receiving map data from an electronic road map, wherein the map data represent a road network in the environment of the ego-vehicle and include information regarding at least one motion track the traffic participant is positioned on; extracting the information regarding the at least one traffic participant from the environment sensor data, and extracting the information regarding the motion track the traffic participant is positioned on from the map data; and generating a knowledge graph of the road network in the environment of the ego-vehicle including nodes and edges based on the map data and/or the environment sensor data, wherein the knowledge graph includes at least one node representing the traffic participant, and at least one node representing the motion track the traffic participant is positioned on.

The examiner submits that the foregoing bolded limitation(s) constitute a "mental process" because under its broadest reasonable interpretation, the claim covers performance of the limitation in the human mind. For example, the steps of extracting information and generating a knowledge graph, in the context of this claim, encompass a person looking at data collected (received, detected, etc.) and forming a simple judgment (determination, analysis, comparison, etc.) either mentally or using a pen and paper. Accordingly, the claim recites at least one abstract idea.
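Read as a data-structure recipe, representative claim 1 describes building a heterogeneous graph from two inputs: participants extracted from sensor data, motion tracks extracted from map data, and an edge linking each participant to the track it is positioned on. The sketch below is purely illustrative of that claimed structure; every class, field, and relation name is hypothetical and is not drawn from the application, Lukarski, or Al Faruque.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the graph claim 1 recites: nodes for traffic
# participants and for the motion tracks (e.g., lanes) they are positioned on.

@dataclass
class Node:
    node_id: str
    node_type: str                 # e.g., "traffic_participant" or "motion_track"
    attributes: dict = field(default_factory=dict)

@dataclass
class KnowledgeGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (source_id, relation, target_id)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, src: str, relation: str, dst: str) -> None:
        self.edges.append((src, relation, dst))

def generate_knowledge_graph(sensor_data: list, map_data: list) -> KnowledgeGraph:
    """Mirror the claimed steps: extract participants from sensor data,
    extract motion tracks from map data, and link participant to track."""
    kg = KnowledgeGraph()
    for obj in sensor_data:        # extraction from environment sensor data
        kg.add_node(Node(obj["id"], "traffic_participant", {"class": obj["class"]}))
    for track in map_data:         # extraction from electronic road map data
        kg.add_node(Node(track["id"], "motion_track", {"kind": track["kind"]}))
    for obj in sensor_data:        # participant -> the track it is positioned on
        kg.add_edge(obj["id"], "is_on", obj["track_id"])
    return kg

# Toy scene: a pedestrian on a crosswalk and a car on a lane.
kg = generate_knowledge_graph(
    sensor_data=[{"id": "P1", "class": "pedestrian", "track_id": "X1"},
                 {"id": "V1", "class": "car", "track_id": "L2"}],
    map_data=[{"id": "X1", "kind": "pedestrian_crossing"},
              {"id": "L2", "kind": "lane"}],
)
print(len(kg.nodes), len(kg.edges))   # 4 nodes, 2 edges
```

The sketch also makes the examiner's Prong I framing concrete: with two participants, the same node-and-edge bookkeeping could be done on paper, which is the premise of the mental-process characterization.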
The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas – the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("'[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work'" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).

101 Analysis – Step 2A, Prong II

Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception.
The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”): A computer-implemented method for generating a knowledge graph for traffic motion prediction, comprising the following steps: receiving environment sensor data of at least one environment sensor of an ego-vehicle, wherein the environment sensor data represent an environment of the ego-vehicle and include information regarding at least one traffic participant located in the environment of the ego-vehicle; receiving map data from an electronic road map, wherein the map data represent a road network in the environment of the ego-vehicle and include information regarding at least one motion track the traffic participant is positioned on; extracting the information regarding the at least one traffic participant from the environment sensor data, and extracting the information regarding the motion track the traffic participant is positioned on from the map data; and generating a knowledge graph of the road network in the environment of the ego-vehicle including nodes and edges based on the map data and/or the environment sensor data, wherein the knowledge graph includes at least one node representing the traffic participant, and at least one node representing the motion track the traffic participant is positioned on. For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application. 
Regarding the additional limitations above, the examiner submits that these limitations are insignificant extra-solution activities that merely use a computer (processor) to perform the process. In particular, the steps of receiving sensor data and receiving map data are recited at a high level of generality (i.e., as a general means of receiving information for use in the information extraction and graph generation steps), and amount to no more than mere data gathering necessary to perform the abstract idea, which is a form of insignificant extra-solution activity. Lastly, claims 1, 14, and 15 further recite a computing unit, a non-transitory computer-readable storage medium, and a data processor. These limitations merely describe how to generally "apply" the otherwise mental judgments in a generic or general-purpose vehicle control environment. See Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 223 (2014) ("[T]he mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention."). The device(s) and processor(s) are recited at a high level of generality and merely automate the steps. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application. Further, looking at the additional limitation(s) as an ordered combination or as a whole, the limitation(s) add nothing that is not already present when looking at the elements taken individually.
For instance, there is no indication that the additional elements, when considered as a whole, reflect an improvement in the functioning of a computer or an improvement to another technology or technical field, apply or use the above-noted judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, implement/use the above-noted judicial exception with a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing, or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is not more than a drafting effort designed to monopolize the exception (MPEP § 2106.05). Accordingly, the additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

101 Analysis – Step 2B

Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computing unit, storage medium, and a data processor to perform the steps amounts to nothing more than applying the exception using generic computer components. Generally applying an exception using a generic computer component cannot provide an inventive concept. And, as discussed above, the remaining additional limitations are insignificant extra-solution activities.
The additional limitations of receiving information are considered to be well-understood, routine, and conventional in the art because the specification does not provide any indication that the environment sensor is anything other than a conventional vehicle sensor, nor that the computing unit or processor are anything other than a conventional computer. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner.

Dependent claims 2-13 do not recite any further limitations that cause the claim(s) to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or additional elements that do not integrate the judicial exception into a practical application. Therefore, dependent claims 2-13 are not patent eligible under the same rationale as provided for in the rejection of claim 1. In order to expedite prosecution, the Examiner also notes that the mere recitation of "executing at least one control function of the ego-vehicle" in claim 13 is not significant enough to integrate the judicial exception into a practical application since the claim does not include a positive recitation of what the control function does. The term "control function" is broad, and encompasses embodiments where the control function is directed to additional aspects of the abstract idea, or directed to insignificant extra-solution activity. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed.
Cir. 1993). Therefore, claims 1-15 are ineligible under 35 USC §101. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over US 11436504 B1, filed June 5th, 2019, hereinafter “Lukarski”, in view of US 20230230484 A1, filed January 16th, 2023, hereinafter “Al Faruque”. Regarding claim 1, Lukarski teaches A computer-implemented method for generating a knowledge graph for traffic motion prediction. See at least figures 1 and 10. comprising the following steps: receiving environment sensor data of at least one environment sensor of an ego-vehicle, wherein the environment sensor data represent an environment of the ego-vehicle and include information regarding at least one traffic participant located in the environment of the ego-vehicle. See at least col. 4 line 35 – col. 5 line 7, col. 8 line 43 – col. 9 line 22, col. 21 lines 1 – 18, figure 1 (block 112), and figure 10 (step 1001), wherein sensors of a vehicle collect sensor data regarding an environment of the ego vehicle 110. The sensor data includes data on static and dynamic objects in the environment surrounding the vehicle. See at least col. 13 lines 9 – 33, wherein dynamic objects include traffic participants such as cars, buses, trucks, motorcycles, trains, bicycles, pedestrians, etc. 
receiving map data from an electronic road map, wherein the map data represent a road network in the environment of the ego-vehicle and include information regarding at least one motion track the traffic participant is positioned on. See at least col. 4 line 35 – col. 5 line 7, col. 7 lines 48 – 64, col. 10 line 50 – col. 11 line 16, col. 12 line 22 – col. 13 line 8, col. 21 lines 1 – 18, figure 1 (blocks 114 and 175), figure 2, figure 3 (map(s) 374), and figure 10 (step 1001), wherein map data is received via communication devices from external sources. The map data represents static objects in the environment of the ego-vehicle such as lane segments, and includes information on static objects related to dynamic objects (traffic participants) in the environment. extracting the information regarding the at least one traffic participant from the environment sensor data. See at least col. 8 line 43 – col. 9 line 22 and figure 1 (block 165), wherein analyzers extract object identification information from the environment sensor data. and extracting the information regarding the motion track the traffic participant is positioned on from the map data. See at least col. 12 line 22 – col. 13 line 8, col. 19 line 22 – col. 20 line 67, and figure 9, wherein information on static objects associated with detected dynamic objects is obtained from map data 952. and generating a graph of the road network in the environment of the ego-vehicle including nodes and edges based on the map data and/or the environment sensor data. See at least col. 4 line 35 – col. 5 line 7, col. 21 lines 19 – 35, figure 1 (block 115), and figure 10 (step 1004), wherein a unified scene graph is created, representing the environment of the vehicle and including nodes and edges based on the received map and sensor data. wherein the graph includes at least one node representing the traffic participant, and at least one node representing the motion track the traffic participant is positioned on. 
See at least col. 15 line 28 – 67 and figure 6, wherein the graph 610 includes nodes P12 (pedestrian) and OV11 (other vehicle) representing traffic participants, and additionally includes nodes LS2 and LD8, representing the lane segments that the pedestrian and other vehicle, respectively, are located on. Lukarski teaches all of the limitations of claim 1 as discussed above, but remains silent as to the specifics of generating a knowledge graph. As discussed above, Lukarski discloses generating a scene graph representing the environment of the vehicle. Al Faruque teaches generating a knowledge graph. See at least [0005], [0022] and figure 3, wherein the generated scene graph is a knowledge graph that encodes knowledge about a visual scene. One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Lukarski with Al Faruque’s knowledge graph. It would have been obvious to modify because doing so enables models to assess the risk of driving maneuvers with better performance, as recognized by Al Faruque (see at least [0022] and [0027]-[0029]). Regarding claim 2, Lukarski and Al Faruque in combination disclose all of the limitations of claim 1 as discussed above, and Lukarski additionally teaches wherein the motion track the traffic participant is located on is at least one out of the following list: road, lane, intersection, underpass, bridge, motorway, motorway access, motorway exit, roundabout, parking bay, parking lot, bicycle lane, tramway track, pedestrian crossing, sidewalk. See at least col. 12 line 22 – col. 13 line 8 and figure 3, wherein the static objects include lane segments 302, overpasses 308, bridges 309, parking lots/areas 310, railroad crossing gates 313, crosswalks 304, curbs 206, etc. Additionally, see at least col. 10 line 32 – col. 
11 line 16 and figure 2, wherein the map data contains additional information on lane segments including whether the segment is an exit ramp or an entrance ramp. Regarding claim 3, Lukarski and Al Faruque in combination disclose all of the limitations of claim 1 as discussed above, and Lukarski additionally teaches wherein the map data of the electronic road map include further information regarding further features of the motion track the traffic participant is located on, and wherein the at least one further feature of the motion track of the traffic participant is integrated into the knowledge graph via at least one further node. See at least col. 7 lines 48 – 64, col. 12 line 22 – col. 13 line 8 and figure 3, wherein the map data contains additional features regarding the roadways including traffic lights, signs, medians, buildings along the lanes, geographical landmarks, etc. These features are integrated into the scene graph as additional static object nodes. Regarding claim 4, Lukarski and Al Faruque in combination disclose all of the limitations of claim 1 as discussed above, and Lukarski additionally teaches wherein the environment sensor data further include further information regarding at least one further feature of the traffic participant, and wherein the at least one further feature of the traffic participant is integrated into the knowledge graph via at least one further node. See at least col. 12 line 22 – col. 13 line 33 and figure 3, wherein the sensor data is used by classifier 345 to obtain information regarding the type of objects detected. The dynamic objects 380 include sub-categories such as cars, buses, trucks, motorcycles, trains, bicycles, etc., and these are integrated into the scene graph as additional dynamic object nodes. 
Regarding claim 5, Lukarski and Al Faruque in combination disclose all of the limitations of claim 3 as discussed above, and Lukarski additionally teaches wherein the nodes and further node of the knowledge graph are organized in classes and sub-classes. See at least col. 12 line 22 – col. 13 line 33 and figure 3, wherein a hierarchy of sub-categories are defined for the categories. For example, the category of other vehicles 373 is organized into sub-categories including cars, buses, trucks, motorcycles, trains, bicycles, etc. Regarding claim 6, Lukarski and Al Faruque in combination disclose all of the limitations of claim 3 as discussed above, and Lukarski additionally teaches wherein the further features of the motion track are at least one of the following list including: road geometries, lane geometries, lane dividers, lane boundaries, lane connectors, intersections, stop areas, traffic signals, traffic signs, traffic regulations, road conditions, slope values, pedestrian crossings, car park areas, road segments, road blocks. See at least col. 7 lines 48 – 64, col. 12 line 22 – col. 13 line 8 and figure 3, wherein the map data contains additional features regarding the roadways including medians, traffic lights, signs, crosswalks, parking areas, buildings along the lanes, geographical landmarks, etc. Regarding claim 7, Lukarski and Al Faruque in combination disclose all of the limitations of claim 1 as discussed above, and Lukarski additionally teaches wherein the further features of the traffic participant are at least one of the following list including: i) static object, ii) moving object, iii) human, iv) animal, v) vehicle, vi) car, vii) truck, viii) tram, ix) motorcycle, x) bicycle, xi) barrier, xii) traffic cone, xiii) a relative position to at least one further traffic participant and/or to the ego-vehicle. See at least col. 12 line 22 – col. 
13 line 33 and figure 3, wherein the sensor data is used by classifier 345 to obtain information regarding the type of objects detected. The dynamic objects 380 include sub-categories such as cars, buses, trucks, motorcycles, trains, bicycles, pedestrians, animals, etc., and these are integrated into the scene graph as additional dynamic object nodes. Regarding claim 8, Lukarski and Al Faruque in combination disclose all of the limitations of claim 1 as discussed above, and Lukarski additionally teaches determining a time stamp of the environment sensor data and of the map data. See at least col. 5 lines 8 – 38, wherein the information from the data sources used to obtain node information is timestamped. wherein a scene includes the information of the environment sensor data and the map data for one time stamp, and wherein a sequence is a series of successive scenes, and organizing the nodes of the knowledge graph with respect to the scenes and sequences of the environment sensor data and the map data. See at least col. 5 lines 8 – 38, wherein the graph is organized into scenes, by including multiple nodes representing objects at particular time stamps. Edges are formed in the graph to indicate succession. For example, a tracked object in the scene at time T1 is represented by node N1, and the same object in the scene at time T2 is represented by node N2. The two nodes form the sequence N1 – μl – N2, wherein μl is used to represent the dynamics of the scene between T1 and T2. Lukarski remains silent on organizing the environment sensor data and the map data in scenes and sequences. As discussed above, Lukarski teaches organizing the nodes generated from sensor/map data into scenes and sequences, rather than the sensor data and the map data itself. Al Faruque teaches organizing the environment sensor data and the map data in scenes and sequences. 
See at least [0118] and figure 7, wherein the sensor data is organized into sequences (video clips) of successive scenes (image frame) during sensor data preprocessing. One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Lukarski with Al Faruque’s technique of preprocessing input datasets by organizing them into scenes and sequences. It would have been obvious to modify because doing so enables models to assess the risk of driving maneuvers with better performance, as recognized by Al Faruque (see at least [0022] and [0027]-[0029]). Regarding claim 9, Lukarski and Al Faruque in combination disclose all of the limitations of claim 1 as discussed above, and Lukarski additionally teaches wherein the information of the map data considered in generating the knowledge graph is limited to an area of a possible path of the traffic participant. See at least col. 19 lines 22 – 50 and figure 9, wherein the graph generation only uses map information 952 from the map representing the expected vicinity of the ego vehicle during its journey. Regarding claim 10, Lukarski and Al Faruque in combination disclose all of the limitations of claim 1 as discussed above, and Lukarski additionally teaches wherein the method is executed during a driving operation of the ego-vehicle. See at least col. 22 lines 4 – 15 and figure 10 (step 1013), wherein the steps 1001-1013 are performed iteratively as the vehicle is being controlled. Regarding claim 11, Lukarski and Al Faruque in combination disclose all of the limitations of claim 1 as discussed above, and Lukarski additionally teaches predicting, by a motion prediction module, a future motion of at least one traffic participant positioned in the environment of an ego-vehicle based on the knowledge graph. See at least col. 7 lines 10 – 47, col. 8 lines 25 – 42, col. 21 line 45 – col. 
22 line 3, and figure 10 (step 1010), wherein the generated scene graph is used to plan the ego vehicle's trajectory, by predicting movements of the detected dynamic objects near the ego vehicle. Regarding claim 12, Lukarski and Al Faruque in combination disclose all of the limitations of claim 11 as discussed above, and Lukarski additionally teaches wherein the motion prediction module includes a trained artificial intelligence capable of predicting the motion of the traffic participant based on information of the knowledge graph. See at least col. 7 lines 10 – 47, col. 8 lines 25 – 42, col. 21 line 45 – col. 22 line 3, and figure 10 (step 1010), wherein the predictions of future states of the vehicle's environment are performed by trained reasoning algorithms/models. See at least col. 4 lines 16 – 34 and figure 1, wherein the reasoning algorithms/models used to plan the vehicle's motion/behavior are machine learning models. Regarding claim 13, Lukarski and Al Faruque in combination disclose all of the limitations of claim 11 as discussed above, and Lukarski additionally teaches further comprising: executing at least one control function of the ego-vehicle based on the predicted motion of the at least one traffic participant. See at least col. 7 lines 10 – 47, col. 8 lines 25 – 42, col. 21 line 45 – col. 22 line 15, and figure 10 (step 1010), wherein the generated scene graph is used to plan the ego vehicle's trajectory, by predicting movements of the detected dynamic objects near the ego vehicle. The vehicle's braking/steering/acceleration subsystems are then controlled by a motion control directive based on the planned trajectory. Regarding claim 14, Lukarski teaches A computing unit configured to generate a knowledge graph for traffic motion prediction. See at least figures 1 and 10. Additionally, see at least figure 11 (computing device 9000).
the computing unit configured to: receive environment sensor data of at least one environment sensor of an ego-vehicle, wherein the environment sensor data represent an environment of the ego-vehicle and include information regarding at least one traffic participant located in the environment of the ego-vehicle. See at least col. 4 line 35 – col. 5 line 7, col. 8 line 43 – col. 9 line 22, col. 21 lines 1 – 18, figure 1 (block 112), and figure 10 (step 1001), wherein sensors of a vehicle collect sensor data regarding an environment of the ego vehicle 110. The sensor data includes data on static and dynamic objects in the environment surrounding the vehicle. See at least col. 13 lines 9 – 33, wherein dynamic objects include traffic participants such as cars, buses, trucks, motorcycles, trains, bicycles, pedestrians, etc. receive map data from an electronic road map, wherein the map data represent a road network in the environment of the ego-vehicle and include information regarding at least one motion track the traffic participant is positioned on. See at least col. 4 line 35 – col. 5 line 7, col. 7 lines 48 – 64, col. 10 line 50 – col. 11 line 16, col. 12 line 22 – col. 13 line 8, col. 21 lines 1 – 18, figure 1 (blocks 114 and 175), figure 2, figure 3 (map(s) 374), and figure 10 (step 1001), wherein map data is received via communication devices from external sources. The map data represents static objects in the environment of the ego-vehicle such as lane segments, and includes information on static objects related to dynamic objects (traffic participants) in the environment. extract the information regarding the at least one traffic participant from the environment sensor data. See at least col. 8 line 43 – col. 9 line 22 and figure 1 (block 165), wherein analyzers extract object identification information from the environment sensor data. and extract the information regarding the motion track the traffic participant is positioned on from the map data. 
See at least col. 12 line 22 – col. 13 line 8, col. 19 line 22 – col. 20 line 67, and figure 9, wherein information on static objects associated with detected dynamic objects is obtained from map data 952. and generate a graph of the road network in the environment of the ego-vehicle including nodes and edges based on the map data and/or the environment sensor data. See at least col. 4 line 35 – col. 5 line 7, col. 21 lines 19 – 35, figure 1 (block 115), and figure 10 (step 1004), wherein a unified scene graph is created, representing the environment of the vehicle and including nodes and edges based on the received map and sensor data. wherein the graph includes at least one node representing the traffic participant, and at least one node representing the motion track the traffic participant is positioned on. See at least col. 15 line 28 – 67 and figure 6, wherein the graph 610 includes nodes P12 (pedestrian) and OV11 (other vehicle) representing traffic participants, and additionally includes nodes LS2 and LD8, representing the lane segments that the pedestrian and other vehicle, respectively, are located on. Lukarski teaches all of the limitations of claim 1 as discussed above, but remains silent as to the specifics of generating a knowledge graph. As discussed above, Lukarski discloses generating a scene graph representing the environment of the vehicle. Al Faruque teaches generating a knowledge graph. See at least [0005], [0022] and figure 3, wherein the generated scene graph is a knowledge graph that encodes knowledge about a visual scene. One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Lukarski with Al Faruque’s knowledge graph. It would have been obvious to modify because doing so enables models to assess the risk of driving maneuvers with better performance, as recognized by Al Faruque (see at least [0022] and [0027]-[0029]). 
Regarding claim 15, Lukarski teaches A non-transitory computer-readable storage medium on which is stored a computer program including instructions for generating a knowledge graph for traffic motion prediction.

See at least figures 1 and 10. Additionally, see at least figure 11 (main memory 9020).

the instructions, when executed by a data processor, causing the data processor to perform the following steps: receiving environment sensor data of at least one environment sensor of an ego-vehicle, wherein the environment sensor data represent an environment of the ego-vehicle and include information regarding at least one traffic participant located in the environment of the ego-vehicle.

See at least col. 4 line 35 – col. 5 line 7, col. 8 line 43 – col. 9 line 22, col. 21 lines 1 – 18, figure 1 (block 112), and figure 10 (step 1001), wherein sensors of a vehicle collect sensor data regarding an environment of the ego vehicle 110. The sensor data include data on static and dynamic objects in the environment surrounding the vehicle. See at least col. 13 lines 9 – 33, wherein dynamic objects include traffic participants such as cars, buses, trucks, motorcycles, trains, bicycles, pedestrians, etc.

receiving map data from an electronic road map, wherein the map data represent a road network in the environment of the ego-vehicle and include information regarding at least one motion track the traffic participant is positioned on.

See at least col. 4 line 35 – col. 5 line 7, col. 7 lines 48 – 64, col. 10 line 50 – col. 11 line 16, col. 12 line 22 – col. 13 line 8, col. 21 lines 1 – 18, figure 1 (blocks 114 and 175), figure 2, figure 3 (map(s) 374), and figure 10 (step 1001), wherein map data are received via communication devices from external sources. The map data represent static objects in the environment of the ego-vehicle, such as lane segments, and include information on static objects related to dynamic objects (traffic participants) in the environment.

extracting the information regarding the at least one traffic participant from the environment sensor data.

See at least col. 8 line 43 – col. 9 line 22 and figure 1 (block 165), wherein analyzers extract object identification information from the environment sensor data.

and extracting the information regarding the motion track the traffic participant is positioned on from the map data.

See at least col. 12 line 22 – col. 13 line 8, col. 19 line 22 – col. 20 line 67, and figure 9, wherein information on static objects associated with detected dynamic objects is obtained from map data 952.

and generating a graph of the road network in the environment of the ego-vehicle including nodes and edges based on the map data and/or the environment sensor data.

See at least col. 4 line 35 – col. 5 line 7, col. 21 lines 19 – 35, figure 1 (block 115), and figure 10 (step 1004), wherein a unified scene graph is created, representing the environment of the vehicle and including nodes and edges based on the received map and sensor data.

wherein the graph includes at least one node representing the traffic participant, and at least one node representing the motion track the traffic participant is positioned on.

See at least col. 15 lines 28 – 67 and figure 6, wherein the graph 610 includes nodes P12 (pedestrian) and OV11 (other vehicle) representing traffic participants, and additionally includes nodes LS2 and LD8, representing the lane segments that the pedestrian and the other vehicle, respectively, are located on.

Lukarski teaches all of the limitations of claim 15 as discussed above, but remains silent as to the specifics of generating a knowledge graph. As discussed above, Lukarski discloses generating a scene graph representing the environment of the vehicle. Al Faruque teaches generating a knowledge graph. See at least [0005], [0022] and figure 3, wherein the generated scene graph is a knowledge graph that encodes knowledge about a visual scene.
One having ordinary skill in the art, before the effective filing date of the claimed invention, would have found it obvious to modify Lukarski with Al Faruque's knowledge graph, because doing so enables models to assess the risk of driving maneuvers with better performance, as recognized by Al Faruque (see at least [0022] and [0027]-[0029]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Selena M. Jin, whose telephone number is (408) 918-7588. The examiner can normally be reached Monday - Thursday and alternate Fridays, 7:30-4:30 PT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Faris Almatrahi, can be reached at (313) 446-4821. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.M.J./
Examiner, Art Unit 3667

/FARIS S ALMATRAHI/
Supervisory Patent Examiner, Art Unit 3667
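The structure the rejection maps, a graph with nodes for traffic participants and for the motion tracks (lane segments) they occupy, joined by edges relating the two, can be sketched as a minimal data structure. This is illustrative only: the node identifiers mirror the figure labels cited in the office action (P12, OV11, LS2, LD8), while every class and relation name is hypothetical and not drawn from either reference.

```python
# Minimal sketch of the claimed knowledge-graph structure (hypothetical code,
# not from Lukarski or Al Faruque): nodes for traffic participants extracted
# from sensor data and for the motion tracks / lane segments from map data,
# with edges recording which track each participant is positioned on.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Node:
    node_id: str    # e.g. "P12" (pedestrian), "LS2" (lane segment)
    node_type: str  # "traffic_participant" or "motion_track"


@dataclass
class KnowledgeGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node_id: str, node_type: str) -> None:
        self.nodes[node_id] = Node(node_id, node_type)

    def add_edge(self, source_id: str, target_id: str, relation: str) -> None:
        # An edge such as ("P12", "LS2", "is_positioned_on") records that a
        # traffic participant occupies a particular motion track.
        self.edges.append((source_id, target_id, relation))


# Mirroring the claim mapping above: pedestrian P12 on lane segment LS2,
# other vehicle OV11 on lane segment LD8.
graph = KnowledgeGraph()
graph.add_node("P12", "traffic_participant")
graph.add_node("OV11", "traffic_participant")
graph.add_node("LS2", "motion_track")
graph.add_node("LD8", "motion_track")
graph.add_edge("P12", "LS2", "is_positioned_on")
graph.add_edge("OV11", "LD8", "is_positioned_on")
```

In this sketch the "knowledge graph" of Al Faruque differs from a plain scene graph only in that the edges carry typed relations, which is the crux of the examiner's combination.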

Prosecution Timeline

Sep 10, 2024
Application Filed
Dec 19, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594988
Method of Braking Automated Guided Vehicle, and Automated Guided Vehicle
2y 5m to grant · Granted Apr 07, 2026
Patent 12553728
VEHICLE EFFICIENCY PREDICTION AND CONTROL
2y 5m to grant · Granted Feb 17, 2026
Patent 12530697
DENIAL OF SERVICE SYSTEMS AND METHODS
2y 5m to grant · Granted Jan 20, 2026
Patent 12448745
SELECTIVE ELECTROMAGNETIC DEVICE FOR VEHICLES
2y 5m to grant · Granted Oct 21, 2025
Patent 12441333
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND SERVER
2y 5m to grant · Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
39%
Grant Probability
72%
With Interview (+32.8%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 116 resolved cases by this examiner. Grant probability derived from career allow rate.
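The "With Interview" projection appears to follow from simply adding the interview lift to the base grant probability. The additive model below is an assumption for illustration; the dashboard does not state its actual methodology.

```python
# Apparent arithmetic behind the projection figures (assumed additive model;
# the tool's real methodology is not disclosed).
base_grant_rate = 0.39   # examiner career allow rate (39%)
interview_lift = 0.328   # reported lift with interview (+32.8%)

with_interview = base_grant_rate + interview_lift
print(round(with_interview * 100))  # prints 72, matching the dashboard
```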
