Prosecution Insights
Last updated: April 18, 2026
Application No. 18/506,352

SYSTEMS AND METHODS FOR ESTIMATING A TRAFFIC PATTERN FROM SPARSE DATA

Final Rejection (§103)

Filed: Nov 10, 2023
Examiner: ALGEHAIM, MOHAMED A
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Denso International America Inc.
OA Round: 2 (Final)

Grant Probability: 59% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 3m
Grant Probability with Interview: 81%

Examiner Intelligence

Career Allow Rate: 59% (grants 59% of resolved cases; 122 granted / 207 resolved; +6.9% vs TC avg)
Interview Lift: +21.9% (strong lift on resolved cases with interview vs without)
Avg Prosecution: 3y 3m (typical timeline)
Career History: 244 total applications across all art units; 37 currently pending
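For readers checking these headline figures, the arithmetic is simple to reproduce. The sketch below assumes the dashboard's (undocumented) definitions: allow rate = granted / resolved, and the with-interview figure = base allow rate plus the quoted lift.

```python
# Reproduces the dashboard arithmetic from the figures quoted above.
# Assumed definitions (not documented by the tool): allow rate = granted / resolved;
# with-interview rate = base allow rate + interview lift (percentage points).
granted, resolved = 122, 207
allow_rate = granted / resolved * 100      # career allow rate, percent
interview_lift = 21.9                      # quoted lift, percentage points

with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1f}%")                # ~58.9%, shown as 59%
print(f"Implied with-interview rate: {with_interview:.1f}%")  # ~80.8%, shown as 81%
```

The implied 80.8% rounds to the 81% "With Interview" figure in the header, which suggests the tool derives that number the same way.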

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§102: 15.6% (-24.4% vs TC avg)
§112: 15.3% (-24.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 207 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 of U.S. Application No. 18/506,352, as amended on 12/29/2025, have been examined. This Office Action is in response to the Applicant's amendments and remarks filed 12/29/2025. Claims 1, 3, 7, 9-10, 12, 14, 18, & 20 are presently amended. Claims 1-20 are presently pending and are presented for examination.

Response to Arguments

In regards to the previous rejections under 35 U.S.C. § 101: the amendments to the claims overcome the previous 35 U.S.C. § 101 rejection. Therefore, the previous 35 U.S.C. § 101 rejection is withdrawn.

In regards to the previous rejection under 35 U.S.C. § 102: Applicant's arguments with respect to the independent claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. A new ground of rejection is made in view of US 2019/0180612 A1 ("Demiryurek").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 6, 10, 12, & 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0068123 A1 ("Guo"), in view of US 2019/0180612 A1 ("Demiryurek").

As per claim 1, Guo discloses: An estimation system (see at least Guo, para.
[0023]), comprising: a memory storing instructions that, when executed by a processor (see at least Guo, para. [0023]: The traffic system 200 is shown as including a processor 205. In one embodiment, the traffic system 200 includes a memory 210 that stores an aggregation module 220 and a graphing module 230.), cause the processor to:

form multi-channel data from partial data and timing data about traffic on a road, and a vehicle and road sensors capture and output the multi-channel data, the partial data is acquired from the road sensors (see at least Guo, para. [0027-0028]: Thus, in one embodiment, the data store 240 stores data used by the modules 220 and 230 in executing various functions. In one embodiment, the data store 240 includes the sensor data 250 along with, for example, metadata that characterize various aspects of the sensor data 250. For example, the metadata can include location coordinates (e.g., longitude and latitude), relative map coordinates or tile identifiers, time/date stamps from when the separate sensor data 250 was generated, and so on…. In one embodiment, the data store 240 may also include the graph structure 260, the completion target 270, and the graph model 280. The traffic system 200 may generate the graph structure 260 according to aggregating structured sensor and perception data from information sources to a server. A graph structure 260 may be a graph that illustrates traffic intersections and vehicle flows in a geographic area. & para. [0041]: The traffic system 400 may generate a graph structure from data 410 according to the road network 420 and the traffic information received from SRVs. The graph structure of data 410 may indicate each traffic intersection as a traffic or road link node and vertex 430. For example, the graph structure from data 410 may include V.sub.1-V.sub.20 vertices. Each vertex may be associated with a k-dimension vector or feature that represents reported traffic information from SRVs such as speed, flow rates, density, and so on. & para. [0056]);

generate an adjacency matrix from the partial data and geometry about the road (see at least Guo, para. [0043]: The neural network model 510 may use matrices G, A, and E to form a clean and complete graph model of the traffic flows associated with a road network. The neural network model 510 may use a graph structure denoted as a matrix G with the dimensions N×k. The variable N may be the total number of vertices and k the feature vector associated with each vertex. The variable k-dimension may represent reported traffic information from SRVs such as speed, flow rates, density, and so on. A connection relation, such as between vertices, may be represented by an adjacency matrix A with the dimension N×N.); and

train a graph model using temporal patterns for graph nodes from the multi-channel data and the adjacency matrix to complete the partial data, output a traffic pattern (see at least Guo, para. [0048]: In one approach, a traffic system may train the neural network model 510 using a completed, cleaned, or corrected ground-truth G, such as for supervised learning. A module may train the neural network model 510 using perception data. By minimizing the error |Ġ-Ĝ| and back-propagating the derivative, the parameterized encoder-decoder network updates their weights to reach a stable point. The neural network model 510 may use the learned parameters or weights for mapping the data to filter out noise values and fill in or correct missing values to structure, complete, or clean graphed perception data. In this way, the neural network model 510 may generate the reconstruction matrix Ĝ with satisfactory confidence levels for the completed and cleaned data values when inferred with the noisy and incomplete matrix G.), and execute an automated task by the vehicle (see at least Guo, para. [0024]: The traffic system 200 may use the perception data to complete a graph model of the traffic flows and communicate the graph model to the vehicle 100, thereby improving traffic, congestion, navigation, automated driving maneuvers, automated motion plans, and so on. & para. [0055-0057]).

However, Guo does not explicitly disclose form multi-channel data from partial data and signal phase and timing data (SPAT) information about traffic on a road, and channels from the multi-channel data have a first layer representing the partial data and a second layer representing the SPAT information.

Demiryurek teaches form multi-channel data from partial data and signal phase and timing data (SPAT) information about traffic on a road, and channels from the multi-channel data have a first layer representing the partial data and a second layer representing the SPAT information (see at least Demiryurek, para. [0122-0125]: In particular, relatively large scale high-resolution (both spatial and temporal) traffic sensors (loop detectors) were used to collect a data set from highways and arterial streets in Los Angeles County. The dataset includes both inventory and real-time data for 15,000 traffic sensors covering approximately 3420 miles…. Sensor data between March 2014 and April 2014 was chosen for experimentation. This sensor data includes more than 60 million records of readings. The Los Angeles road network used in the experiment was obtained from HERE Map dataset. Two subgraphs were created of the Los Angeles road network, including a SMALL network and a LARGE network. The SMALL network contains 5984 vertices and 12,538 edges. 1642 sensors were mapped to the SMALL network. The LARGE network contains 8242 vertices and 19,986 edges. 4048 sensors were mapped to the LARGE network. FIG. 8 illustrates a low definition representation of sensor locations 802 and road network segments 804. After mapping the sensor data, two months of network snapshots were obtained for both the SMALL and the LARGE networks…. For edge traffic prediction, results are compared with LSM-RN-Naïve, in which the formulations from LSM-SN were adapted by combining the topology and temporal correlations. Additionally, LSM-RN-Naïve uses a Naïve incremental learning strategy which independently learns the latent attributes of each timestamp first, then learns the transition matrix. The algorithms are also compared with two representative timeseries prediction methods: a linear model (i.e., ARIMA) and a nonlinear model (i.e., SVR). Each model was trained independently for each timeseries using historical data. In addition, because the methods may be negatively affected due to missing values during the prediction stages (i.e., some of the input readings for ARIMA and SVR may be 0), ARIMA-Sp and SVR-Sp were considered. ARIMA-Sp and SVR-Sp use completed readings from the global learning algorithm to provide a fair comparison. The Tensor method was also implemented, however, this method cannot address the sparsity problem of the dataset and thus produces meaningless results (i.e., most of the prediction values are relatively close to 0)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of forming multi-channel data from partial data and signal phase and timing data (SPAT) information about traffic on a road, where channels from the multi-channel data have a first layer representing the partial data and a second layer representing the SPAT information, of Demiryurek, with a reasonable expectation of success, in order to provide accurate and relatively fast prediction of future traffic information (see at least Demiryurek, para. [0005]).
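The matrix shapes the examiner cites from Guo (a node-feature matrix G of dimension N×k, an adjacency matrix A of dimension N×N) and the claimed two-channel layering can be pictured concretely. The sketch below is illustrative only: the network size, feature values, and the SPaT channel contents are invented for this note, not taken from Guo, Demiryurek, or the application.

```python
import numpy as np

# Toy road network: N vertices (intersections), k-dimensional feature per vertex.
N, k = 5, 3  # invented sizes for illustration

# Graph structure G (N x k): each row holds reported traffic features for one
# vertex, e.g. [speed, flow rate, density]; NaN marks missing (partial) data.
G = np.full((N, k), np.nan)
G[0] = [45.0, 120.0, 0.3]   # only some vertices report readings
G[3] = [30.0, 200.0, 0.6]

# Adjacency matrix A (N x N): connection relation between vertices, derived
# here from road geometry (1.0 = directly connected road links).
A = np.zeros((N, N))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]:
    A[i, j] = A[j, i] = 1.0

# Multi-channel input as claimed: channel 0 carries the partial sensor data,
# channel 1 a SPaT-derived signal (e.g. seconds of green remaining per node).
spat = np.array([12.0, 0.0, 7.0, 25.0, 0.0]).reshape(N, 1)
multi_channel = np.stack([np.nan_to_num(G[:, :1]), spat])  # shape (2, N, 1)

print(G.shape, A.shape, multi_channel.shape)  # (5, 3) (5, 5) (2, 5, 1)
```

A graph model of the kind Guo describes would then consume `multi_channel` and `A` and learn to reconstruct the missing entries of G; nothing here models that training step.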
As per claim 6, Guo discloses further including instructions to derive the adjacency matrix using a relationship between the graph nodes that factors any one of a travel distance, a road layout, and a time-series correlation (see at least Guo, para. [0042-0043]: A neural network model may improve graphing or mapping data by defining constraints for de-noising. A constraint may limit, reduce, or refine a prediction space for data thereby improving prediction accuracy and speed. For example, the edge information may include details of driving conditions, road constraints, map constraints, traffic incidents, and so on. The edge information may also specify the length between two adjacent nodes as a travel time constraint, a road curvature as a maneuvering constraint, a number of lanes as a road capacity constraint, and so on.).

As per claim 10, Guo discloses: A non-transitory computer-readable medium comprising (see at least Guo, para. [0023]): instructions that, when executed by a processor (see at least Guo, para. [0023]: The traffic system 200 is shown as including a processor 205. In one embodiment, the traffic system 200 includes a memory 210 that stores an aggregation module 220 and a graphing module 230.), cause the processor to:

form multi-channel data from partial data and timing data about traffic on a road, and a vehicle and road sensors capture and output the multi-channel data, the partial data is acquired from the road sensors (see at least Guo, para. [0027-0028]: Thus, in one embodiment, the data store 240 stores data used by the modules 220 and 230 in executing various functions. In one embodiment, the data store 240 includes the sensor data 250 along with, for example, metadata that characterize various aspects of the sensor data 250. For example, the metadata can include location coordinates (e.g., longitude and latitude), relative map coordinates or tile identifiers, time/date stamps from when the separate sensor data 250 was generated, and so on…. In one embodiment, the data store 240 may also include the graph structure 260, the completion target 270, and the graph model 280. The traffic system 200 may generate the graph structure 260 according to aggregating structured sensor and perception data from information sources to a server. A graph structure 260 may be a graph that illustrates traffic intersections and vehicle flows in a geographic area. & para. [0041]: The traffic system 400 may generate a graph structure from data 410 according to the road network 420 and the traffic information received from SRVs. The graph structure of data 410 may indicate each traffic intersection as a traffic or road link node and vertex 430. For example, the graph structure from data 410 may include V.sub.1-V.sub.20 vertices. Each vertex may be associated with a k-dimension vector or feature that represents reported traffic information from SRVs such as speed, flow rates, density, and so on. & para. [0056]);

generate an adjacency matrix from the partial data and geometry about the road (see at least Guo, para. [0043]: The neural network model 510 may use matrices G, A, and E to form a clean and complete graph model of the traffic flows associated with a road network. The neural network model 510 may use a graph structure denoted as a matrix G with the dimensions N×k. The variable N may be the total number of vertices and k the feature vector associated with each vertex. The variable k-dimension may represent reported traffic information from SRVs such as speed, flow rates, density, and so on. A connection relation, such as between vertices, may be represented by an adjacency matrix A with the dimension N×N.); and

train a graph model using temporal patterns for graph nodes from the multi-channel data and the adjacency matrix to complete the partial data, output a traffic pattern (see at least Guo, para. [0048]: In one approach, a traffic system may train the neural network model 510 using a completed, cleaned, or corrected ground-truth G, such as for supervised learning. A module may train the neural network model 510 using perception data. By minimizing the error |Ġ-Ĝ| and back-propagating the derivative, the parameterized encoder-decoder network updates their weights to reach a stable point. The neural network model 510 may use the learned parameters or weights for mapping the data to filter out noise values and fill in or correct missing values to structure, complete, or clean graphed perception data. In this way, the neural network model 510 may generate the reconstruction matrix Ĝ with satisfactory confidence levels for the completed and cleaned data values when inferred with the noisy and incomplete matrix G.), and execute an automated task by the vehicle (see at least Guo, para. [0024]: The traffic system 200 may use the perception data to complete a graph model of the traffic flows and communicate the graph model to the vehicle 100, thereby improving traffic, congestion, navigation, automated driving maneuvers, automated motion plans, and so on. & para. [0055-0057]).

However, Guo does not explicitly disclose form multi-channel data from partial data and signal phase and timing data (SPAT) information about traffic on a road, and channels from the multi-channel data have a first layer representing the partial data and a second layer representing the SPAT information.
Demiryurek teaches form multi-channel data from partial data and signal phase and timing data (SPAT) information about traffic on a road, and channels from the multi-channel data have a first layer representing the partial data and a second layer representing the SPAT information (see at least Demiryurek, para. [0122-0125]: In particular, relatively large scale high-resolution (both spatial and temporal) traffic sensors (loop detectors) were used to collect a data set from highways and arterial streets in Los Angeles County. The dataset includes both inventory and real-time data for 15,000 traffic sensors covering approximately 3420 miles…. Sensor data between March 2014 and April 2014 was chosen for experimentation. This sensor data includes more than 60 million records of readings. The Los Angeles road network used in the experiment was obtained from HERE Map dataset. Two subgraphs were created of the Los Angeles road network, including a SMALL network and a LARGE network. The SMALL network contains 5984 vertices and 12,538 edges. 1642 sensors were mapped to the SMALL network. The LARGE network contains 8242 vertices and 19,986 edges. 4048 sensors were mapped to the LARGE network. FIG. 8 illustrates a low definition representation of sensor locations 802 and road network segments 804. After mapping the sensor data, two months of network snapshots were obtained for both the SMALL and the LARGE networks…. For edge traffic prediction, results are compared with LSM-RN-Naïve, in which the formulations from LSM-SN were adapted by combining the topology and temporal correlations. Additionally, LSM-RN-Naïve uses a Naïve incremental learning strategy which independently learns the latent attributes of each timestamp first, then learns the transition matrix. The algorithms are also compared with two representative timeseries prediction methods: a linear model (i.e., ARIMA) and a nonlinear model (i.e., SVR). Each model was trained independently for each timeseries using historical data. In addition, because the methods may be negatively affected due to missing values during the prediction stages (i.e., some of the input readings for ARIMA and SVR may be 0), ARIMA-Sp and SVR-Sp were considered. ARIMA-Sp and SVR-Sp use completed readings from the global learning algorithm to provide a fair comparison. The Tensor method was also implemented, however, this method cannot address the sparsity problem of the dataset and thus produces meaningless results (i.e., most of the prediction values are relatively close to 0)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of forming multi-channel data from partial data and signal phase and timing data (SPAT) information about traffic on a road, where channels from the multi-channel data have a first layer representing the partial data and a second layer representing the SPAT information, of Demiryurek, with a reasonable expectation of success, in order to provide accurate and relatively fast prediction of future traffic information (see at least Demiryurek, para. [0005]).

As per claim 12, Guo discloses: A method comprising: forming multi-channel data from partial data and timing data about traffic on a road, and a vehicle and road sensors capture and output the multi-channel data, the partial data is acquired from the road sensors (see at least Guo, para. [0027-0028]: Thus, in one embodiment, the data store 240 stores data used by the modules 220 and 230 in executing various functions. In one embodiment, the data store 240 includes the sensor data 250 along with, for example, metadata that characterize various aspects of the sensor data 250.
For example, the metadata can include location coordinates (e.g., longitude and latitude), relative map coordinates or tile identifiers, time/date stamps from when the separate sensor data 250 was generated, and so on…. In one embodiment, the data store 240 may also include the graph structure 260, the completion target 270, and the graph model 280. The traffic system 200 may generate the graph structure 260 according to aggregating structured sensor and perception data from information sources to a server. A graph structure 260 may be a graph that illustrates traffic intersections and vehicle flows in a geographic area. & para. [0041]: The traffic system 400 may generate a graph structure from data 410 according to the road network 420 and the traffic information received from SRVs. The graph structure of data 410 may indicate each traffic intersection as a traffic or road link node and vertex 430. For example, the graph structure from data 410 may include V.sub.1-V.sub.20 vertices. Each vertex may be associated with a k-dimension vector or feature that represents reported traffic information from SRVs such as speed, flow rates, density, and so on. & para. [0056]);

generating an adjacency matrix from the partial data and geometry about the road (see at least Guo, para. [0043]: The neural network model 510 may use matrices G, A, and E to form a clean and complete graph model of the traffic flows associated with a road network. The neural network model 510 may use a graph structure denoted as a matrix G with the dimensions N×k. The variable N may be the total number of vertices and k the feature vector associated with each vertex. The variable k-dimension may represent reported traffic information from SRVs such as speed, flow rates, density, and so on. A connection relation, such as between vertices, may be represented by an adjacency matrix A with the dimension N×N.); and

training a graph model using temporal patterns for graph nodes from the multi-channel data and the adjacency matrix to complete the partial data, output a traffic pattern (see at least Guo, para. [0048]: In one approach, a traffic system may train the neural network model 510 using a completed, cleaned, or corrected ground-truth G, such as for supervised learning. A module may train the neural network model 510 using perception data. By minimizing the error |Ġ-Ĝ| and back-propagating the derivative, the parameterized encoder-decoder network updates their weights to reach a stable point. The neural network model 510 may use the learned parameters or weights for mapping the data to filter out noise values and fill in or correct missing values to structure, complete, or clean graphed perception data. In this way, the neural network model 510 may generate the reconstruction matrix Ĝ with satisfactory confidence levels for the completed and cleaned data values when inferred with the noisy and incomplete matrix G.), and executing an automated task by the vehicle (see at least Guo, para. [0024]: The traffic system 200 may use the perception data to complete a graph model of the traffic flows and communicate the graph model to the vehicle 100, thereby improving traffic, congestion, navigation, automated driving maneuvers, automated motion plans, and so on. & para. [0055-0057]).

However, Guo does not explicitly disclose forming multi-channel data from partial data and signal phase and timing data (SPAT) information about traffic on a road, and channels from the multi-channel data have a first layer representing the partial data and a second layer representing the SPAT information.

Demiryurek teaches forming multi-channel data from partial data and signal phase and timing data (SPAT) information about traffic on a road, and channels from the multi-channel data have a first layer representing the partial data and a second layer representing the SPAT information (see at least Demiryurek, para. [0122-0125]: In particular, relatively large scale high-resolution (both spatial and temporal) traffic sensors (loop detectors) were used to collect a data set from highways and arterial streets in Los Angeles County. The dataset includes both inventory and real-time data for 15,000 traffic sensors covering approximately 3420 miles…. Sensor data between March 2014 and April 2014 was chosen for experimentation. This sensor data includes more than 60 million records of readings. The Los Angeles road network used in the experiment was obtained from HERE Map dataset. Two subgraphs were created of the Los Angeles road network, including a SMALL network and a LARGE network. The SMALL network contains 5984 vertices and 12,538 edges. 1642 sensors were mapped to the SMALL network. The LARGE network contains 8242 vertices and 19,986 edges. 4048 sensors were mapped to the LARGE network. FIG. 8 illustrates a low definition representation of sensor locations 802 and road network segments 804. After mapping the sensor data, two months of network snapshots were obtained for both the SMALL and the LARGE networks…. For edge traffic prediction, results are compared with LSM-RN-Naïve, in which the formulations from LSM-SN were adapted by combining the topology and temporal correlations. Additionally, LSM-RN-Naïve uses a Naïve incremental learning strategy which independently learns the latent attributes of each timestamp first, then learns the transition matrix. The algorithms are also compared with two representative timeseries prediction methods: a linear model (i.e., ARIMA) and a nonlinear model (i.e., SVR). Each model was trained independently for each timeseries using historical data. In addition, because the methods may be negatively affected due to missing values during the prediction stages (i.e., some of the input readings for ARIMA and SVR may be 0), ARIMA-Sp and SVR-Sp were considered. ARIMA-Sp and SVR-Sp use completed readings from the global learning algorithm to provide a fair comparison. The Tensor method was also implemented, however, this method cannot address the sparsity problem of the dataset and thus produces meaningless results (i.e., most of the prediction values are relatively close to 0)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of forming multi-channel data from partial data and signal phase and timing data (SPAT) information about traffic on a road, where channels from the multi-channel data have a first layer representing the partial data and a second layer representing the SPAT information, of Demiryurek, with a reasonable expectation of success, in order to provide accurate and relatively fast prediction of future traffic information (see at least Demiryurek, para. [0005]).

As per claim 17, Guo discloses further comprising deriving the adjacency matrix using a relationship between the graph nodes that factors any one of a travel distance, a road layout, and a time-series correlation (see at least Guo, para. [0042-0043]: A neural network model may improve graphing or mapping data by defining constraints for de-noising. A constraint may limit, reduce, or refine a prediction space for data thereby improving prediction accuracy and speed. For example, the edge information may include details of driving conditions, road constraints, map constraints, traffic incidents, and so on.
The edge information may also specify the length between two adjacent nodes as a travel time constraint, a road curvature as a maneuvering constraint, a number of lanes as a road capacity constraint, and so on.). Claim(s) 2, 11, & 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guo, in view of Demiryurek, in view of US 2021/0064999A1 (“Liu 999` ”). As per claim 2 Guo discloses the graph nodes represent spatial locations of the road sensors between lane bounds associated with the road (see at least Guo, para. [0041]: The traffic system 400 may generate a graph structure from data 410 according to the road network 420 and the traffic information received from SRVs. The graph structure of data 410 may indicate each traffic intersection as a traffic or road link node and vertex 430. For example, the graph structure from data 410 may include V.sub.1-V.sub.20 vertices. Each vertex may be associated with ak-dimension vector or feature that represents reported traffic information from SRVs such as speed, flow rates, density, and so on.). However Guo does not explicitly disclose to estimate the temporal patterns by a learning model from time-series data associated with the graph nodes individually Liu 999` teaches to estimate the temporal patterns by a learning model from time-series data associated with the graph nodes individually (see at least Liu 999` , para. [003-0033]: After K layers of the MG-MGCN, the vectors XKs,r can be fed to a fully-connected layer to obtain spatial feature vectors of IV) sites, xs,r E JR I v,ixD,, which encodes both the fine-grained and coarse-grained multi-graph spatial correlations. Then xs,r, at multiple time steps, is used for subsequent temporal modeling. 
This processing can be done in parallel for spatial correlations at fine granularity ( e.g., site-level) and at coarse granularity (e.g., region-level), with separate MG-MGCN networks handling each granularity output….Block 204 performs temporal modeling, where complex temporal correlations also exist in different time steps. For example, the traffic volume of previous weeks, previous days, and previous hours can all affect the traffic volume of an upcoming time slot. To predict the traffic volume accurately, the correlations between previous time steps are discovered and utilized. ) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of to estimate the temporal patterns by a learning model from time-series data associated with the graph nodes individually of Liu 999`, with a reasonable expectation of success, in order to enhance the site-level traffic volume prediction (see at least Liu 999`, para. [0016]). As per claim 11 Guo discloses the graph nodes represent spatial locations of the road sensors between lane bounds associated with the road (see at least Guo, para. [0041]: The traffic system 400 may generate a graph structure from data 410 according to the road network 420 and the traffic information received from SRVs. The graph structure of data 410 may indicate each traffic intersection as a traffic or road link node and vertex 430. For example, the graph structure from data 410 may include V.sub.1-V.sub.20 vertices. Each vertex may be associated with ak-dimension vector or feature that represents reported traffic information from SRVs such as speed, flow rates, density, and so on.). However Guo does not explicitly disclose further including instructions to estimate the temporal patterns by a learning model from time-series data associated with the graph nodes individually. 
Liu '999 teaches further including instructions to estimate the temporal patterns by a learning model from time-series data associated with the graph nodes individually (see at least Liu '999, para. [0032-0033]: After K layers of the MG-MGCN, the vectors X^K_{s,r} can be fed to a fully-connected layer to obtain spatial feature vectors of the |V_s| sites, x_{s,r} ∈ R^(|V_s|×D_s), which encodes both the fine-grained and coarse-grained multi-graph spatial correlations. Then x_{s,r}, at multiple time steps, is used for subsequent temporal modeling. This processing can be done in parallel for spatial correlations at fine granularity (e.g., site-level) and at coarse granularity (e.g., region-level), with separate MG-MGCN networks handling each granularity output…. Block 204 performs temporal modeling, where complex temporal correlations also exist in different time steps. For example, the traffic volume of previous weeks, previous days, and previous hours can all affect the traffic volume of an upcoming time slot. To predict the traffic volume accurately, the correlations between previous time steps are discovered and utilized.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of estimating the temporal patterns by a learning model from time-series data associated with the graph nodes individually of Liu '999, with a reasonable expectation of success, in order to enhance the site-level traffic volume prediction (see at least Liu '999, para. [0016]).

As per claim 13, Guo discloses the graph nodes represent spatial locations of the road sensors between lane bounds associated with the road (see at least Guo, para. [0041]: The traffic system 400 may generate a graph structure from data 410 according to the road network 420 and the traffic information received from SRVs.
The graph structure of data 410 may indicate each traffic intersection as a traffic or road link node and vertex 430. For example, the graph structure from data 410 may include V.sub.1-V.sub.20 vertices. Each vertex may be associated with a k-dimension vector or feature that represents reported traffic information from SRVs such as speed, flow rates, density, and so on.). However, Guo does not explicitly disclose further comprising estimating the temporal patterns by a learning model from time-series data associated with the graph nodes individually. Liu '999 teaches further comprising estimating the temporal patterns by a learning model from time-series data associated with the graph nodes individually (see at least Liu '999, para. [0032-0033]: After K layers of the MG-MGCN, the vectors X^K_{s,r} can be fed to a fully-connected layer to obtain spatial feature vectors of the |V_s| sites, x_{s,r} ∈ R^(|V_s|×D_s), which encodes both the fine-grained and coarse-grained multi-graph spatial correlations. Then x_{s,r}, at multiple time steps, is used for subsequent temporal modeling. This processing can be done in parallel for spatial correlations at fine granularity (e.g., site-level) and at coarse granularity (e.g., region-level), with separate MG-MGCN networks handling each granularity output…. Block 204 performs temporal modeling, where complex temporal correlations also exist in different time steps. For example, the traffic volume of previous weeks, previous days, and previous hours can all affect the traffic volume of an upcoming time slot. To predict the traffic volume accurately, the correlations between previous time steps are discovered and utilized.).
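For illustration only, the spatial-then-temporal pipeline the quoted Liu '999 passage describes — graph-level spatial mixing first, then a per-node model over each node's own time series — can be sketched roughly as below. This is not the MG-MGCN of the reference; the neighbor averaging, the AR(1) fit, and all names are simplifying assumptions.

```python
import numpy as np

def spatial_mix(x, adj):
    """x: (T, N) node readings per time step; adj: (N, N) adjacency with self-loops.
    Crude stand-in for a graph-convolution layer: average each node with its neighbors."""
    deg = adj.sum(axis=1, keepdims=True)   # row degree for normalization
    return x @ (adj / deg).T               # neighbor-averaged features per time step

def per_node_forecast(x):
    """Fit a least-squares AR(1) to each node's series independently
    (node-by-node temporal modeling) and predict the next time step."""
    prev, nxt = x[:-1], x[1:]
    a = (prev * nxt).sum(axis=0) / (prev * prev).sum(axis=0)  # per-node slope
    return a * x[-1]

adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
readings = np.array([[10.0, 20.0, 30.0],
                     [11.0, 22.0, 33.0],
                     [12.1, 24.2, 36.3]])  # speeds at 3 sensors over 3 steps
spatial = spatial_mix(readings, adj)       # stage 1: spatial correlations
print(per_node_forecast(spatial))          # stage 2: per-node next-step estimate
```

Here every series grows by a factor of 1.1 per step, so each node's fitted slope is 1.1 and the forecast is 1.1 times its last mixed reading.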
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of estimating the temporal patterns by a learning model from time-series data associated with the graph nodes individually of Liu '999, with a reasonable expectation of success, in order to enhance the site-level traffic volume prediction (see at least Liu '999, para. [0016]).

Claim(s) 3 & 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guo, in view of Demiryurek, in view of Liu '999, in view of US 2016/0284212A1 ("Tatourian").

As per claim 3, Guo discloses further including instructions to: transform the data point into a map having the graph nodes that includes bound nodes for the road (see at least Guo, para. [0041]: The traffic system 400 may generate a graph structure from data 410 according to the road network 420 and the traffic information received from SRVs. The graph structure of data 410 may indicate each traffic intersection as a traffic or road link node and vertex 430. For example, the graph structure from data 410 may include V.sub.1-V.sub.20 vertices. Each vertex may be associated with a k-dimension vector or feature that represents reported traffic information from SRVs such as speed, flow rates, density, and so on.). However, Guo does not explicitly disclose grouping the road sensors of a lane for the road into a data point, and the data point having complete data, incomplete data, and the partial data, and the road sensors include devices that are malfunctioning on the vehicle and an infrastructure equipment. Tatourian teaches grouping the road sensors of a lane for the road into a data point, and the data point having complete data, incomplete data, and the partial data, and the road sensors include devices that are malfunctioning on the vehicle and an infrastructure equipment (see at least Tatourian, para.
[0015-0016]: Additionally, the traffic analysis server 108 determines the traffic patterns for each road segment 114 based on an analysis of the historical vehicle and infrastructure data for that road segment 114 at a given time, or for a given time window (e.g., a one-hour window of time, rush hour, morning, evening, etc.)…. The traffic analysis server 108 additionally determines whether the received vehicle data and/or the infrastructure data is indicative of an anomaly, or deviation, from the expected traffic behavior based on the road segment 114 and a present time. To detect the anomaly, the traffic analysis server 108 compares the historical traffic patterns to present vehicle data and/or present infrastructure data. & para. [0026]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of grouping the road sensors of a lane for the road into a data point, and the data point having complete data, incomplete data, and the partial data, and the road sensors include devices that are malfunctioning on the vehicle and an infrastructure equipment of Tatourian, with a reasonable expectation of success, in order for the ECU to successfully navigate the vehicle into and out of the parking spots (see at least Tatourian, para. [0002]).

As per claim 14, Guo discloses further comprising: transforming the data point into a map having the graph nodes that includes bound nodes for the road (see at least Guo, para. [0041]: The traffic system 400 may generate a graph structure from data 410 according to the road network 420 and the traffic information received from SRVs. The graph structure of data 410 may indicate each traffic intersection as a traffic or road link node and vertex 430. For example, the graph structure from data 410 may include V.sub.1-V.sub.20 vertices.
Each vertex may be associated with a k-dimension vector or feature that represents reported traffic information from SRVs such as speed, flow rates, density, and so on.). However, Guo does not explicitly disclose grouping the road sensors of a lane for the road into a data point, and the data point having complete data, incomplete data, and the partial data, and the road sensors include devices that are malfunctioning on the vehicle and an infrastructure equipment. Tatourian teaches grouping the road sensors of a lane for the road into a data point, and the data point having complete data, incomplete data, and the partial data, and the road sensors include devices that are malfunctioning on the vehicle and an infrastructure equipment (see at least Tatourian, para. [0015-0016]: Additionally, the traffic analysis server 108 determines the traffic patterns for each road segment 114 based on an analysis of the historical vehicle and infrastructure data for that road segment 114 at a given time, or for a given time window (e.g., a one-hour window of time, rush hour, morning, evening, etc.)…. The traffic analysis server 108 additionally determines whether the received vehicle data and/or the infrastructure data is indicative of an anomaly, or deviation, from the expected traffic behavior based on the road segment 114 and a present time. To detect the anomaly, the traffic analysis server 108 compares the historical traffic patterns to present vehicle data and/or present infrastructure data. & para. [0026]).
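As an illustration of the claimed grouping step (collecting one lane's sensor readings into a single data point labeled complete, partial, or incomplete), the sketch below is a hypothetical construction: neither Guo nor Tatourian provides code, and the field names and the completeness rule are assumptions.

```python
# Hypothetical sketch of the claimed grouping step; field names and the
# completeness rule are assumptions, not taken from Guo or Tatourian.

def group_lane_readings(readings, lane):
    """Collect one lane's sensor readings into a single data point and
    label it complete, partial, or incomplete based on missing values."""
    values = [r["value"] for r in readings if r["lane"] == lane]
    missing = sum(v is None for v in values)
    if missing == 0:
        status = "complete"
    elif missing < len(values):
        status = "partial"      # some devices (e.g., malfunctioning) dropped out
    else:
        status = "incomplete"   # nothing usable from this lane
    return {"lane": lane, "values": values, "status": status}

readings = [
    {"lane": 1, "sensor": "loop-a", "value": 42.0},
    {"lane": 1, "sensor": "cam-b", "value": None},   # malfunctioning device
    {"lane": 2, "sensor": "loop-c", "value": 17.5},
]
print(group_lane_readings(readings, lane=1))  # status: "partial"
```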
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of grouping the road sensors of a lane for the road into a data point, and the data point having complete data, incomplete data, and the partial data, and the road sensors include devices that are malfunctioning on the vehicle and an infrastructure equipment of Tatourian, with a reasonable expectation of success, in order for the ECU to successfully navigate the vehicle into and out of the parking spots (see at least Tatourian, para. [0002]).

Claim(s) 4, 15, & 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guo, in view of Demiryurek, in view of US 2024/0054321A1 ("Bogaerts").

As per claim 4, Guo discloses wherein the instructions to train the graph model further include instructions to: process data that factors bounds represented by the graph nodes, a length of a feature vector, and channels producing the multi-channel data (see at least Guo, para. [0041]: The traffic system 400 may generate a graph structure from data 410 according to the road network 420 and the traffic information received from SRVs. The graph structure of data 410 may indicate each traffic intersection as a traffic or road link node and vertex 430. For example, the graph structure from data 410 may include V.sub.1-V.sub.20 vertices. Each vertex may be associated with a k-dimension vector or feature that represents reported traffic information from SRVs such as speed, flow rates, density, and so on.); and minimize a loss of the graph model by comparing the data and the traffic pattern against a ground truth about the partial data (see at least Guo, para. [0048]: In one approach, a traffic system may train the neural network model 510 using a completed, cleaned, or corrected ground-truth G, such as for supervised learning. A module may train the neural network model 510 using perception data.
By minimizing the error |Ġ-Ĝ| and back-propagating the derivative, the parameterized encoder-decoder network updates their weights to reach a stable point. The neural network model 510 may use the learned parameters or weights for mapping the data to filter out noise values and fill-in or correct missing values to structure, complete, or clean graphed perception data. In this way, the neural network model 510 may generate the reconstruction matrix Ĝ with satisfactory confidence levels for the completed and cleaned data values when inferred with the noisy and incomplete matrix G.). However, Guo does not explicitly disclose wherein the instructions to train the graph model further include instructions to: process historical data, and minimize a loss of the graph model by comparing the historical data and the traffic pattern. Bogaerts teaches wherein the instructions to train the graph model further include instructions to: process historical data that factors bounds represented by the graph nodes, a length of a feature vector, and channels producing the multi-channel data (see at least Bogaerts, para. [0023]: In other words, the processing may further take into account the relation-based traffic representation observed at one or more past time periods. Thus, not only current but also historic relation-based traffic representation may be taken into account during the training of the learning model. & para. [0070]: In this case, at the first convolution iteration, the convolution for the respective nodes is performed in the same manner as described above, taking into account the traffic information from their direct neighbours. In this example, the convolution will be performed for all nodes in the graph. More specifically, the traffic data of S1 will be convolved with the traffic data of S2, the traffic data of S2 will be convolved with the traffic data of S1 and S6, the traffic data of S6 will be convolved with the traffic data of S2, S3 and S4 and so on.
The resulting relation-based traffic representation for the respective nodes will thus contain a partial abstract representation of the traffic information of the direct neighbouring nodes.); and minimize a loss of the graph model by comparing the historical data and the traffic pattern against a ground truth about the data (see at least Bogaerts, para. [0077-0078]: To do so, the learning system then evaluates the correctness of the predicted traffic information, i.e. the values of the features for the respective nodes at the time intervals t=+1, . . . , t=+5, with respect to the actual values of the features for the respective nodes at these time intervals. The evaluation is done, as conventionally, using a loss function which estimates the loss between the predicted and the actual traffic data. Based on the estimated loss, the learning model and, more specifically, the weights used in the weighted average, the convolution weights, and the encoder-decoder weights, are updated by employing a backpropagation mechanism. This results in updating or, in other words, in training the learning model…. The steps of convolving 332, processing 334 and updating 336 are repeated until the learning model achieves a desired level of traffic prediction performance, i.e. until the resulting loss is below a desired level, which marks the completion of the training of the learning model.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of wherein the instructions to train the graph model further include instructions to: process historical data, and minimize a loss of the graph model by comparing the historical data and the traffic pattern of Bogaerts, with a reasonable expectation of success, in order to allow for a time-efficient and cost-effective training of the learning models (see at least Bogaerts, para. [0021]).
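The train/evaluate/update cycle both references describe (predict, score against ground truth with a loss function, back-propagate, repeat until the loss falls below a desired level) can be sketched minimally as below. The linear model, learning rate, and stopping threshold are illustrative assumptions, not the encoder-decoder of Guo or the graph convolution of Bogaerts.

```python
import numpy as np

# Minimal stand-in for the described training loop: predict, compare to
# ground truth via a loss, update weights by gradient descent, stop when
# the loss is below a desired level.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))           # historical node features (toy data)
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w                          # ground-truth traffic pattern

w = np.zeros(3)
for step in range(500):
    pred = X @ w
    loss = np.mean((pred - y) ** 2)     # compare prediction vs ground truth
    if loss < 1e-6:                     # "desired level" stopping rule
        break
    grad = 2 * X.T @ (pred - y) / len(y)
    w -= 0.1 * grad                     # back-propagate and update weights
print(np.round(w, 3))                   # approaches [0.5, -1.0, 2.0]
```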
As per claim 15, Guo discloses wherein training the graph model further includes: processing data that factors bounds represented by the graph nodes, a length of a feature vector, and channels producing the multi-channel data (see at least Guo, para. [0041]: The traffic system 400 may generate a graph structure from data 410 according to the road network 420 and the traffic information received from SRVs. The graph structure of data 410 may indicate each traffic intersection as a traffic or road link node and vertex 430. For example, the graph structure from data 410 may include V.sub.1-V.sub.20 vertices. Each vertex may be associated with a k-dimension vector or feature that represents reported traffic information from SRVs such as speed, flow rates, density, and so on.); and minimizing a loss of the graph model by comparing the data and the traffic pattern against a ground truth about the partial data (see at least Guo, para. [0048]: In one approach, a traffic system may train the neural network model 510 using a completed, cleaned, or corrected ground-truth G, such as for supervised learning. A module may train the neural network model 510 using perception data. By minimizing the error |Ġ-Ĝ| and back-propagating the derivative, the parameterized encoder-decoder network updates their weights to reach a stable point. The neural network model 510 may use the learned parameters or weights for mapping the data to filter out noise values and fill-in or correct missing values to structure, complete, or clean graphed perception data. In this way, the neural network model 510 may generate the reconstruction matrix Ĝ with satisfactory confidence levels for the completed and cleaned data values when inferred with the noisy and incomplete matrix G.). However, Guo does not explicitly disclose wherein training the graph model further includes: processing historical data and minimizing a loss of the graph model by comparing the historical data and the traffic pattern.
Bogaerts teaches wherein training the graph model further includes: processing historical data that factors bounds represented by the graph nodes, a length of a feature vector, and channels producing the multi-channel data (see at least Bogaerts, para. [0023]: In other words, the processing may further take into account the relation-based traffic representation observed at one or more past time periods. Thus, not only current but also historic relation-based traffic representation may be taken into account during the training of the learning model. & para. [0070]: In this case, at the first convolution iteration, the convolution for the respective nodes is performed in the same manner as described above, taking into account the traffic information from their direct neighbours. In this example, the convolution will be performed for all nodes in the graph. More specifically, the traffic data of S1 will be convolved with the traffic data of S2, the traffic data of S2 will be convolved with the traffic data of S1 and S6, the traffic data of S6 will be convolved with the traffic data of S2, S3 and S4 and so on. The resulting relation-based traffic representation for the respective nodes will thus contain a partial abstract representation of the traffic information of the direct neighbouring nodes.); and minimizing a loss of the graph model by comparing the historical data and the traffic pattern against a ground truth about the partial data (see at least Bogaerts, para. [0077-0078]: To do so, the learning system then evaluates the correctness of the predicted traffic information, i.e. the values of the features for the respective nodes at the time intervals t=+1, . . . , t=+5, with respect to the actual values of the features for the respective nodes at these time intervals. The evaluation is done, as conventionally, using a loss function which estimates the loss between the predicted and the actual traffic data.
Based on the estimated loss, the learning model and, more specifically, the weights used in the weighted average, the convolution weights, and the encoder-decoder weights, are updated by employing a backpropagation mechanism. This results in updating or, in other words, in training the learning model…. The steps of convolving 332, processing 334 and updating 336 are repeated until the learning model achieves a desired level of traffic prediction performance, i.e. until the resulting loss is below a desired level, which marks the completion of the training of the learning model.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of wherein training the graph model further includes: processing historical data and minimizing a loss of the graph model by comparing the historical data and the traffic pattern of Bogaerts, with a reasonable expectation of success, in order to allow for a time-efficient and cost-effective training of the learning models (see at least Bogaerts, para. [0021]).

As per claim 18, Guo discloses further comprising shaping the traffic pattern at an intersection and bound individually by a linearizing layer of the graph model (see at least Guo, para. [0028]: A graph structure 260 may be a graph that illustrates traffic intersections and vehicle flows in a geographic area. & para. [0046]: In the encoding processing, the high-dimensional graph structure from data may be mapped multiple times linearly and non-linearly through a layered neural network.).

Claim(s) 5, 7 & 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guo, in view of Demiryurek, in view of Bogaerts, in view of Liu '999.
As per claim 5, Guo does not explicitly disclose wherein the feature vector represents a number of time intervals and the time-intervals are one of consecutive time-intervals and non-consecutive blocks. Liu '999 teaches wherein the feature vector represents a number of time intervals and the time-intervals are one of consecutive time-intervals and non-consecutive blocks (see at least Liu '999, para. [0032-0033]: After K layers of the MG-MGCN, the vectors X^K_{s,r} can be fed to a fully-connected layer to obtain spatial feature vectors of the |V_s| sites, x_{s,r} ∈ R^(|V_s|×D_s), which encodes both the fine-grained and coarse-grained multi-graph spatial correlations. Then x_{s,r}, at multiple time steps, is used for subsequent temporal modeling. This processing can be done in parallel for spatial correlations at fine granularity (e.g., site-level) and at coarse granularity (e.g., region-level), with separate MG-MGCN networks handling each granularity output…. Block 204 performs temporal modeling, where complex temporal correlations also exist in different time steps. For example, the traffic volume of previous weeks, previous days, and previous hours can all affect the traffic volume of an upcoming time slot. To predict the traffic volume accurately, the correlations between previous time steps are discovered and utilized.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of wherein the feature vector represents a number of time intervals and the time-intervals are one of consecutive time-intervals and non-consecutive blocks of Liu '999, with a reasonable expectation of success, in order to enhance the site-level traffic volume prediction (see at least Liu '999, para. [0016]).
As per claim 7, Guo discloses further including instructions to shape the traffic pattern at an intersection and bound individually by a linearizing layer of the graph model (see at least Guo, para. [0028]: A graph structure 260 may be a graph that illustrates traffic intersections and vehicle flows in a geographic area. & para. [0046]: In the encoding processing, the high-dimensional graph structure from data may be mapped multiple times linearly and non-linearly through a layered neural network.).

As per claim 16, Guo does not explicitly disclose wherein the feature vector represents a number of time-intervals and the time-intervals are one of consecutive time-intervals and non-consecutive blocks. Liu '999 teaches wherein the feature vector represents a number of time-intervals and the time-intervals are one of consecutive time-intervals and non-consecutive blocks (see at least Liu '999, para. [0032-0033]: After K layers of the MG-MGCN, the vectors X^K_{s,r} can be fed to a fully-connected layer to obtain spatial feature vectors of the |V_s| sites, x_{s,r} ∈ R^(|V_s|×D_s), which encodes both the fine-grained and coarse-grained multi-graph spatial correlations. Then x_{s,r}, at multiple time steps, is used for subsequent temporal modeling. This processing can be done in parallel for spatial correlations at fine granularity (e.g., site-level) and at coarse granularity (e.g., region-level), with separate MG-MGCN networks handling each granularity output…. Block 204 performs temporal modeling, where complex temporal correlations also exist in different time steps. For example, the traffic volume of previous weeks, previous days, and previous hours can all affect the traffic volume of an upcoming time slot. To predict the traffic volume accurately, the correlations between previous time steps are discovered and utilized.).
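The two feature-vector layouts named in claims 5 and 16 — a run of consecutive time-intervals versus non-consecutive blocks (e.g., the same intervals on previous days or weeks, as the quoted Liu '999 passage suggests) — can be illustrated as follows. The interval length and lookback offsets are made-up assumptions for the sketch.

```python
# Sketch of the two claimed time-interval layouts; offsets and window
# sizes are illustrative assumptions, not values from the references.

def consecutive_window(series, end, n):
    """Last n consecutive intervals ending at index `end` (exclusive)."""
    return series[end - n:end]

def nonconsecutive_blocks(series, end, offsets, n):
    """Blocks of n intervals taken at each lookback offset
    (e.g., one day or one week of intervals earlier)."""
    return [series[end - off:end - off + n] for off in offsets]

volumes = list(range(100))                             # toy per-interval counts
recent = consecutive_window(volumes, end=96, n=4)      # [92, 93, 94, 95]
daily = nonconsecutive_blocks(volumes, end=96, offsets=[24, 48], n=2)
print(recent, daily)                                   # daily: [[72, 73], [48, 49]]
```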
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of wherein the feature vector represents a number of time-intervals and the time-intervals are one of consecutive time-intervals and non-consecutive blocks of Liu '999, with a reasonable expectation of success, in order to enhance the site-level traffic volume prediction (see at least Liu '999, para. [0016]).

Claim(s) 8 & 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guo, in view of Demiryurek, in view of US 2018/0293884A1 ("Liu '884").

As per claim 8, Guo does not explicitly disclose wherein the timing data indicates any one of vehicle prioritization, vehicle platooning, pedestrian crossings, and fractional phases of a traffic light associated with a time-interval that represents abnormal conditions for the traffic. Liu '884 teaches wherein the timing data indicates any one of vehicle prioritization, vehicle platooning, pedestrian crossings, and fractional phases of a traffic light associated with a time-interval that represents abnormal conditions for the traffic (see at least Liu '884, para. [0083]: The signal phase and timing (SPaT) data broadcast by the RSEs have also been collected at deployed intersections. The SPaT data contain information of signal status that can be used as the input for "signal aware" CV applications, e.g., red light violation warning or eco-approach/departure assistance. Here, only a portion of the data fields in the SPaT are used, including: timestamp when a message was generated, signal phase ID and signal status. A sample of SPaT data is shown in FIG. 5.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of wherein the timing data indicates any one of vehicle prioritization, vehicle platooning, pedestrian crossings, and fractional phases of a traffic light associated with a time-interval that represents abnormal conditions for the traffic of Liu '884, with a reasonable expectation of success, in order to operate traffic control devices for increased efficiency of traffic flow (see at least Liu '884, para. [0002]).

As per claim 19, Guo does not explicitly disclose wherein the timing data indicates any one of vehicle prioritization, vehicle platooning, pedestrian crossings, and fractional phases of a traffic light associated with a time-interval that represents abnormal conditions for the traffic. Liu '884 teaches wherein the timing data indicates any one of vehicle prioritization, vehicle platooning, pedestrian crossings, and fractional phases of a traffic light associated with a time-interval that represents abnormal conditions for the traffic (see at least Liu '884, para. [0083]: The signal phase and timing (SPaT) data broadcast by the RSEs have also been collected at deployed intersections. The SPaT data contain information of signal status that can be used as the input for "signal aware" CV applications, e.g., red light violation warning or eco-approach/departure assistance. Here, only a portion of the data fields in the SPaT are used, including: timestamp when a message was generated, signal phase ID and signal status. A sample of SPaT data is shown in FIG. 5.).
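The quoted Liu '884 passage names the three SPaT fields actually used: message timestamp, signal phase ID, and signal status. A toy representation of such records is sketched below; the CSV layout, column names, and sample values are assumptions for illustration, not the format of the reference's FIG. 5.

```python
import csv
import io

# Hypothetical SPaT sample: timestamp, phase ID, signal status (assumed layout).
SPAT_SAMPLE = """timestamp,phase_id,status
1699999000.0,2,green
1699999030.5,2,yellow
1699999034.0,2,red
"""

def read_spat(text):
    """Parse the three SPaT fields into (timestamp, phase_id, status) tuples."""
    rows = csv.DictReader(io.StringIO(text))
    return [(float(r["timestamp"]), int(r["phase_id"]), r["status"]) for r in rows]

records = read_spat(SPAT_SAMPLE)
green_time = records[1][0] - records[0][0]   # duration of the green indication
print(len(records), green_time)              # 3 records, 30.5 s of green
```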
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of wherein the timing data indicates any one of vehicle prioritization, vehicle platooning, pedestrian crossings, and fractional phases of a traffic light associated with a time-interval that represents abnormal conditions for the traffic of Liu '884, with a reasonable expectation of success, in order to operate traffic control devices for increased efficiency of traffic flow (see at least Liu '884, para. [0002]).

Claim(s) 9 & 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guo, in view of Demiryurek, in view of Tatourian, in view of Liu '884.

As per claim 9, Guo does not explicitly disclose wherein the road sensors are infrastructure sensors that are malfunctioning, the SPAT for a signalized intersection associated with the road, and the traffic pattern is traffic volume. Tatourian teaches wherein the road sensors are infrastructure sensors that are malfunctioning, the timing data is for a signalized intersection associated with the road, and the traffic pattern is traffic volume (see at least Tatourian, para. [0015-0016]: Additionally, the traffic analysis server 108 determines the traffic patterns for each road segment 114 based on an analysis of the historical vehicle and infrastructure data for that road segment 114 at a given time, or for a given time window (e.g., a one-hour window of time, rush hour, morning, evening, etc.)…. The traffic analysis server 108 additionally determines whether the received vehicle data and/or the infrastructure data is indicative of an anomaly, or deviation, from the expected traffic behavior based on the road segment 114 and a present time. To detect the anomaly, the traffic analysis server 108 compares the historical traffic patterns to present vehicle data and/or present infrastructure data. & para. [0026]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of wherein the road sensors are infrastructure sensors that are malfunctioning, the timing data is for a signalized intersection associated with the road, and the traffic pattern is traffic volume of Tatourian, with a reasonable expectation of success, in order for the ECU to successfully navigate the vehicle into and out of the parking spots (see at least Tatourian, para. [0002]).

Liu '884 teaches wherein the road sensors are infrastructure sensors, the SPAT information includes data for a signalized intersection associated with the road, and the traffic pattern is traffic volume (see at least Liu '884, para. [0083-0085]: The signal phase and timing (SPaT) data broadcast by the RSEs have also been collected at deployed intersections. The SPaT data contain information of signal status that can be used as the input for "signal aware" CV applications, e.g., red light violation warning or eco-approach/departure assistance. Here, only a portion of the data fields in the SPaT are used, including: timestamp when a message was generated, signal phase ID and signal status. A sample of SPaT data is shown in FIG. 5…. By combining these arrival information from vehicle trajectories, volume of overall vehicle arrivals can be estimated…. For the traffic volume estimation method, the inputs include vehicle trajectories (e.g., which can be generated from two or more TL data) approaching to an intersection as well as traffic signal status (or statuses). In other embodiments, the estimation method can take into account other data, such as other TL data, data from RSEs, data from mobile devices, or data from other devices that may be in communication with the remote facility 16, which may carry out at least part of the method discussed herein.).
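The volume-estimation idea in the Liu '884 passage just quoted — combining per-vehicle arrival times from trajectories with signal status to count arrivals — can be sketched roughly as a per-cycle arrival count. The cycle boundaries and arrival times below are made-up illustration data, and the binning rule is an assumption, not the reference's method.

```python
from bisect import bisect_right

# Made-up illustration data: signal cycle start times and vehicle arrival
# times (the latter standing in for arrivals derived from CV trajectories).
cycle_starts = [0.0, 90.0, 180.0]
arrivals = [12.0, 35.5, 88.0, 95.0, 170.0, 181.5]

def volume_per_cycle(arrivals, cycle_starts):
    """Count vehicle arrivals falling inside each signal cycle."""
    counts = [0] * len(cycle_starts)
    for t in arrivals:
        counts[bisect_right(cycle_starts, t) - 1] += 1  # assign to its cycle
    return counts

print(volume_per_cycle(arrivals, cycle_starts))  # [3, 2, 1]
```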
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of wherein the road sensors are infrastructure sensors, the SPAT information includes data for a signalized intersection associated with the road, and the traffic pattern is traffic volume of Liu '884, with a reasonable expectation of success, in order to operate traffic control devices for increased efficiency of traffic flow (see at least Liu '884, para. [0002]).

As per claim 20, Guo does not explicitly disclose wherein the road sensors are infrastructure sensors that are malfunctioning, the SPAT information includes data for a signalized intersection associated with the road, and the traffic pattern is traffic volume. Tatourian teaches wherein the road sensors are infrastructure sensors that are malfunctioning, the timing data is for a signalized intersection associated with the road, and the traffic pattern is traffic volume (see at least Tatourian, para. [0015-0016]: Additionally, the traffic analysis server 108 determines the traffic patterns for each road segment 114 based on an analysis of the historical vehicle and infrastructure data for that road segment 114 at a given time, or for a given time window (e.g., a one-hour window of time, rush hour, morning, evening, etc.)…. The traffic analysis server 108 additionally determines whether the received vehicle data and/or the infrastructure data is indicative of an anomaly, or deviation, from the expected traffic behavior based on the road segment 114 and a present time. To detect the anomaly, the traffic analysis server 108 compares the historical traffic patterns to present vehicle data and/or present infrastructure data. & para. [0026]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of wherein the road sensors are infrastructure sensors that are malfunctioning, the timing data is for a signalized intersection associated with the road, and the traffic pattern is traffic volume of Tatourian, with a reasonable expectation of success, in order for the ECU to successfully navigate the vehicle into and out of the parking spots (see at least Tatourian, para. [0002]).

Liu '884 teaches wherein the road sensors are infrastructure sensors, the SPAT information includes data for a signalized intersection associated with the road, and the traffic pattern is traffic volume (see at least Liu '884, para. [0083-0085]: The signal phase and timing (SPaT) data broadcast by the RSEs have also been collected at deployed intersections. The SPaT data contain information of signal status that can be used as the input for "signal aware" CV applications, e.g., red light violation warning or eco-approach/departure assistance. Here, only a portion of the data fields in the SPaT are used, including: timestamp when a message was generated, signal phase ID and signal status. A sample of SPaT data is shown in FIG. 5….By combining these arrival information from vehicle trajectories, volume of overall vehicle arrivals can be estimated….For the traffic volume estimation method, the inputs include vehicle trajectories (e.g., which can be generated from two or more TL data) approaching to an intersection as well as traffic signal status (or statuses). In other embodiments, the estimation method can take into account other data, such as other TL data, data from RSEs, data from mobile devices, or data from other devices that may be in communication with the remote facility 16, which may carry out at least part of the method discussed herein.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Guo to incorporate the teaching of wherein the road sensors are infrastructure sensors, the SPAT information includes data for a signalized intersection associated with the road, and the traffic pattern is traffic volume of Liu '884, with a reasonable expectation of success, in order to operate traffic control devices for increased efficiency of traffic flow (see at least Liu '884, para. [0002]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED ABDO ALGEHAIM whose telephone number is (571)272-3628. The examiner can normally be reached Monday-Friday 8-5PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fadey Jabr, can be reached at 571-272-1516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMED ABDO ALGEHAIM/Primary Examiner, Art Unit 3668
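The traffic-volume estimation approach the rejection attributes to Liu '884 (combining connected-vehicle arrival trajectories with SPaT signal status) can be sketched roughly as follows. This is an editorial illustration only, not code from any cited reference; the function name, the per-cycle windowing, and the penetration-rate scaling are all assumptions for the sake of the example.

```python
# Illustrative sketch: estimate per-cycle traffic volume at a signalized
# intersection from sparse connected-vehicle (CV) arrival times, where
# signal cycle windows come from SPaT data. Scaling observed CV arrivals
# by an assumed CV penetration rate is a simplification for illustration.

def estimate_volume(arrival_times, cycle_windows, penetration_rate):
    """Count observed CV arrivals inside each signal-cycle window and
    scale up by the assumed penetration rate to estimate total volume."""
    volumes = []
    for start, end in cycle_windows:
        observed = sum(1 for t in arrival_times if start <= t < end)
        volumes.append(observed / penetration_rate)
    return volumes

# Example: 3 CV arrivals fall in the first 60 s cycle, 1 in the second,
# with 10% of vehicles assumed to be connected.
arrivals = [2.0, 15.5, 40.1, 75.3]
cycles = [(0, 60), (60, 120)]
print(estimate_volume(arrivals, cycles, 0.10))  # [30.0, 10.0]
```

The key idea mirrored from the quoted passage is that trajectories supply the arrival observations while SPaT supplies the signal-status context that segments them.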

Prosecution Timeline

Nov 10, 2023: Application Filed
Sep 30, 2025: Non-Final Rejection — §103
Dec 08, 2025: Interview Requested
Dec 18, 2025: Examiner Interview Summary
Dec 29, 2025: Response Filed
Apr 01, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594963
DETECTING AN UNKNOWN OBJECT BY A LEAD AUTONOMOUS VEHICLE (AV) AND UPDATING ROUTING PLANS FOR FOLLOWING AVs
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12597865
INVERTER
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12589978
TRUCK-TABLET INTERFACE
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12565235
DETECTING A CONSTRUCTION ZONE BY A LEAD AUTONOMOUS VEHICLE (AV) AND UPDATING ROUTING PLANS FOR FOLLOWING AVs
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12559228
THERMAL MANAGEMENT SYSTEM FOR AN AIRCRAFT INCLUDING AN ELECTRIC PROPULSION ENGINE
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 59%
With Interview: 81% (+21.9%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 207 resolved cases by this examiner. Grant probability derived from career allow rate.
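The projection figures are mutually consistent under a simple additive model: the 59% grant probability matches the examiner's 122/207 career allow rate, and adding the 21.9-point interview lift yields the 81% with-interview figure. A minimal sketch of that arithmetic (the variable names are illustrative, and the additive treatment of the lift is an assumption, since the page does not state how the 81% is computed):

```python
# Sketch of how the dashboard's headline projections appear to relate,
# assuming the interview lift is simply added to the base allow rate.

granted = 122          # examiner's granted cases
resolved = 207         # examiner's resolved cases
interview_lift = 21.9  # percentage-point lift observed with interviews

base_rate = round(100 * granted / resolved)         # career allow rate, %
with_interview = round(base_rate + interview_lift)  # naive additive model, %

print(base_rate)       # 59
print(with_interview)  # 81
```

Note that a purely additive lift is a back-of-the-envelope reading; the underlying model could instead condition on interview occurrence, which this sketch does not capture.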
