Prosecution Insights
Last updated: April 19, 2026
Application No. 18/612,114

METHOD/SYSTEM/COMPUTER PROGRAM FOR BSM/RTCM/SCMS ENABLED GROUND TRUTH RUN TIME PERCEPTION

Non-Final OA: §101, §103
Filed: Mar 21, 2024
Examiner: DOUGLAS, SHANE EMANUEL
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Sonamore Inc. dba P3Mobility
OA Round: 1 (Non-Final)

Grant Probability: 17% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 39%

Examiner Intelligence

Grants only 17% of cases.

Career Allow Rate: 17% (2 granted / 12 resolved; -35.3% vs TC avg)
Interview Lift: +22.2% (strong; based on resolved cases with interview)
Avg Prosecution: 2y 4m typical timeline; 44 applications currently pending
Career History: 56 total applications across all art units
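
As a sanity check on the cards above, the headline percentages are internally consistent; a minimal sketch of the arithmetic in Python (illustrative only, not the dashboard's actual code; the counts are taken from this page):

```python
granted = 2                                 # "2 granted / 12 resolved"
resolved = 12
allow_rate = granted / resolved             # 0.1667 -> displayed as 17%

interview_lift = 0.222                      # the +22.2-point lift shown above
with_interview = allow_rate + interview_lift

print(f"career allow rate: {allow_rate:.0%}")       # career allow rate: 17%
print(f"with interview:    {with_interview:.0%}")   # with interview:    39%
```

Read this way, the 39% with-interview figure appears to be the 17% base rate plus the +22.2-point lift rather than an independently measured cohort rate.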

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 59.4% (+19.4% vs TC avg)
§102: 30.3% (-9.7% vs TC avg)
§112: 2.5% (-37.5% vs TC avg)

Deltas are measured against an estimated Tech Center average. Based on career data from 12 resolved cases.
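
Back-solving the displayed deltas implies a Tech Center baseline of roughly 40% for every statute; a hedged sketch of how the rows above could be reproduced (the examiner rates are taken from the page, the 0.40 baseline is our inference):

```python
# Examiner's per-statute rates as displayed above; the TC average is
# inferred by back-solving the printed deltas (each works out to ~0.40).
examiner = {"101": 0.078, "103": 0.594, "102": 0.303, "112": 0.025}
TC_AVG = 0.40

for statute, rate in examiner.items():
    delta = rate - TC_AVG
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```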

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Application Status

This Office action has been issued in response to the application filing of 10/28/2024. Claims 1-12 are pending. Claims 1-12 are rejected.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 07/30/2024 and 03/21/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Priority

Acknowledgment is made of applicant's claim for priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. PRO 63/453,592, filed on 03/21/2023.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) and does not amount to significantly more than the exception itself.

Step 1: Claims 1-12 recite a process and a machine and therefore fall within one of the four statutory categories of invention.

Step 2A, Prong 1: Each claim recites a judicial exception in the form of an abstract idea. The claims recite receiving predicted object data from sensors and message data from V2X sources, and parsing and normalizing fields such as position and dimensions. Such acts represent evaluating and analyzing information and drawing conclusions; these activities fall within the abstract-idea groupings of mathematical concepts, mental processes, and organizing human activity. All other independent claims recite the same operations implemented in software. Furthermore, the dependent claims merely add routine data-processing refinements, such as synchronization or coordinate-transformation techniques, that remain mental acts or mathematical concepts. Each claim is directed to an abstract idea, specifically a mental process and a mathematical concept. Accordingly, claims 1-12 recite an abstract idea.

Step 2A, Prong 2: The judicial exception of claims 1-12 is not integrated into a practical application. The claims do not contain an inventive concept sufficient to transform the claimed abstract idea into patent-eligible subject matter. The steps described can be carried out using generic sensors such as cameras, lidar, and radar in receipt of SAE J2735 BSM/CAM messages authenticated via SCMS/IEEE 1609, and are claimed at a high level of generality, without detailing a particular improvement to the functioning of the technology itself. The specification and claims do not demonstrate specific improvements to processing or data collection beyond the conventional use of components well known in the art. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are directed to an abstract idea.

Step 2B: Claims 1-12, taken individually or collectively, do not include additional elements that are sufficient to amount to significantly more than the judicial exception. There is no recitation that the components are used in an unconventional manner.
The disclosed steps would be considered routine to a skilled artisan in data mining and analysis. Thus, the claims do not add any inventive concept that transforms the nature of the claims into a patent-eligible application. Claims 1-12 are not patent eligible. Accordingly, the Examiner concludes that there are no meaningful limitations in claims 1-12 that transform the judicial exception into a patent-eligible application such that the claims amount to significantly more than the judicial exception itself. The analysis above applies to all statutory categories of invention. As such, the presentment of claims 1-12, otherwise styled as other means, would be subject to the same analysis.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-7, and 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Miucic et al. (US20230059897A1) in view of Jha et al. (US20220172609A1), further in view of Katz et al. (US20220013008A1), further in view of ETSI (ETSI TS 103 324 V2.1.1), further in view of ETSI (ETSI EN 302 637-2 V1.4.1), further in view of Baek et al. (Driving Environment Perception Based on the Fusion of Vehicular Wireless Communications and Automotive Remote Sensors).

Regarding claim 1, Miucic discloses a system comprising: one or more processors programmed or configured to: receive data associated with road environment object-actor sensor based detection and classification predictions (0033, the host vehicle may then execute V2X applications 28 with inputs from the LDM, including objects detected from V2X communications 25 as well as objects detected by vision sensor(s)) … (0030, a computer vision system 10 may detect objects and send messages (e.g., over ethernet or the host vehicle controller area network (CAN) 14) to the V2X OBU with the following information: detected object type (e.g., passenger vehicle, truck, pedestrian, motorcycle, bicycle, traffic light, etc.); relative position of detected objects; estimated speed and/or acceleration of detected objects; estimated heading of detected objects; other attributes such as turn signal, interpreted hand gesture, open door, etc.), and IEEE 1609 standard Security Credential Management System (SCMS) (0022, It transmits messages known as Cooperative Awareness Messages (CAM) and Decentralized Environmental Notification Messages (DENM) or Basic Safety Message (BSM). The data volume of these messages is very low.
The radio technology is part of the WLAN 802.11 family of standards developed by the Institute of Electrical and Electronics Engineers (IEEE) and known in the United States as Wireless Access in Vehicular Environments (WAVE) and in Europe as ITS-G5), Basic Safety Messages (BSM) and/or Cooperative Awareness Messages (CAM); (0026, as is known to those of ordinary skill in the art (see, e.g., SAE J2945), all V2X communications may include a Basic Safety Message (BSM) or a Cooperative Awareness Message (CAM)), and determine run-time ground truth perception based on reconciled and matched data artifacts constructed in a system relative coordinate reference map associated from BSM/CAM reported object-actor data (0033, a host vehicle may convert 22 vision detected objects from a relative (i.e., camera) coordinate system to an absolute coordinate system based on the host vehicle position 18, host vehicle data 20, and detected road user data 16) … (0033, the host vehicle may also transmit 30 proxy-BSM or Collective Perception Messages (CPM) for road users not equipped with V2X communication systems and also transmit 32 host vehicle BSMs or Cooperative Awareness Messages (CAM)), and predicted object-actors data artifacts (0030, a computer vision system 10 may detect objects and send messages (e.g., over ethernet or the host vehicle controller area network (CAN) 14) to the V2X OBU with the following information: detected object type (e.g., passenger vehicle, truck, pedestrian, motorcycle, bicycle, traffic light, etc.); relative position of detected objects; estimated speed and/or acceleration of detected objects; estimated heading of detected objects), and wherein after the determined ground truth object-actor artifact is constructed the precursor artifacts used to make the match are removed from the reference map (0033, the host vehicle may compile 24 a local dynamic map (LDM) of all road users as a list using object data 25 from V2X equipped road users and iterate 26 through the LDM to exclude duplicates).

However, Miucic does not disclose many aspects of claim 1. Nevertheless, Jha, who is in the same field of endeavor of multi-access edge computing for roadside units, discloses cryptographically signed SAE J2735 standard Basic Safety Messages (BSM) (0006, an RSU transmits and receive messages as defined in Society of Automotive Engineers (SAE) Standard J2735. In the case of the Situation Data Clearinghouse and Situation Data Warehouse), wherein for the determining run-time ground truth perception, the one or more processors are further programmed and configured to: extract object-actor associated data in the received SCMS cryptographically signed SAE J2735 participant broadcasted Basic Safety Messages/Cooperative Awareness Messages (0006, this allows devices to take advantage of services such as the United States Department of Transportation (USDOT) Situation Data Clearinghouse, Situation Data Warehouse, and Security Credential Management System (SCMS) as well as other public and private network services.
Secondly, an RSU transmits and receive messages as defined in Society of Automotive Engineers (SAE) Standard J2735), from which an object-actor's pose is determined, wherein the received data includes but is not limited to a reported object-actor's real-time kinematic GNSS corrected position (0014, the NTCIP protocol is extended: (i) to define the secure transmission of lane-centric, 3-dimensional road information, traffic statistics, weather conditions, GNSS coordinates, and (ii) to facilitate an RSU to communicate such information with the management center for such purposes).

It would have been obvious to one skilled in the art to combine Miucic and Jha, using Jha's well-known SCMS-signed J2735 intake within Miucic's fusion system so that the same processors that already reconcile sensor objects with V2X objects also operate on authenticated BSM/CAM payloads and extract pose fields into the map. Doing so would apply standard J2735 parsing, which is routine in V2X systems. Further justification for combining Miucic and Jha comes not only from the state of the art but also from Miucic itself (0025, as those skilled in the art will understand, the communication units (including transmitters, receivers, and antennas), controllers, control units, systems, subsystems, units, modules, interfaces, sensors, devices, components, or the like utilized for, in, or as part of V2X communication systems and/or otherwise described herein may individually, collectively, or in any combination comprise appropriate circuitry).

However, even the combination of Miucic and Jha does not disclose aspects of claim 1. Furthermore, ETSI (ETSI TS 103 324 V2.1.1), which is in the same field of endeavor of intelligent transport systems for vehicular communications, discloses spatial dimensions, including a length, a width, and/or a height (7.1.8.4, Object dimensions and age, one or more of the components objectDimensionX, objectDimensionY, objectDimensionZ may be included for each object, indicating the size of the bounding box of the perceived object in each direction).

It would have been obvious to one skilled in the art to combine the combination of Miucic and Jha with ETSI (ETSI TS 103 324 V2.1.1) to parse CPM fields such as classification and size alongside BSM/CAM in Miucic's map. This is an adoption of standardized object descriptors to improve interoperability, which would yield object artifacts with category and dimensions in the runtime map as claimed.

However, even the combination of Miucic, Jha, and ETSI (ETSI TS 103 324 V2.1.1) does not disclose aspects of claim 1. Furthermore, ETSI (ETSI EN 302 637-2 V1.4.1), which is in the same field of endeavor of intelligent transport systems for vehicular communications, discloses the categorical-type classification of the object-actor (4.1 Background, the status information includes time, position, motion state, activated systems, etc. and the attribute information includes data about the dimensions, vehicle type and role in the road traffic, etc.), and wherein the BSM/CAM extracted data is transformed into a format to store and retrieve the represented object-actor as an artifact within a system relative coordinate reference map (4.4, upon receiving a CAM, the CA basic service makes the content of the CAM available to the ITS applications and/or to other facilities within the receiving ITS-S, such as a Local Dynamic Map (LDM)).
It would have been obvious to one skilled in the art to combine the combination of Miucic, Jha, and ETSI (ETSI TS 103 324 V2.1.1) with ETSI (ETSI EN 302 637-2 V1.4.1) to include ETSI's category/type field and insert it alongside the other CAM fields. However, even the combination of Miucic, Jha, ETSI (ETSI TS 103 324 V2.1.1), and ETSI (ETSI EN 302 637-2 V1.4.1) does not disclose aspects of claim 1.

However, Katz, who is in the same field of endeavor of using V2X and sensor data, discloses predicting object-actors presence and pose present within the fields of view of the one or more sensors of the system; wherein the predicted object-actor output is sent to the system relative coordinate reference map as predicted object-actor artifact (0078, processing unit 104 can use the transformation function, e.g., as described above, to create a virtual map from the user parameters calculated from inputs from the sensor 102, such as, location and/or pose in pixels/point cloud space, classification, speed, acceleration, bearing and past and predicted trajectory); when no sensor associated object-actor artifacts are identified as a match to a BSM/CAM artifact an unmatched BSM/CAM is the basis for constituting a ground truth perception object-actor (0063, processing unit 104 may create a message for each non-connected road user detected in step 126. The message (e.g. BSM and/or CAM and/or PSM, in current standards) typically contains the calculated user parameters (e.g., location, speed, acceleration, bearing, classification, past and predicted trajectory, etc.) and can be broadcasted via the V2X communication module 103 modem at the required frequency (e.g. 10hz for vehicles, 2hz for pedestrians), to all connected road users in the vicinity of the site), the outcome of the matched artifacts and map update is run-time ground truth perception (0068, a virtual map may be created (e.g., calculated) periodically (e.g., at a predetermined frequency). In some embodiments the virtual map may be a dynamic virtual map that is updated periodically), wherein the system relative coordinate reference map updates to reflect the outcome state for subsequent system runtime use and/or external transmission of ground truth data (0063, processing unit 104 may create a message for each non-connected road user detected in step 126. The message (e.g. BSM and/or CAM and/or PSM, in current standards) typically contains the calculated user parameters (e.g., location, speed, acceleration, bearing, classification, past and predicted trajectory, etc.)); and persist the matched object-actors artifacts as ground truth object-actor data artifacts in the system relative coordinate map which subsequently are logged with cross-matched and reconciled data associated with Basic Safety Messages/Cooperative Awareness Messages and road environment sensor-perception data (0067, the list of total road users is matched to a list of connected road-users and at least the locations of the connected and non-connected road users are determined. [0062] Processing unit 104 may create a virtual map using the determined locations of the connected and non-connected road users).

It would have been obvious to one skilled in the art to combine the combination of Miucic, Jha, ETSI (ETSI TS 103 324 V2.1.1), and ETSI (ETSI EN 302 637-2 V1.4.1) with Katz to persist the matched objects as the canonical ground-truth entries together with their provenance: cross-matched BSM/CAM fields and associated sensor-perception features for analytics and retraining.
The fused ground truth with linked V2X and sensor features is a predictable next step in Miucic's system. However, even the combination of Miucic, Jha, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Katz does not disclose aspects of claim 1.

Finally, Baek, who is in the same field of endeavor of perception based on the fusion of vehicular wireless communications and automotive remote sensors, discloses determine the existence of two or more artifacts loaded in the system relative coordinate reference map that correspond to the same object-actor in the road environment (Abstract, a track-to-track fusion of high-level sensor data and vehicular wireless communication data was performed to accurately and reliably locate the remote target in the vehicle surroundings and predict the future trajectory), where a minimum of one of the two or more artifacts is associated with BSM/CAM data upon which one or more algorithms and/or machine learning models establishes the existence of a match between two or more object-actor representative artifacts (2.3. V2X Communications, the vehicle state information is broadcast and shared among the vehicles equipped with DSRC devices by exchanging the BSM, which is defined in the SAE J2735 message set dictionary [29]. The BSM contains data obtained from the vehicle CAN bus and the GNSS receiver), wherein a matched set of artifacts constitutes a ground truth object-actor in the system reference map (3.2. Data Fusion, each sensor system outputs one or more tracks based on the sensor measurements, and the state estimates from multiple sensor tracks are associated and combined with a track-to-track fusion algorithm).

It would have been obvious to one skilled in the art to combine the combination of Miucic, Jha, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Katz with Baek. Implementing the previously disclosed standard fusion logic into an existing fusion-map stack would reduce duplicates and improve accuracy with no architectural changes.

Regarding claim 2, Miucic, Jha, Katz, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Baek disclose the system of claim 1, as discussed supra. Additionally, Katz discloses the one or more processors are further programmed or configured to: determine the ground truth velocity of an object-actor based on proceeding established run-time ground truth perception data in series (0078, the map may also include information relating to parameters (such as location, pose, classification, speed, acceleration, bearing and past and predicted trajectory) of connected users, who are not within the FOV of sensor 102. This information will typically be received from the V2X communication module 103, whereas information relating to parameters of a connected user who is within the FOV of sensor 102, will include information from both sensor 102 and V2X communication module 103).

Regarding claim 3, Miucic, Jha, Katz, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Baek disclose the system of claim 1, as discussed supra. Additionally, Katz discloses the one or more processors are further programmed or configured to: determine the ground truth acceleration of an object-actor based on proceeding established run-time ground truth perception data in series (0078, the map may also include information relating to parameters (such as location, pose, classification, speed, acceleration, bearing and past and predicted trajectory) of connected users, who are not within the FOV of sensor 102.
This information will typically be received from the V2X communication module 103, whereas information relating to parameters of a connected user who is within the FOV of sensor 102, will include information from both sensor 102 and V2X communication module 103).

Regarding claim 5, Miucic discloses a method comprising: receive data associated with road environment object-actor sensor based detection and classification predictions (0033, the host vehicle may then execute V2X applications 28 with inputs from the LDM, including objects detected from V2X communications 25 as well as objects detected by vision sensor(s)) … (0030, a computer vision system 10 may detect objects and send messages (e.g., over ethernet or the host vehicle controller area network (CAN) 14) to the V2X OBU with the following information: detected object type (e.g., passenger vehicle, truck, pedestrian, motorcycle, bicycle, traffic light, etc.); relative position of detected objects; estimated speed and/or acceleration of detected objects; estimated heading of detected objects; other attributes such as turn signal, interpreted hand gesture, open door, etc.), and IEEE 1609 standard Security Credential Management System (SCMS) (0022, It transmits messages known as Cooperative Awareness Messages (CAM) and Decentralized Environmental Notification Messages (DENM) or Basic Safety Message (BSM). The data volume of these messages is very low. The radio technology is part of the WLAN 802.11 family of standards developed by the Institute of Electrical and Electronics Engineers (IEEE) and known in the United States as Wireless Access in Vehicular Environments (WAVE) and in Europe as ITS-G5), Basic Safety Messages (BSM) and/or Cooperative Awareness Messages (CAM); (0026, as is known to those of ordinary skill in the art (see, e.g., SAE J2945), all V2X communications may include a Basic Safety Message (BSM) or a Cooperative Awareness Message (CAM)), and determine run-time ground truth perception based on reconciled and matched data artifacts constructed in a system relative coordinate reference map associated from BSM/CAM reported object-actor data (0033, a host vehicle may convert 22 vision detected objects from a relative (i.e., camera) coordinate system to an absolute coordinate system based on the host vehicle position 18, host vehicle data 20, and detected road user data 16) … (0033, the host vehicle may also transmit 30 proxy-BSM or Collective Perception Messages (CPM) for road users not equipped with V2X communication systems and also transmit 32 host vehicle BSMs or Cooperative Awareness Messages (CAM)), and predicted object-actors data artifacts (0030, a computer vision system 10 may detect objects and send messages (e.g., over ethernet or the host vehicle controller area network (CAN) 14) to the V2X OBU with the following information: detected object type (e.g., passenger vehicle, truck, pedestrian, motorcycle, bicycle, traffic light, etc.); relative position of detected objects; estimated speed and/or acceleration of detected objects; estimated heading of detected objects), and wherein after the determined ground truth object-actor artifact is constructed the precursor artifacts used to make the match are removed from the reference map (0033, the host vehicle may compile 24 a local dynamic map (LDM) of all road users as a list using object data 25 from V2X equipped road users and iterate 26 through the LDM to exclude duplicates).

However, Miucic does not disclose many aspects of claim 5.
Nevertheless, Jha, who is in the same field of endeavor of multi-access edge computing for roadside units, discloses cryptographically signed SAE J2735 standard Basic Safety Messages (BSM) (0006, an RSU transmits and receive messages as defined in Society of Automotive Engineers (SAE) Standard J2735. In the case of the Situation Data Clearinghouse and Situation Data Warehouse), wherein for the determining run-time ground truth perception, the one or more processors are further programmed and configured to: extract object-actor associated data in the received SCMS cryptographically signed SAE J2735 participant broadcasted Basic Safety Messages/Cooperative Awareness Messages (0006, this allows devices to take advantage of services such as the United States Department of Transportation (USDOT) Situation Data Clearinghouse, Situation Data Warehouse, and Security Credential Management System (SCMS) as well as other public and private network services. Secondly, an RSU transmits and receive messages as defined in Society of Automotive Engineers (SAE) Standard J2735), from which an object-actor's pose is determined, wherein the received data includes but is not limited to a reported object-actor's real-time kinematic GNSS corrected position (0014, the NTCIP protocol is extended: (i) to define the secure transmission of lane-centric, 3-dimensional road information, traffic statistics, weather conditions, GNSS coordinates, and (ii) to facilitate an RSU to communicate such information with the management center for such purposes).

However, even the combination of Miucic and Jha does not disclose aspects of claim 5. Furthermore, ETSI (ETSI TS 103 324 V2.1.1), which is in the same field of endeavor of intelligent transport systems for vehicular communications, discloses spatial dimensions, including a length, a width, and/or a height (7.1.8.4, Object dimensions and age, one or more of the components objectDimensionX, objectDimensionY, objectDimensionZ may be included for each object, indicating the size of the bounding box of the perceived object in each direction).

However, even the combination of Miucic, Jha, and ETSI (ETSI TS 103 324 V2.1.1) does not disclose aspects of claim 5. Furthermore, ETSI (ETSI EN 302 637-2 V1.4.1), which is in the same field of endeavor of intelligent transport systems for vehicular communications, discloses the categorical-type classification of the object-actor (4.1 Background, the status information includes time, position, motion state, activated systems, etc. and the attribute information includes data about the dimensions, vehicle type and role in the road traffic, etc.), and wherein the BSM/CAM extracted data is transformed into a format to store and retrieve the represented object-actor as an artifact within a system relative coordinate reference map (4.4, upon receiving a CAM, the CA basic service makes the content of the CAM available to the ITS applications and/or to other facilities within the receiving ITS-S, such as a Local Dynamic Map (LDM)).

However, even the combination of Miucic, Jha, ETSI (ETSI TS 103 324 V2.1.1), and ETSI (ETSI EN 302 637-2 V1.4.1) does not disclose aspects of claim 5.
However, Katz, who is in the same field of endeavor of using V2X and sensor data, discloses predicting object-actors presence and pose present within the fields of view of the one or more sensors of the system; wherein the predicted object-actor output is sent to the system relative coordinate reference map as predicted object-actor artifact (0078, processing unit 104 can use the transformation function, e.g., as described above, to create a virtual map from the user parameters calculated from inputs from the sensor 102, such as, location and/or pose in pixels/point cloud space, classification, speed, acceleration, bearing and past and predicted trajectory); when no sensor associated object-actor artifacts are identified as a match to a BSM/CAM artifact an unmatched BSM/CAM is the basis for constituting a ground truth perception object-actor (0063, processing unit 104 may create a message for each non-connected road user detected in step 126. The message (e.g. BSM and/or CAM and/or PSM, in current standards) typically contains the calculated user parameters (e.g., location, speed, acceleration, bearing, classification, past and predicted trajectory, etc.) and can be broadcasted via the V2X communication module 103 modem at the required frequency (e.g. 10hz for vehicles, 2hz for pedestrians), to all connected road users in the vicinity of the site), the outcome of the matched artifacts and map update is run-time ground truth perception (0068, a virtual map may be created (e.g., calculated) periodically (e.g., at a predetermined frequency). In some embodiments the virtual map may be a dynamic virtual map that is updated periodically), wherein the system relative coordinate reference map updates to reflect the outcome state for subsequent system runtime use and/or external transmission of ground truth data (0063, processing unit 104 may create a message for each non-connected road user detected in step 126. The message (e.g. BSM and/or CAM and/or PSM, in current standards) typically contains the calculated user parameters (e.g., location, speed, acceleration, bearing, classification, past and predicted trajectory, etc.)); and persist the matched object-actors artifacts as ground truth object-actor data artifacts in the system relative coordinate map which subsequently are logged with cross-matched and reconciled data associated with Basic Safety Messages/Cooperative Awareness Messages and road environment sensor-perception data (0067, the list of total road users is matched to a list of connected road-users and at least the locations of the connected and non-connected road users are determined. [0062] Processing unit 104 may create a virtual map using the determined locations of the connected and non-connected road users).

However, even the combination of Miucic, Jha, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Katz does not disclose aspects of claim 5.
Finally, Baek, who is in the same field of endeavor of perception based on the fusion of vehicular wireless communications and automotive remote sensors, discloses determine the existence of two or more artifacts loaded in the system relative coordinate reference map that correspond to the same object-actor in the road environment (Abstract, a track-to-track fusion of high-level sensor data and vehicular wireless communication data was performed to accurately and reliably locate the remote target in the vehicle surroundings and predict the future trajectory), where a minimum of one of the two or more artifacts is associated with BSM/CAM data upon which one or more algorithms and/or machine learning models establishes the existence of a match between two or more object-actor representative artifacts (2.3. V2X Communications, the vehicle state information is broadcast and shared among the vehicles equipped with DSRC devices by exchanging the BSM, which is defined in the SAE J2735 message set dictionary [29]. The BSM contains data obtained from the vehicle CAN bus and the GNSS receiver), wherein a matched set of artifacts constitutes a ground truth object-actor in the system reference map (3.2. Data Fusion, each sensor system outputs one or more tracks based on the sensor measurements, and the state estimates from multiple sensor tracks are associated and combined with a track-to-track fusion algorithm).

Regarding claim 6, Miucic, Jha, Katz, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Baek disclose the method of claim 5, as discussed supra. Additionally, Katz discloses the one or more processors are further programmed or configured to: determine the ground truth velocity of an object-actor based on proceeding established run-time ground truth perception data in series (0078, the map may also include information relating to parameters (such as location, pose, classification, speed, acceleration, bearing and past and predicted trajectory) of connected users, who are not within the FOV of sensor 102. This information will typically be received from the V2X communication module 103, whereas information relating to parameters of a connected user who is within the FOV of sensor 102, will include information from both sensor 102 and V2X communication module 103).

Regarding claim 7, Miucic, Jha, Katz, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Baek disclose the method of claim 5, as discussed supra. Additionally, Katz discloses the one or more processors are further programmed or configured to: determine the ground truth acceleration of an object-actor based on proceeding established run-time ground truth perception data in series (0078, the map may also include information relating to parameters (such as location, pose, classification, speed, acceleration, bearing and past and predicted trajectory) of connected users, who are not within the FOV of sensor 102. This information will typically be received from the V2X communication module 103, whereas information relating to parameters of a connected user who is within the FOV of sensor 102, will include information from both sensor 102 and V2X communication module 103).
Regarding claim 9, Miucic discloses at least one non-transitory computer readable medium storing at least one computer program product that comprises one or more instructions that cause at least one processor to perform operations comprising: receiving data associated with road environment object-actor sensor based detection and classification predictions (0033, the host vehicle may then execute V2X applications 28 with inputs from the LDM, including objects detected from V2X communications 25 as well as objects detected by vision sensor(s)) … (0030, a computer vision system 10 may detect objects and send messages (e.g., over ethernet or the host vehicle controller area network (CAN) 14) to the V2X OBU with the following information: detected object type (e.g., passenger vehicle, truck, pedestrian, motorcycle, bicycle, traffic light, etc.); relative position of detected objects; estimated speed and/or acceleration of detected objects; estimated heading of detected objects; other attributes such as turn signal, interpreted hand gesture, open door, etc.), and IEEE 1609 standard Security Credential Management System (SCMS) (0022, It transmits messages known as Cooperative Awareness Messages (CAM) and Decentralized Environmental Notification Messages (DENM) or Basic Safety Message (BSM). The data volume of these messages is very low. The radio technology is part of the WLAN 802.11 family of standards developed by the Institute of Electrical and Electronics Engineers (IEEE) and known in the United States as Wireless Access in Vehicular Environments (WAVE) and in Europe as ITS-G5), Basic Safety Messages (BSM) and/or Cooperative Awareness Messages (CAM); (0026, as is known to those of ordinary skill in the art (see, e.g., SAE J2945), all V2X communications may include a Basic Safety Message (BSM) or a Cooperative Awareness Message (CAM)), and determine run-time ground truth perception based on reconciled and matched data artifacts constructed in a system relative coordinate reference map associated from BSM/CAM reported object-actor data (0033, a host vehicle may convert 22 vision detected objects from a relative (i.e., camera) coordinate system to an absolute coordinate system based on the host vehicle position 18, host vehicle data 20, and detected road user data 16) … (0033, the host vehicle may also transmit 30 proxy-BSM or Collective Perception Messages (CPM) for road users not equipped with V2X communication systems and also transmit 32 host vehicle BSMs or Cooperative Awareness Messages (CAM)), and predicted object-actors data artifacts (0030, a computer vision system 10 may detect objects and send messages (e.g., over ethernet or the host vehicle controller area network (CAN) 14) to the V2X OBU with the following information: detected object type (e.g., passenger vehicle, truck, pedestrian, motorcycle, bicycle, traffic light, etc.); relative position of detected objects; estimated speed and/or acceleration of detected objects; estimated heading of detected objects), and wherein after the determined ground truth object-actor artifact is constructed the precursor artifacts used to make the match are removed from the reference map (0033, the host vehicle may compile 24 a local dynamic map (LDM) of all road users as a list using object data 25 from V2X equipped road users and iterate 26 through the LDM to exclude duplicates).

However, Miucic does not disclose many aspects of claim 9.
Nevertheless, Jha, who is in the same field of endeavor of multi-access edge computing for roadside units, discloses cryptographically signed SAE J2735 standard Basic Safety Messages (BSM) (0006, an RSU transmits and receive messages as defined in Society of Automotive Engineers (SAE) Standard J2735. In the case of the Situation Data Clearinghouse and Situation Data Warehouse), wherein for the determining run-time ground truth perception, the one or more processors are further programmed and configured to: extract object-actor associated data in the received SCMS cryptographically signed SAE J2735 participant broadcasted Basic Safety Messages/Cooperative Awareness Messages (0006, this allows devices to take advantage of services such as the United States Department of Transportation (USDOT) Situation Data Clearinghouse, Situation Data Warehouse, and Security Credential Management System (SCMS) as well as other public and private network services. Secondly, an RSU transmits and receive messages as defined in Society of Automotive Engineers (SAE) Standard J2735), from which an object-actor's pose is determined, wherein the received data includes but is not limited to a reported object-actor's real-time kinematic GNSS corrected position (0014, the NTCIP protocol is extended: (i) to define the secure transmission of lane-centric, 3-dimensional road information, traffic statistics, weather conditions, GNSS coordinates, and (ii) to facilitate an RSU to communicate such information with the management center for such purposes).

However, even the combination of Miucic and Jha does not disclose aspects of claim 9. Furthermore, ETSI (ETSI TS 103 324 V2.1.1), which is in the same field of endeavor of intelligent transport systems for vehicular communications, discloses spatial dimensions, including a length, a width, and/or a height (7.1.8.4, Object dimensions and age, one or more of the components objectDimensionX, objectDimensionY, objectDimensionZ may be included for each object, indicating the size of the bounding box of the perceived object in each direction).

However, even the combination of Miucic, Jha, and ETSI (ETSI TS 103 324 V2.1.1) does not disclose aspects of claim 9. Furthermore, ETSI (ETSI EN 302 637-2 V1.4.1), which is in the same field of endeavor of intelligent transport systems for vehicular communications, discloses the categorical-type classification of the object-actor (4.1 Background, the status information includes time, position, motion state, activated systems, etc. and the attribute information includes data about the dimensions, vehicle type and role in the road traffic, etc.), and wherein the BSM/CAM extracted data is transformed into a format to store and retrieve the represented object-actor as an artifact within a system relative coordinate reference map (4.4, upon receiving a CAM, the CA basic service makes the content of the CAM available to the ITS applications and/or to other facilities within the receiving ITS-S, such as a Local Dynamic Map (LDM)).

However, even the combination of Miucic, Jha, ETSI (ETSI TS 103 324 V2.1.1), and ETSI (ETSI EN 302 637-2 V1.4.1) does not disclose aspects of claim 9.
However, Katz, who is in the same field of endeavor of using V2X and sensor data, discloses predicting object-actors presence and pose present within the fields of view of the one or more sensors of the system; wherein the predicted object-actor output is sent to the system relative coordinate reference map as predicted object-actor artifact (0078, processing unit 104 can use the transformation function, e.g., as described above, to create a virtual map from the user parameters calculated from inputs from the sensor 102, such as, location and/or pose in pixels/point cloud space, classification, speed, acceleration, bearing and past and predicted trajectory); when no sensor associated object-actor artifacts are identified as a match to a BSM/CAM artifact an unmatched BSM/CAM is the basis for constituting a ground truth perception object-actor (0063, processing unit 104 may create a message for each non-connected road user detected in step 126. The message (e.g. BSM and/or CAM and/or PSM, in current standards) typically contains the calculated user parameters (e.g., location, speed, acceleration, bearing, classification, past and predicted trajectory, etc.) and can be broadcasted via the V2X communication module 103 modem at the required frequency (e.g. 10hz for vehicles, 2hz for pedestrians), to all connected road users in the vicinity of the site), the outcome of the matched artifacts and map update is run-time ground truth perception (0068, a virtual map may be created (e.g., calculated) periodically (e.g., at a predetermined frequency). In some embodiments the virtual map may be a dynamic virtual map that is updated periodically), wherein the system relative coordinate reference map updates to reflect the outcome state for subsequent system runtime use and/or external transmission of ground truth data (0063, processing unit 104 may create a message for each non-connected road user detected in step 126. The message (e.g. BSM and/or CAM and/or PSM, in current standards) typically contains the calculated user parameters (e.g., location, speed, acceleration, bearing, classification, past and predicted trajectory, etc.)); and persist the matched object-actors artifacts as ground truth object-actor data artifacts in the system relative coordinate map which subsequently are logged with cross-matched and reconciled data associated with Basic Safety Messages/Cooperative Awareness Messages and road environment sensor-perception data (0067, the list of total road users is matched to a list of connected road-users and at least the locations of the connected and non-connected road users are determined. [0062] Processing unit 104 may create a virtual map using the determined locations of the connected and non-connected road users).

However, even the combination of Miucic, Jha, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Katz does not disclose aspects of claim 9.
Finally, Baek, who is in the same field of endeavor of perception based on the fusion of vehicular wireless communications and automotive remote sensors, discloses determine the existence of two or more artifacts loaded in the system relative coordinate reference map that correspond to the same object-actor in the road environment (Abstract, a track-to-track fusion of high-level sensor data and vehicular wireless communication data was performed to accurately and reliably locate the remote target in the vehicle surroundings and predict the future trajectory), where a minimum of one of the two or more artifacts is associated with BSM/CAM data upon which one or more algorithms and/or machine learning models establishes the existence of a match between two or more object-actor representative artifacts (2.3. V2X Communications, the vehicle state information is broadcast and shared among the vehicles equipped with DSRC devices by exchanging the BSM, which is defined in the SAE J2735 message set dictionary [29]. The BSM contains data obtained from the vehicle CAN bus and the GNSS receiver), wherein a matched set of artifacts constitutes a ground truth object-actor in the system reference map (3.2. Data Fusion, each sensor system outputs one or more tracks based on the sensor measurements, and the state estimates from multiple sensor tracks are associated and combined with a track-to-track fusion algorithm).

Regarding claim 10, Miucic, Jha, Katz, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Baek disclose the at least one non-transitory computer readable medium of claim 9, as discussed supra. Additionally, Katz discloses one or more instructions that cause the at least one processor to determine run-time ground truth perception cause the at least one processor to perform operations comprising: determine the ground truth velocity of an object-actor based on proceeding established run-time ground truth perception data in series (0078, the map may also include information relating to parameters (such as location, pose, classification, speed, acceleration, bearing and past and predicted trajectory) of connected users, who are not within the FOV of sensor 102. This information will typically be received from the V2X communication module 103, whereas information relating to parameters of a connected user who is within the FOV of sensor 102, will include information from both sensor 102 and V2X communication module 103).

Regarding claim 11, Miucic, Jha, Katz, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Baek disclose the at least one non-transitory computer readable medium of claim 9, as discussed supra. Additionally, Katz discloses the one or more processors are further programmed or configured to: determine the ground truth acceleration of an object-actor based on proceeding established run-time ground truth perception data in series (0078, the map may also include information relating to parameters (such as location, pose, classification, speed, acceleration, bearing and past and predicted trajectory) of connected users, who are not within the FOV of sensor 102. This information will typically be received from the V2X communication module 103, whereas information relating to parameters of a connected user who is within the FOV of sensor 102, will include information from both sensor 102 and V2X communication module 103).

Claims 4, 8, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Miucic et al. (US20230059897A1) in view of Jha et al.
(US20220172609A1), further in view of Katz et al. (US20220013008A1), further in view of ETSI (ETSI TS 103 324 V2.1.1), further in view of ETSI (ETSI EN 302 637-2 V1.4.1), further in view of Baek et al. (Driving Environment Perception Based on the Fusion of Vehicular Wireless Communications and Automotive Remote Sensors), further in view of Gaidon et al. (US20200134379A1).

Regarding claim 4, Miucic, Jha, Katz, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Baek disclose the system of claim 1, as discussed supra. Additionally, Gaidon, who is in the same field of endeavor of the auto-labeling of driving logs, discloses the one or more processors are further programmed or configured to: construct labeled data for the post run training of machine learning models (0047, the labeled real-world driving logs can be used in connection with machine learning for training, validation, evaluation, and/or model management purposes), wherein the run-time determined ground truth perception data and the raw data from the system's one or more sensors are logged and timestamped (0034, the driving logs and their automatically generated labels can be stored and indexed using standard database techniques in one or data stores for future retrieval and training purposes), wherein the logged and timestamped ground truth perception data generated at run-time comprises labeled features associated to the logged sensor data (10, the one or more labeled real-world driving logs include raw measurements captured by the one or more vehicle sensors and from ground truth labels that have been automatically labeled by the simulation-to-real automatic labeling).

It would have been obvious to one skilled in the art to combine the combination of Miucic, Jha, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), Katz, and Baek with Gaidon, who teaches constructing labeled datasets by logging raw sensor data with ground truth labels for machine learning training. This would persist the fused runtime "ground truth" object states from Baek and Miucic as the labels and store them alongside the simultaneously captured raw sensor frames to improve model performance through offline retraining.

Regarding claim 8, Miucic, Jha, Katz, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Baek disclose the method of claim 5, as discussed supra. Additionally, Gaidon, who is in the same field of endeavor of the auto-labeling of driving logs, discloses the one or more processors are further programmed or configured to: construct labeled data for the post run training of machine learning models (0047, the labeled real-world driving logs can be used in connection with machine learning for training, validation, evaluation, and/or model management purposes), wherein the run-time determined ground truth perception data and the raw data from the system's one or more sensors are logged and timestamped (0034, the driving logs and their automatically generated labels can be stored and indexed using standard database techniques in one or data stores for future retrieval and training purposes), wherein the logged and timestamped ground truth perception data generated at run-time comprises labeled features associated to the logged sensor data (10, the one or more labeled real-world driving logs include raw measurements captured by the one or more vehicle sensors and from ground truth labels that have been automatically labeled by the simulation-to-real automatic labeling).
Regarding claim 12, Miucic, Jha, Katz, ETSI (ETSI TS 103 324 V2.1.1), ETSI (ETSI EN 302 637-2 V1.4.1), and Baek disclose the at least one non-transitory computer readable medium of claim 9, as discussed supra. Additionally, Gaidon, who is in the same field of endeavor of the auto-labeling of driving logs, discloses instructions that cause the at least one processor to determine run-time ground truth perception cause the at least one processor to perform operations comprising: constructing labeled data for …
(Excerpt truncated here; read the full office action for the remainder of the claim 12 analysis.)
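
For readers new to the claimed subject matter, the matching step that the §103 rejection maps onto Miucic, Katz, and Baek can be made concrete: sensor-predicted artifacts and authenticated BSM/CAM artifacts coexist in one system-relative map, a gated nearest-neighbor match produces a ground-truth artifact and removes the precursor artifacts, and an unmatched BSM/CAM stands alone as ground truth. The sketch below is our illustration of that loop; the dataclass fields, the 2-meter gate, and the position averaging are assumptions, not the application's or any reference's actual code.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    x: float                 # position in the system-relative map frame
    y: float
    source: str              # "sensor", "bsm_cam", or "ground_truth"
    kind: str                # categorical type, e.g. "vehicle", "pedestrian"

def reconcile(artifacts, gate=2.0):
    """Pair sensor artifacts with BSM/CAM artifacts of the same kind within
    a distance gate; matched pairs and unmatched BSM/CAM artifacts become
    ground-truth entries, and matched precursors leave the map."""
    sensors = [a for a in artifacts if a.source == "sensor"]
    messages = [a for a in artifacts if a.source == "bsm_cam"]
    ground_truth = []
    for m in messages:
        near = [s for s in sensors
                if s.kind == m.kind
                and (s.x - m.x) ** 2 + (s.y - m.y) ** 2 <= gate ** 2]
        if near:
            s = min(near, key=lambda s: (s.x - m.x) ** 2 + (s.y - m.y) ** 2)
            sensors.remove(s)   # precursor artifact is removed once matched
            ground_truth.append(Artifact((s.x + m.x) / 2, (s.y + m.y) / 2,
                                         "ground_truth", m.kind))
        else:
            # an unmatched BSM/CAM is itself the basis for a ground-truth object
            ground_truth.append(Artifact(m.x, m.y, "ground_truth", m.kind))
    return ground_truth
```

Whether such a loop amounts to "significantly more" than the abstract idea is exactly what the §101 dispute above turns on; the sketch is only meant to make the claim language easier to follow.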

Prosecution Timeline

Mar 21, 2024 — Application Filed
Oct 28, 2024 — Response after Non-Final Action
Nov 08, 2025 — Non-Final Rejection, §101 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592101 — INFORMATION COMMUNICATION DEVICE OF VEHICLE, INFORMATION MANAGEMENT SERVER, AND INFORMATION COMMUNICATION SYSTEM
Granted Mar 31, 2026 (2y 5m to grant). Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 17%
With Interview: 39% (+22.2%)
Median Time to Grant: 2y 4m
PTA Risk: Low

Based on 12 resolved cases by this examiner. Grant probability is derived from the career allow rate.
