Prosecution Insights
Last updated: April 19, 2026
Application No. 18/358,366

SPATIAL AWARENESS VIA GAP FILLING

Non-Final OA — §103
Filed: Jul 25, 2023
Examiner: DOROS, KAYLA RENEE
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Qualcomm Incorporated
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 3-4
To Grant: 2y 6m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 73% (19 granted / 26 resolved; +21.1% vs TC avg) — above average
Interview Lift: +2.8% (minimal, roughly +3%), based on resolved cases with interview
Avg Prosecution: 2y 6m (typical timeline; 30 applications currently pending)
Total Applications: 56 (career history, across all art units)

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§103: 53.7% (+13.7% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 26 resolved cases
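For readers who want to reproduce figures like these from raw prosecution data, the sketch below shows one way the career and statute-specific rates could be derived. It is a minimal illustration, not this dashboard's actual pipeline: the record fields and sample data are hypothetical, and the statute-specific metric is an assumed reading, since the report does not define it. One grounded observation is baked in: all four reported deltas imply the same 40.0% Tech Center baseline.

```python
from collections import defaultdict

# Hypothetical resolved-case records; fields are illustrative, not the
# dashboard's actual schema. Each case lists the statutes under which it
# was rejected and whether the application was ultimately granted.
cases = [
    {"granted": True,  "statutes": ["103"]},
    {"granted": False, "statutes": ["103", "112"]},
    {"granted": True,  "statutes": ["102", "103"]},
    # ... 26 resolved cases in total for this examiner
]

# Assumed baseline. All four reported deltas imply the same TC average:
# 7.7 + 32.3 = 16.7 + 23.3 = 53.7 - 13.7 = 19.6 + 20.4 = 40.0.
TC_AVG = 40.0

def career_allow_rate(cases):
    # e.g., 19 granted / 26 resolved -> 73.1%, shown on the card as 73%
    return 100.0 * sum(c["granted"] for c in cases) / len(cases)

def statute_allow_rates(cases):
    # Assumed metric: allow rate among resolved cases that received at
    # least one rejection under the given statute.
    seen, won = defaultdict(int), defaultdict(int)
    for c in cases:
        for s in set(c["statutes"]):
            seen[s] += 1
            won[s] += c["granted"]
    return {s: 100.0 * won[s] / seen[s] for s in sorted(seen)}

print(f"Career allow rate: {career_allow_rate(cases):.1f}%")
for s, rate in statute_allow_rates(cases).items():
    print(f"§{s}: {rate:.1f}% ({rate - TC_AVG:+.1f}% vs TC avg)")
```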

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This non-final Office action is a response to the request for continued examination (RCE) received on 11/05/2025. Claims 1-30 are pending. Independent Claims 1, 15, 24, and 29 have been amended.

Information Disclosure Statement

The information disclosure statement (IDS) received on 11/05/2025 has been annotated and considered.

Response to Arguments

Applicant's amendments overcome the previous 112(b) rejection. Regarding the arguments on page 11 of the remarks directed toward the prior art reference Chaton, the arguments have been considered but are not persuasive. Although Chaton does disclose an example scenario in ¶0086 regarding a situation where a pedestrian/vulnerable road user (VRU) does not have a data connection to the computing apparatus, this is merely an illustrative example showing that the system of Chaton can operate without relying on the contextual data from the user equipment (UE) of a VRU. Chaton does not disclose that the VRU user equipment is excluded from the network, or that it cannot be used. This does not prove that the consideration of data from the VRU user equipment would not be an obvious addition. Instead, Chaton discloses a system that accepts and fuses environmental perception data from multiple different sources (vehicles, smart objects, etc.). Furthermore, in light of the new grounds of rejection via the prior art reference Jha et al. (see the §103 rejection below), the additional consideration of information communicated by VRU user equipment is an obvious supplement to the system in order to achieve the same goal of increasing safety and avoiding collision. Thus, the arguments with respect to Claims 1-30 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for teaching or matter specifically challenged in the argument.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and (C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Such claim limitations are as follows in Claim 24:

"means for receiving first contextual information from a plurality of OBUs…" Structure and support are found in applicant's specification in at least ¶0083.

"means for generating a gap-filling message customized for a given OBU…" Structure and support are found in applicant's specification in at least ¶0086.

"means for sending the gap-filling message to the given OBU…" Structure and support are found in applicant's specification in at least ¶0091.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-7, 9-13, 15-17, 19, and 21-30 are rejected under 35 U.S.C. 103 as being unpatentable over Chaton (EP 3761285 A1, IDS) in view of Oshida et al. (US 20160071418 A1) and Jha et al. (US 20230110467 A1).

Regarding Claim 1, Chaton discloses:

A method of providing spatial awareness to an on-board unit (OBU) of a vehicle, the method comprising: (See at least Figure 1, wherein S101 and S102 gather data from OBUs of a first and second smart object--which can be vehicles as disclosed in at least ¶0052, S103 combines/merges the data, and S104 transmits an excerpt/message providing spatial awareness information to a vehicle/OBU. Also see at least Figure 2 via Vehicle 2 and Vehicle Control Device 20, which is an OBU; see at least ¶0051.)

receiving first contextual information from a plurality of OBUs (See at least Figure 1 via S101 and S102: "receive first sensor data" and "receive second sensor data", wherein the sensor data is received by the computing apparatus 10, which can be a server--see at least ¶0045. Additionally see at least ¶0052 via "The subject vehicle 2 is a first sensing object, and may be a member of a network that also includes the computing apparatus 10. The subject vehicle 2 is exemplary of a Smart Connected Thing (SCoT), or smart object, in the network. The second sensing object or objects may also be exemplary of Smart Connecting Things (SCoTs), or smart objects, in the network." and also ¶0060 via "The first sensor data is data from the sensors provided by the subject vehicle, and the second sensor data is data from sensors provided by objects other than the subject vehicle (e.g. other vehicles, or stationary sensors)". Also see at least Figures 3/5, which illustrate a plurality of smart objects 7.)

the first contextual information comprising optically sensed information, spatially sensed information, or a combination thereof obtained by the plurality of OBUs (See at least ¶0051 via "One or more sensors 28 may be mounted on the vehicle 20, providing realtime sensor data representing location information of objects within the physical domain 200 to the computing apparatus 10" and see at least Figure 4, which illustrates vehicle control device 20, and ¶0067: "The sensors 28 may include one or more from among: a LIDAR 281, a RADAR 282, camera 283, and any other sensor providing information indicating the location and/or position and/or velocity and/or acceleration of the vehicle 2 or any other object within the physical domain 200")

generating a gap-filling message customized for a given OBU of the plurality of OBUs based on a set of contextual information derived from the received first contextual information, (See at least ¶0060 via "At S104, an excerpt of the augmented dataset is transmitted to the vehicle 2.
The augmented dataset is filtered or triaged to select an excerpt that is relevant to the subject vehicle, and that excerpt is transmitted via the above-described data communication link between the vehicle 2 and the computing apparatus 10 executing the method. Transmitting an excerpt prevents transmission bandwidth being wasted with irrelevant data.")

the set of contextual information comprising a union of (i) the first contextual information obtained by the plurality of OBUs (See at least Figure 1 via S103 and also see at least ¶0057 via "For example, S103 may comprise merging the first sensor data and the second sensor data to generate a augmented dataset representing location information of a union of the first set of objects and the second set of objects." *Wherein the first contextual information is the first sensor data and the second sensor data, and the second contextual data is the first sensor data sensed by the given Vehicle 2.)

such that the first contextual information obtained by the plurality of OBUs and the second contextual information known to the given OBU do not overlap in the set of contextual information; and (See at least Figure 1 via S103 and ¶0057 via "The merging S103 may include reconciling the first set of objects and the second set of objects so that representations of location information of the same object in both of the datasets can be identified and, for example, associated with one another for further processing (i.e. comparison, duplicate removal). The merging S103 may include redundancy removal. The result of the merging S103 is a augmented dataset comprising the first sensor data merged with the second sensor data.")

sending the gap-filling message to the given OBU, wherein the gap-filling message provides (See at least Figure 1 via S104 and also at least ¶0060)

However, Chaton does not explicitly disclose the optically sensed information including at least optical image data, and the sending of the optical image data to the OBU that is unavailable to the given OBU. Nevertheless, Oshida--who is directed towards vehicle control--discloses:

the optically sensed information including at least optical image data; (See at least ¶0039 via "The sensor component 130 may include one or more radar units, image capture components, sensors, cameras, gyroscopes, accelerometers, scanners (e.g., 2-D scanners or 3-D scanners), or other measurement components." and ¶0012 via "In this way, an assist component 180 may utilize V2V communications or other communications, such as cloud communications to receive information or to detect and inform drivers of objects or obstacles in the roadway, or react accordingly, such as by providing a visual notification or a real time image of the object, including lane level detail location information, or by performing an automatic lane change.")

wherein the gap-filling message provides optically sensed information, (See at least Figure 5, which illustrates Vehicle 590 detecting/imaging a hazard that is not in the line of sight of Vehicle 500A. Also see ¶0137 via "Upon detection of an object or hazard 510, the first vehicle 590 may capture an image of the hazard 510 and determine a lane level location for the hazard 510. In this example, the hazard 510 is associated with lane A. The first vehicle 590 may transmit an image of the hazard 510 and lane level information associated with the hazard 510." as well as ¶0138 via "In other embodiments, the first vehicle 590 may transmit the image or information directly to a second vehicle 500A." Additionally, see Figure 6.)

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify Chaton in view of Oshida in order to determine if corrective action is needed: "In any event, after the image or location information of the reported hazard is received, the second vehicle may perform lane level matching to determine a current lane location or lane position for the second vehicle. If this lane level matching determined that the second vehicle is in the same lane as the reported obstacle, corrective action may be taken 650 based on the lane position of the second vehicle, one or more available lanes, a current traffic situation, other detected obstacles, a contour of the roadway, etc." [Oshida ¶0143], which would allow for better hazard/collision avoidance as the vehicle can sooner obtain the information that would otherwise be unavailable.

However, modified Chaton does not explicitly disclose the receiving of contextual information from a UE associated with a VRU. Nevertheless, Jha--who is directed towards a collective perception service--discloses:

receiving first contextual information from (See at least Figure 12, which depicts VRU 1216 carrying respective UE 1210v, and wireless communication links such as 1220v, which shows the communication link between the UE and the network access node (NAN) 1230, illustrating the transmission from the VRU/UE to the network. Furthermore, see at least ¶0064, which describes that the location/spatial information (contextual information) is shared: "The information regarding the location and dynamic state of the perceived object is provided in a coordinate system that is used for the description of the object's state variables in case of a vehicle sharing information about a detected object (see e.g., [ISO8855] and [TS103324]). In case an R-ITS-S 1230 is disseminating the CPM 100, the reference position refers to the reference position as defined in [CEN-ISO/TS19091] and/or [TS103324] (e.g., an arbitrary point on the intersection)." as well as ¶0076, which describes the information from the VRU/UE via "The object class "groupSubClass" is used to report a VRU group or cluster. A VRU group contains a set of VRUs (e.g., VRU 1216, 1210v) perceived by the ITS-S generating the CPM 100")

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify Modified Chaton's method of providing spatial awareness to an on-board unit of a vehicle in view of incorporating contextual data that is received from a vulnerable road user/user equipment in order to support and supplement Modified Chaton's central computing system that merges information from multiple data sources (vehicles, smart connected things, etc.) by providing an additional source of information to yield improved environmental perception and increased precision when navigating on the road: "This allows CPS-enabled ITS-Ss to enhance their environmental perception not only regarding non-V2X-equipped road users and drivable regions, but also increasing the number of information sources for V2X-equipped road users. A higher number of independent sources generally increases trust and leads to a higher precision of the environmental perception" [Jha ¶0023], as well as to further improve the collision avoidance: "by comparing the status of the detected road user or received object information, the receiving ITS-S sub-system is able to estimate the collision risk with such a road user or object and may inform the user via the HMI of the receiving ITS sub-system or take corrective actions automatically" [Jha ¶0026]. Furthermore, although Chaton discloses that the system still operates in an example where VRUs may not have connectivity, this is merely an illustrative example and does not prohibit the inclusion of VRU/UEs. Chaton's disclosure emphasizes receiving the contextual information from a plurality of smart objects, which is further why it would be obvious to utilize the additional information when available (in cases where the VRU/UE can be connected) in order to supplement the perception when addressing blind spots in the environment.

Regarding Claim 15, Chaton discloses:

An apparatus comprising: one or more data communication interfaces; one or more memory; and one or more processors communicatively coupled to the one or more data communication interfaces and the one or more memory, the one or more processors configured to: (See at least Figure 12 via "computing apparatus". Also see at least ¶0098 via "The computing apparatus comprises a processor 993, and memory, 994. The computing apparatus also includes a network interface 997 for communication with other computing apparatus, for example with vehicle control devices 20 of embodiments. The computing apparatus 10 may be a computing device of the form shown in Figure 12") (Regarding the instructions: see the Claim 1 rejection, because the steps are the same.)

Regarding Claim 24, Chaton discloses:

An apparatus comprising: (See at least Figure 12 via "computing apparatus")

means for receiving first contextual information from a plurality of OBUs and (See at least Figure 12 via "computing apparatus". Also see at least ¶0098 via "The computing apparatus comprises a processor 993, and memory, 994. The computing apparatus also includes a network interface 997 for communication with other computing apparatus, for example with vehicle control devices 20 of embodiments." *Wherein the contextual information is received by the computing apparatus, which can be a server as disclosed in at least ¶0098. Also see at least ¶0052 via "The subject vehicle 2 is a first sensing object, and may be a member of a network that also includes the computing apparatus 10. The subject vehicle 2 is exemplary of a Smart Connected Thing (SCoT), or smart object, in the network. The second sensing object or objects may also be exemplary of Smart Connecting Things (SCoTs), or smart objects, in the network." and also ¶0060 via "The first sensor data is data from the sensors provided by the subject vehicle, and the second sensor data is data from sensors provided by objects other than the subject vehicle (e.g. other vehicles, or stationary sensors)".
Also see at least Figures 3/5, which illustrate a plurality of smart objects 7.)

the first contextual information comprising optically sensed information, spatially sensed information, or a combination thereof obtained by the plurality of OBUs (See at least ¶0051 via "One or more sensors 28 may be mounted on the vehicle 20, providing realtime sensor data representing location information of objects within the physical domain 200 to the computing apparatus 10" and see at least Figure 4, which illustrates vehicle control device 20, and ¶0067: "The sensors 28 may include one or more from among: a LIDAR 281, a RADAR 282, camera 283, and any other sensor providing information indicating the location and/or position and/or velocity and/or acceleration of the vehicle 2 or any other object within the physical domain 200")

means for generating a gap-filling message customized for a given OBU of the plurality of OBUs based on a set of contextual information derived from the received first contextual information, (See at least Figure 12 via "computing apparatus". Also see at least ¶0098 via "The computing apparatus comprises a processor 993, and memory, 994. The computing apparatus also includes a network interface 997 for communication with other computing apparatus, for example with vehicle control devices 20 of embodiments." *Wherein the gap-filling message is generated in at least Figure 1 S104 by the computing apparatus, which can be a server as disclosed in at least ¶0098. Additionally see at least ¶0060 via "At S104, an excerpt of the augmented dataset is transmitted to the vehicle 2. The augmented dataset is filtered or triaged to select an excerpt that is relevant to the subject vehicle, and that excerpt is transmitted via the above-described data communication link between the vehicle 2 and the computing apparatus 10 executing the method. Transmitting an excerpt prevents transmission bandwidth being wasted with irrelevant data.")

the set of contextual information comprising a union of (i) the first contextual information obtained by the plurality of OBUs, (See at least Figure 1 via S103 and also see at least ¶0057 via "For example, S103 may comprise merging the first sensor data and the second sensor data to generate a augmented dataset representing location information of a union of the first set of objects and the second set of objects." *Wherein the first contextual information is the first sensor data and the second sensor data, and the second contextual data is the first sensor data sensed by the given Vehicle 2.)

such that the first contextual information obtained by the plurality of OBUs and the second contextual information known to the given OBU do not overlap in the set of contextual information; and (See at least Figure 1 via S103 and ¶0057 via "The merging S103 may include reconciling the first set of objects and the second set of objects so that representations of location information of the same object in both of the datasets can be identified and, for example, associated with one another for further processing (i.e. comparison, duplicate removal). The merging S103 may include redundancy removal. The result of the merging S103 is a augmented dataset comprising the first sensor data merged with the second sensor data." See also ¶0077, where gaps such as blind spots are identified from real time sensor data from vehicle 2 and the data gap filled with sensors from other vehicles.)

means for sending the gap-filling message to the given OBU, wherein the gap-filling message provides (See at least Figure 12 via "computing apparatus". Also see at least ¶0098 via "The computing apparatus comprises a processor 993, and memory, 994. The computing apparatus also includes a network interface 997 for communication with other computing apparatus, for example with vehicle control devices 20 of embodiments." *Wherein the gap-filling message is sent by the computing apparatus via the network interface. Additionally see at least Figure 1 via S104 and also at least ¶0060 via "At S104, an excerpt of the augmented dataset is transmitted to the vehicle 2. The augmented dataset is filtered or triaged to select an excerpt that is relevant to the subject vehicle, and that excerpt is transmitted via the above-described data communication link between the vehicle 2 and the computing apparatus 10 executing the method. Transmitting an excerpt prevents transmission bandwidth being wasted with irrelevant data.")

However, Chaton does not explicitly disclose the optically sensed information including at least optical image data, and the sending of the optical image data to the OBU that is unavailable to the given OBU. Nevertheless, Oshida--who is directed towards vehicle control--discloses:

the optically sensed information including at least optical image data (See at least ¶0039 via "The sensor component 130 may include one or more radar units, image capture components, sensors, cameras, gyroscopes, accelerometers, scanners (e.g., 2-D scanners or 3-D scanners), or other measurement components." and ¶0012 via "In this way, an assist component 180 may utilize V2V communications or other communications, such as cloud communications to receive information or to detect and inform drivers of objects or obstacles in the roadway, or react accordingly, such as by providing a visual notification or a real time image of the object, including lane level detail location information, or by performing an automatic lane change.")

wherein the gap-filling message provides optically sensed information, (See at least Figure 5, which illustrates Vehicle 590 detecting/imaging a hazard that is not in the line of sight of Vehicle 500A. Also see ¶0137 via "Upon detection of an object or hazard 510, the first vehicle 590 may capture an image of the hazard 510 and determine a lane level location for the hazard 510. In this example, the hazard 510 is associated with lane A. The first vehicle 590 may transmit an image of the hazard 510 and lane level information associated with the hazard 510." as well as ¶0138 via "In other embodiments, the first vehicle 590 may transmit the image or information directly to a second vehicle 500A." Additionally, see Figure 6.)

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify Chaton in view of Oshida in order to determine if corrective action is needed: "In any event, after the image or location information of the reported hazard is received, the second vehicle may perform lane level matching to determine a current lane location or lane position for the second vehicle. If this lane level matching determined that the second vehicle is in the same lane as the reported obstacle, corrective action may be taken 650 based on the lane position of the second vehicle, one or more available lanes, a current traffic situation, other detected obstacles, a contour of the roadway, etc." [Oshida ¶0143], which would allow for better hazard/collision avoidance as the vehicle can sooner obtain the information that would otherwise be unavailable.

However, modified Chaton does not explicitly disclose the receiving of contextual information from a UE associated with a VRU. Nevertheless, Jha--who is directed towards a collective perception service--discloses:

receiving first contextual information from (See at least Figure 12, which depicts VRU 1216 carrying respective UE 1210v, and wireless communication links such as 1220v, which shows the communication link between the UE and the network access node (NAN) 1230, illustrating the transmission from the VRU/UE to the network. Furthermore, see at least ¶0064, which describes that the location/spatial information (contextual information) is shared: "The information regarding the location and dynamic state of the perceived object is provided in a coordinate system that is used for the description of the object's state variables in case of a vehicle sharing information about a detected object (see e.g., [ISO8855] and [TS103324]). In case an R-ITS-S 1230 is disseminating the CPM 100, the reference position refers to the reference position as defined in [CEN-ISO/TS19091] and/or [TS103324] (e.g., an arbitrary point on the intersection)." as well as ¶0076, which describes the information from the VRU/UE via "The object class "groupSubClass" is used to report a VRU group or cluster. A VRU group contains a set of VRUs (e.g., VRU 1216, 1210v) perceived by the ITS-S generating the CPM 100")

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify Modified Chaton's apparatus that provides spatial awareness to an on-board unit of a vehicle in view of incorporating contextual data that is received from a vulnerable road user/user equipment in order to support and supplement Modified Chaton's central computing system that merges information from multiple data sources (vehicles, smart connected things, etc.) by providing an additional source of information to yield improved environmental perception and increased precision when navigating on the road: "This allows CPS-enabled ITS-Ss to enhance their environmental perception not only regarding non-V2X-equipped road users and drivable regions, but also increasing the number of information sources for V2X-equipped road users. A higher number of independent sources generally increases trust and leads to a higher precision of the environmental perception" [Jha ¶0023], as well as to further improve the collision avoidance: "by comparing the status of the detected road user or received object information, the receiving ITS-S sub-system is able to estimate the collision risk with such a road user or object and may inform the user via the HMI of the receiving ITS sub-system or take corrective actions automatically" [Jha ¶0026]. Furthermore, although Chaton discloses that the system still operates in an example where VRUs may not have connectivity, this is merely an illustrative example and does not prohibit the inclusion of VRU/UEs.
Chaton's disclosure emphasizes receiving the contextual information from a plurality of smart objects, which is further why it would be obvious to utilize the additional information when available (in cases where the VRU/UE can be connected) in order to supplement the perception when addressing blind spots in the environment.

Regarding Claim 29, Chaton discloses:

A non-transitory computer-readable apparatus comprising a storage medium, the storage medium comprising a plurality of instructions configured to, when executed by one or more processors, cause an apparatus to: (See at least Figure 12 and also ¶0100 via "The memory 994 may include a computer readable medium, which term may refer to a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) configured to carry computer-executable instructions or have data structures stored thereon.") (Regarding the instructions: see the Claim 1 rejection, because the steps are the same.)

Regarding Claims 2 and 26 respectively: Modified Chaton discloses the method of Claim 1 and the apparatus of Claim 24. Furthermore, Chaton discloses:

wherein the gap-filling message comprises at least a portion of a difference between the first contextual information from the plurality of OBUs and the second contextual information known to the given OBU (See at least ¶0055 via "The second set of objects may be partially overlapped by the first set of objects (in terms of membership), and partially exclusive thereof: in this case, the second sensor data may augment the first sensor data by providing location information about an object about which the vehicle is not already aware". Also see at least ¶0086-¶0087 and Figure 7.)

Regarding Claim 3, Modified Chaton discloses the method of Claim 2. Furthermore, Chaton discloses:

wherein the at least the portion of the difference is representative of occlusion information relating to at least one object that is not in a field of vision of a camera of the vehicle (See at least ¶0061 via "Step S103" and "Blind spot completion". Additionally see at least ¶0077 via "The triangulator 66 operates in collaboration with the collective knowledge optimizer 62 to identify gaps, for example, blind spots, in realtime sensor data from a vehicle 2, and to fill those gaps with data from sensors such as stationary sensors or sensors from other vehicles, the data filling the gap may be communicated to the vehicle 2 (or its vehicle computing device 20).". Also see at least ¶0086-¶0087 and Figure 7.)

Regarding Claims 16, 25, and 30 respectively: Modified Chaton discloses the apparatus of Claim 15, the apparatus of Claim 24, and the non-transitory computer-readable apparatus of Claim 29. Furthermore, Chaton discloses:

wherein the gap-filling message comprises at least a portion of a difference between the first contextual information from the plurality of OBUs and the second contextual information known to the given OBU, (See at least ¶0055 via "The second set of objects may be partially overlapped by the first set of objects (in terms of membership), and partially exclusive thereof: in this case, the second sensor data may augment the first sensor data by providing location information about an object about which the vehicle is not already aware". Also see at least ¶0086-¶0087 and Figure 7.)

and the at least the portion of the difference is representative of occlusion information relating to at least one object that is not in a field of vision of a camera associated with the given OBU (See at least ¶0061 via "Step S103" and "Blind spot completion". Additionally see at least ¶0077 via "The triangulator 66 operates in collaboration with the collective knowledge optimizer 62 to identify gaps, for example, blind spots, in realtime sensor data from a vehicle 2, and to fill those gaps with data from sensors such as stationary sensors or sensors from other vehicles, the data filling the gap may be communicated to the vehicle 2 (or its vehicle computing device 20).". Also see at least ¶0086-¶0087 and Figure 7.)

Regarding Claims 4 and 17 respectively: Modified Chaton discloses the method of Claim 1 and the apparatus of Claim 15. Furthermore, Chaton discloses:

wherein: receiving the first contextual information from the plurality of OBUs comprises receiving the first contextual information from the plurality of OBUs located within a region; and (See at least ¶0086-¶0088 and Figures 7-8, which illustrate a spatial region--an intersection--with a pedestrian 4 and a plurality of OBUs of vehicles 2, 3, and 1. For example, in ¶0087: "Vehicles 1 and 2 are connected to the computing apparatus 10 via a wireless data communication. Vehicle 1 will share its knowledge of the existence of objects 3 & 4 (including one or more from among classification, position, trajectory, etc.) while vehicle 2 shares its knowledge of object 3 only, because object 4 is in a blind spot from the perspective of vehicle 2." *Wherein the contextual data is received from the plurality of OBUs within a region)

generating one or more region-specific gap-filling messages based on a region-specific set of contextual information associated with the plurality of OBUs located within the region, (See at least ¶0086-¶0088. Specifically see ¶0087 via "This augmented knowledge is analysed by the computing apparatus 10 (for example, by the collective knowledge optimiser 62) to identify the blind spot in the vehicle 2 vision field caused by object 3, and will dispatch the relevant excerpt of augmented data to vehicle 2 immediately, hence avoiding a collision with the pedestrian." *Wherein the relevant excerpt dispatched to vehicle 2 is the gap-filling message that is specific to the region the OBUs are located in--the intersection--based on information gathered from the plurality of OBUs in the region)

the region-specific set of contextual information being at least a subset of the set of contextual information (See at least ¶0086-¶0088 and specifically at least ¶0087 via "dispatch the relevant excerpt of augmented data to vehicle 2 immediately" *Wherein the relevant excerpt is a subset of the overall contextual information received by the computing apparatus 10. Furthermore see at least ¶0060 via "The augmented dataset is filtered or triaged to select an excerpt that is relevant to the subject vehicle, and that excerpt is transmitted via the above-described data communication link between the vehicle 2 and the computing apparatus 10 executing the method. Transmitting an excerpt prevents transmission bandwidth being wasted with irrelevant data" *which illustrates why the vehicle is only sent a portion of the overall contextual data received by the computing apparatus from the plurality of OBUs--the data is sent to fill the gaps of information which the given vehicle lacks, while avoiding redundant information)
Regarding Claim 6, Modified Chaton discloses the method of Claim 1. Furthermore, Chaton discloses:

wherein the spatially sensed information is indicative of a location of one or more objects within an environment of the given OBU based on radio frequency (RF) sensing (See at least ¶0067 via "The sensors 28 may include one or more from among: a LIDAR 281, a RADAR 282, camera 283, and any other sensor providing information indicating the location and/or position and/or velocity and/or acceleration of the vehicle 2 or any other object within the physical domain 200")

Regarding Claims 7, 19, and 27 respectively: Modified Chaton discloses the method of Claim 1, the apparatus of Claim 15, and the apparatus of Claim 24. Furthermore, Chaton discloses:

sensed information associated with one or more objects within an environment of the given OBU, (See at least Figure 5 and at least ¶0071 via "The vehicle control device 20, for example, at the object contact predictor 221, is configured to use the excerpt of augmented data to identify any object having a location or predicted path contemporaneously within a defined contact threshold distance of the predicted path of the vehicle 2." and also at least ¶0069 via "The sensors 281 to 284, and the associated processing functions 221 to 226, analyse the measured sensor data to infer motion of dynamic physical objects (via the motion path identifier 225), location of physical objects (via the location identifier), and classification of objects (via the object recognition function 225), and to use the inferred information in the controller 226.")

location information corresponding to one or more of the plurality of OBUs, (See at least ¶0063 via "The location of the vehicle 2 may be metadata attached to the first sensor data")

object information associated with one or more objects within an environment associated with one or more of the plurality of OBUs, (See at least Figure 5 and at least ¶0071 via "The vehicle control device 20, for example, at the object contact predictor 221, is configured to use the excerpt of augmented data to identify any object having a location or predicted path contemporaneously within a defined contact threshold distance of the predicted path of the vehicle 2." *Wherein the objects include other smart objects and are thus associated with one or more of the plurality of OBUs)

occlusion information associated with one or more cameras of one or more vehicles, (See at least ¶0086-¶0088 and Figures 7-8, which illustrate a spatial region with a pedestrian 4 and a plurality of OBUs of vehicles 2, 3, and 1. Specifically in ¶0087: "Vehicles 1 and 2 are connected to the computing apparatus 10 via a wireless data communication. Vehicle 1 will share its knowledge of the existence of objects 3 & 4 (including one or more from among classification, position, trajectory, etc.) while vehicle 2 shares its knowledge of object 3 only, because object 4 is in a blind spot from the perspective of vehicle 2." *Wherein Figures 7-8 illustrate vehicle 2's occluded field of view)

direction information associated with one or more of the plurality of OBUs, (See at least ¶0056 via "The location information that is represented in the first and second sensor data may be one or more from among: object location; object speed; object velocity; object direction; object acceleration" *Wherein the object includes other smart objects/vehicles that include OBUs)

capability information associated with one or more of the plurality of OBUs, or a combination thereof (See at least ¶0056 via "The location information that is represented in the first and second sensor data may be one or more from among: object location; object speed; object velocity; object direction; object acceleration. Each represented property may be accompanied by an indication of confidence or accuracy." *Wherein a confidence level is a measure of a sensor's capability to accurately sense an object)

Regarding Claims 9, 21, and 28 respectively: Modified Chaton discloses the method of Claim 1, the apparatus of Claim 15, and the apparatus of Claim 24. Furthermore, Chaton discloses:

wherein the first contextual information comprises a Basic Safety Message (BSM), a Personal Safety Message (PSM), a Collective Perception Message (CPM), a Sensor Data Sharing Message (SDSM), or a combination thereof (Per applicant's specification ¶0037: "a BSM may include information about vehicle status, such as speed, position, steering wheel angle, acceleration, heading (direction), path history, and/or vehicle type" and also per applicant's specification ¶0037: "A CPM may include information about the OBU (e.g., position, heading), information about the vehicle (e.g., sensor information), and information about perceived objects (e.g., position, speed, dimensions)". Furthermore, Chaton discloses in at least ¶0083 via "Each connected sensing object provides, (potentially in real time), accurate information about its current status and a complete list of its surrounding detected objects with their estimated location (GPS or angle/distance), estimated relative speed and acceleration along with confidence and/or accuracy levels. In addition, it may provide other meta data such as classification of detected objects (e.g., vehicle, person, street furniture, etc.), their size (3D), description (e.g., truck, car, motorbike, child, senior citizen, traffic light, road sign, etc.), detailed content (model of car, school child, baby stroller, speed limit road sign, etc.)". Additionally, Chaton discloses the SDSM in at least ¶0079: "Steps S600 to S602 are performed at the subject vehicle 2 (i.e. the autonomous vehicle whose sensor data is augmented). At S600, sensor data is acquired from the surrounding environment by vehicle-mounted sensors as first sensor data.")

Regarding Claims 10 and 22 respectively: Modified Chaton discloses the method of Claim 1 and the apparatus of Claim 15. Furthermore, Chaton discloses:

further comprising determining, based on at least a portion of the received first contextual information, a visual occlusion associated with the given OBU; wherein the gap-filling message sent to the given OBU comprises information that compensates for the visual occlusion associated with the given OBU (See at least ¶0086-¶0088 and Figures 7-8, which illustrate a spatial region with a pedestrian 4 and a plurality of OBUs of vehicles 2, 3, and 1. For example, in ¶0087: "Vehicles 1 and 2 are connected to the computing apparatus 10 via a wireless data communication. Vehicle 1 will share its knowledge of the existence of objects 3 & 4 (including one or more from among classification, position, trajectory, etc.) while vehicle 2 shares its knowledge of object 3 only, because object 4 is in a blind spot from the perspective of vehicle 2. This augmented knowledge is analysed by the computing apparatus 10 (for example, by the collective knowledge optimiser 62) to identify the blind spot in the vehicle 2 vision field caused by object 3, and will dispatch the relevant excerpt of augmented data to vehicle 2 immediately, hence avoiding a collision with the pedestrian." *Wherein the excerpt compensates for the visual occlusion as it allows the given vehicle 2 to maneuver with knowledge of the pedestrian 4, thus reducing the risk of collision)

Regarding Claims 11 and 23 respectively: Modified Chaton discloses the method of Claim 1 and the apparatus of Claim 15. Furthermore, Chaton discloses:

further comprising generating a map of an environment associated with the plurality of OBUs based on the gap-filling message (See at least ¶0065 via "The augmented dataset is stored on a memory 14. The augmented dataset may be in the form of, for example, a dynamic map. The data triage 64 is configured to perform the composition and transmission of the excerpt of augmented data in S104." and also at least ¶0077 via "The dynamic map 68 may use the location information of sensed objects, along with predicted paths for those objects, to maintain a dynamic 4-dimensional map which projects motion of objects into the future based on their sensed locations and predicted motion paths.")

Regarding Claim 12, Modified Chaton discloses the method of Claim 11. Furthermore, Chaton discloses:

further comprising: receiving subsequent contextual information from at least one of the plurality of OBUs; and updating the map of the environment associated with the plurality of OBUs based on the subsequent contextual information (See at least ¶0077 via "The augmented data is stored, for example, in the form of a dynamic map 68. The dynamic map 68 may use the location information of sensed objects, along with predicted paths for those objects, to maintain a dynamic 4-dimensional map which projects motion of objects into the future based on their sensed locations and predicted motion paths." *Wherein the subsequent contextual information is the information of sensed objects, and the updating of the map is the maintaining of the dynamic 4-dimensional map)

Regarding Claim 13, Modified Chaton discloses the method of Claim 1. However, although Chaton discloses Figure 12, which has a display 995 included in the computing apparatus (which could be the vehicle control device: ¶0098 via "The vehicle control device 20 may be a computing device of the form shown in Figure 12"), Chaton does not explicitly disclose sending visual information to the OBU.
Nevertheless, Oshida discloses:

further comprising sending, to the given OBU, visual information configured to enable display of an indication of location information corresponding to the given OBU, an indication of visual occlusion associated with the given OBU, or a combination thereof (See at least Figure 6 and also ¶0137 via "Upon detection of an object or hazard 510, the first vehicle 590 may capture an image of the hazard 510 and determine a lane level location for the hazard 510. In this example, the hazard 510 is associated with lane A. The first vehicle 590 may transmit an image of the hazard 510 and lane level information associated with the hazard 510." as well as ¶0138 via "In other embodiments, the first vehicle 590 may transmit the image or information directly to a second vehicle 500A." Additionally, see ¶0127 via "In one or more embodiments, vehicle components, such as the interface component 170 or the operation component 120 may be utilized to provide a notification to the driver of the second vehicle, such as by illuminating LEDs while rendering an image of the hazard on the display portion of the interface component 170 of the second vehicle")

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify Chaton in view of Oshida's displaying of visual data in order to help determine if corrective action is needed: "In any event, after the image or location information of the reported hazard is received, the second vehicle may perform lane level matching to determine a current lane location or lane position for the second vehicle. If this lane level matching determined that the second vehicle is in the same lane as the reported obstacle, corrective action may be taken 650 based on the lane position of the second vehicle, one or more available lanes, a current traffic situation, other detected obstacles, a contour of the roadway, etc." [Oshida ¶0143], which would allow for better hazard/collision avoidance as the vehicle or driver can sooner obtain the information that would otherwise be unavailable.

Claims 5 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chaton (EP 3761285 A1, IDS), Oshida et al. (US 20160071418 A1), and Jha et al. (US 20230110467 A1) in view of Graefe et al. (US 20190132709 A1, IDS).

Regarding Claims 5 and 18 respectively, Modified Chaton discloses the method of Claim 4 and the apparatus of Claim 17. Furthermore, Chaton discloses the region-specific gap-filling message (See at least ¶0086-¶0088 and Figures 7-8). However, Modified Chaton does not explicitly disclose broadcasting/multicasting a region-specific message to multiple OBUs within the region. Nevertheless, Graefe--who is directed towards sensor network enhancement mechanisms--discloses:

further comprising multicasting or broadcasting the one or more (See at least ¶0070 via "In some embodiments, the messaging subsystem 307 generates an object list indicating all observed objects 64 in the coverage area 63, which is then broadcasted/multicast to all observed objects 64 in the coverage area 63")

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify the method and apparatus disclosed by Modified Chaton in view of the broadcasts/multicasts based on the specific region or area such as in Graefe, in order to provide information that is relevant to the OBUs/vehicles because they are in the specific area/region, wherein "using broadcast/multicast technologies may allow the infrastructure equipment 61 to reduce communication/signaling overhead" [Graefe ¶0070].

Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chaton (EP 3761285 A1, IDS), Oshida et al. (US 20160071418 A1), and Jha et al. (US 20230110467 A1) in view of Sathyanarayana et al. (US 20180365533 A1).

Regarding Claims 8 and 20 respectively: Modified Chaton discloses the method of Claim 7 and the apparatus of Claim 19. However, Modified Chaton does not explicitly disclose the capability information as sensor parameter information. Nevertheless, Sathyanarayana--who is directed towards a system and method for contextualized vehicle operation determination--discloses:

wherein the capability information associated with one or more of the plurality of OBUs comprises one or more camera parameters, one or more radio frequency (RF) sensor parameters, or a combination thereof (See at least ¶0014 via "…the onboard vehicle system preferably includes at least one outward facing (e.g., exterior-facing) camera 211…" and also "One or more intrinsic parameters for each camera (e.g., focal length, skew coefficient, pixel skew, image sensor format, principal point (e.g., optical center), distortion (e.g., radial distortion, tangential distortion) can be known (e.g., determined from the manufacturer, calibrated, etc.), estimated, or otherwise determined. One or more extrinsic parameters for each camera (e.g., pose, rotation, translation, heading, camera center position, etc. relative to a housing, another camera, a global reference point, a vehicle reference point, etc.) can be known (e.g., determined from the manufacturer, calibrated, etc.), estimated, or otherwise determined.")

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify the method and apparatus disclosed by Modified Chaton in view of the determination of the sensor parameters as disclosed in Sathyanarayana in order to improve the "associat[ion of] external driving conditions with intrinsic vehicle operation" [Sathyanarayana ¶0011], because it is important to know the parameters of the sensors being used in order to accurately make decisions when operating a vehicle.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Chaton (EP 3761285 A1, IDS), Oshida et al. (US 20160071418 A1), and Jha et al. (US 20230110467 A1) in view of Filippou et al. (US 20230074288 A1).

Regarding Claim 14, Modified Chaton discloses the method of Claim 1. Furthermore, Modified Chaton discloses the gap-filling message (See at least flowchart Figure 1 via S104: "transmit excerpt"). However, Modified Chaton does not explicitly disclose a subscription service or a request. Nevertheless, Filippou--who is directed towards V2X services for providing journey-specific quality-of-service predictions--discloses:

wherein sending the (See at least ¶0159 via "The AMF 1412 provides communication and reachability services for other NFs and it may allow subscriptions to receive notifications regarding mobility events" and also at least ¶0129 via "In the procedures of FIG. 11, a service consumer (e.g., a VIS consumer such as a MEC app or a MEC platform) sends a request for information of a particular vUE (e.g., vUEs 101, 201, 401, 701, 801, 901 discussed previously). In response to the request, a service provider, which is a VIS (e.g., VIS 280 of FIG. 2, VIS provided by V2X APIs 9402 and 9412 of FIG. 9, and the like) generates a response including the requested information, and sends the response to the service consumer")

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the given invention to modify the method disclosed by Modified Chaton in view of the subscription service and request from the OBU/user of a vUE (vehicle user equipment) in order to address "technical challenges related to security, processing/computing resources, network resources, service availability and efficiency, among many other issues". Utilizing a request-based or subscription-based service allows for efficient distribution of services, which is less computationally costly than providing service to all eligible OBUs/vehicles.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAYLA RENEE DOROS, whose telephone number is (703) 756-1415. The examiner can normally be reached M-F (8-5) EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Lin, can be reached on (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.R.D./
Examiner, Art Unit 3657

/ABBY LIN/
Supervisory Patent Examiner, Art Unit 3657
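The claim-1 limitation at the center of this rejection, a set of contextual information that unions data from many OBUs while excluding what the given OBU already knows, is essentially a fuse-then-difference operation (compare Chaton's merge at S103 and excerpt at S104). The sketch below is only an illustrative reading of that claim language as characterized in this Office action; the types, field names, and de-duplication key are hypothetical, not the applicant's implementation or any reference's code.

```python
from dataclasses import dataclass

# Hypothetical observation record; fields are illustrative only.
@dataclass(frozen=True)
class Observation:
    object_id: str      # identity used for duplicate removal across OBUs
    kind: str           # e.g., "pedestrian", "vehicle"
    position: tuple     # (x, y) in some shared reference frame

def build_gap_filling_message(reports: dict[str, set[Observation]],
                              target_obu: str) -> set[Observation]:
    """Fuse all OBU reports, then keep only what the target OBU lacks.

    Mirrors the claim language: the fused set is a union over the
    plurality of OBUs, and the message excludes (does not overlap with)
    the contextual information already known to the given OBU.
    """
    fused = set().union(*reports.values())   # merge with de-duplication (cf. S103)
    known = reports.get(target_obu, set())   # what the target already perceives
    return fused - known                     # the gap-filling excerpt (cf. S104)

# Usage: "obu2" cannot see the pedestrian occluded from its viewpoint,
# but "obu1" can, so the message to obu2 carries exactly that object.
reports = {
    "obu1": {Observation("ped4", "pedestrian", (3.0, 9.0)),
             Observation("veh3", "vehicle", (5.0, 2.0))},
    "obu2": {Observation("veh3", "vehicle", (5.0, 2.0))},
}
print(build_gap_filling_message(reports, "obu2"))
# -> {Observation(object_id='ped4', kind='pedestrian', position=(3.0, 9.0))}
```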

Prosecution Timeline

Jul 25, 2023: Application Filed
Apr 09, 2025: Non-Final Rejection — §103
Jul 07, 2025: Response Filed
Aug 13, 2025: Final Rejection — §103
Oct 24, 2025: Response after Non-Final Action
Nov 05, 2025: Request for Continued Examination
Nov 16, 2025: Response after Non-Final Action
Jan 29, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602048
TRAVEL ROUTE GENERATION METHOD FOR AUTONOMOUS VEHICLE AND CONTROL APPARATUS FOR AUTONOMOUS VEHICLE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12576840
VEHICLE CONTROL DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12570012
ROBOT SYSTEM AND METHOD FOR CREATING VISUAL RECORD OF TASK PERFORMED IN WORKING AREA
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566451
Interactive Detection of Obstacle Status in Mobile Robots
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12544925
ROBOT CONTROL SYSTEM, ROBOT CONTROL METHOD AND PROGRAM
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 76% (+2.8%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
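The headline projections compose by simple arithmetic, assuming (per the note above) that grant probability equals the career allow rate and that the interview lift is additive in percentage points:

```python
granted, resolved = 19, 26              # from the examiner's career data
base = 100 * granted / resolved         # 73.08 -> reported as 73%
lift = 2.8                              # interview lift, percentage points
print(round(base), round(base + lift))  # -> 73 76
```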
