Prosecution Insights
Last updated: April 19, 2026
Application No. 18/136,119

DETECTION AND COMMUNICATION SYSTEM, CONTROL APPARATUS, AND DETECTION SYSTEM

Final Rejection §103
Filed: Apr 18, 2023
Examiner: QUIGLEY, KYLE ROBERT
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Huawei Technologies Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 54% (Moderate)
OA Rounds: 3-4
To Grant: 3y 10m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 54% (254 granted / 466 resolved; -13.5% vs TC avg)
Interview Lift: strong, +32.7% (resolved cases with interview vs. without)
Avg Prosecution: 3y 10m (typical timeline; 72 applications currently pending)
Total Applications: 538 (career history, across all art units)

Statute-Specific Performance

§101: 20.7% (-19.3% vs TC avg)
§103: 43.7% (+3.7% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 19.9% (-20.1% vs TC avg)
Deltas are vs. Tech Center average estimates • Based on career data from 466 resolved cases
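The headline percentages above follow from the reported counts; a quick sanity check (the implied Tech Center average is derived here from the -13.5% delta, and is an inference, not a figure stated in the data):

```python
# Reproduce the headline examiner statistics from the raw counts above.
granted, resolved = 254, 466

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 54.5%, displayed as 54%

# The dashboard reports the examiner at -13.5% vs the Tech Center average,
# so the implied TC-average allow rate is:
tc_avg = allow_rate + 0.135
print(f"Implied TC average: {tc_avg:.1%}")      # about 68.0%
```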

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The rejections from the Office Action of 7/9/2025 are hereby withdrawn. New grounds for rejection are presented below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 17-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Krill et al., Multifunction Array Lidar Network for Intruder Detection, Tracking, and Identification, IEEE, 2010 [hereinafter “Krill”] and Bon et al. (US 20120127025 A1) [hereinafter “Bon”].

Regarding Claim 17, Krill discloses a detection system, comprising: N radars, configured to separately search for a target, wherein N is an integer greater than 1 [Fig.
4, nodes N1 and N2], wherein a radar that is in the N radars and that finds the target is configured to feed back first information to a control apparatus, and the first information indicates a first distance between the target and the radar that finds the target; and a radar that is in the N radars and that does not find the target is configured to: receive a first instruction from the control apparatus, and adjust, based on the first instruction, searching in a region in which the target is located [See Fig. 4 and Page 45, particularly – “The monitor and neighboring nodes (e.g., node N2) receive periodic track updates from the tracking node, N1, once per second. This cueing information, together with a priori knowledge of the node positions and their respective sensor orientations, is then used for the track handover process. The neighboring node N2 monitors the track, and when the intruder is determined to be sufficiently close for possible detection, an autonomous attempt is made to acquire the target.”]. Krill fails to disclose that the radar that is in the N radars and that does not find the target is configured to: report a current searching region of the radar that is in the N radars and that does not find the target to the control apparatus, receive a deflection amount, and adjust searching based on the deflection amount. However, Bon discloses the determination of a deflection amount of a radar from a desired target [See Fig. 3 and Paragraph [0038], the determination of Δθ between pointing direction 303 and target 301.]. It would have been obvious to determine such a deflection in adjusting searching for a target [Krill discloses the handoff of “intruder coordinates” in the 1st column of Page 15] in order to effectively detect the target with another radar system.
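Bon is relied on only for the notion of a deflection amount between a radar's pointing direction and a target. As a rough, non-authoritative illustration of that notion (a 2-D simplification; the function name and geometry are assumptions, not Bon's disclosed method):

```python
import math

def deflection_amount(radar_pos, radar_heading_deg, target_pos):
    """Illustrative 2-D deflection amount: the signed angle between a
    radar's current pointing direction and the bearing to the target
    (cf. the deflection between pointing direction 303 and target 301
    in Bon, Fig. 3)."""
    dx = target_pos[0] - radar_pos[0]
    dy = target_pos[1] - radar_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))   # direction radar-to-target
    dtheta = bearing - radar_heading_deg         # raw angular difference
    return (dtheta + 180.0) % 360.0 - 180.0      # normalize to (-180, 180]

# Example: radar at the origin pointing due east (0 deg), target to the north-east.
print(deflection_amount((0.0, 0.0), 0.0, (1.0, 1.0)))  # approximately 45.0
```

The receiving radar would then slew by this amount before resuming its search, which is the role the deflection amount plays in the rejection's rationale.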
Regarding Claim 18, Krill discloses that the detection system further comprises the control apparatus; and the control apparatus is configured to: receive the first information from the radar that finds the target; determine, based on the first information, the region in which the target is located; and generate the first instruction based on the region in which the target is located, and send the first instruction to the radar that does not find the target [See Fig. 4 and Page 45, particularly – “The monitor and neighboring nodes (e.g., node N2) receive periodic track updates from the tracking node, N1, once per second. This cueing information, together with a priori knowledge of the node positions and their respective sensor orientations, is then used for the track handover process. The neighboring node N2 monitors the track, and when the intruder is determined to be sufficiently close for possible detection, an autonomous attempt is made to acquire the target.”]. Krill fails to disclose that the control apparatus is configured to: receive the current searching region of the radar that is in the N radars and that does not find the target to the control apparatus; determine, based on the region in which the target is located and the current searching region of the radar that is in the N radars and that does not find the target to the control apparatus, a deflection amount; and generate the first instruction including the deflection amount. However, Bon discloses the determination of a deflection amount of a radar from a desired target [See Fig. 3 and Paragraph [0038], the determination of Δθ between pointing direction 303 and target 301.]. It would have been obvious to determine such a deflection in adjusting searching for a target [Krill discloses the handoff of “intruder coordinates” in the 1st column of Page 15] in order to effectively detect the target with another radar system.
Regarding Claim 19, Krill discloses that each of the N radars corresponds to one feedback control component [See Fig. 4 and Page 45, particularly – “The monitor and neighboring nodes (e.g., node N2) receive periodic track updates from the tracking node, N1, once per second. This cueing information, together with a priori knowledge of the node positions and their respective sensor orientations, is then used for the track handover process. The neighboring node N2 monitors the track, and when the intruder is determined to be sufficiently close for possible detection, an autonomous attempt is made to acquire the target.”]; and the feedback control component is configured to: receive second information from the radar that finds the target, wherein the second information indicates a location relationship between the target and a central region of an electromagnetic wave transmitted by the radar that finds the target [Page 45, first column – “track state centered on the intruding object”; Page 47, second column – “During each second from then on, a track-update beam is scheduled at the center of the detected beam pattern.” See Fig. 4.]; and generate a control instruction based on the second information, wherein the control instruction is used to instruct the central region of the electromagnetic wave transmitted by the radar that finds the target to align with the target [See Fig. 4 and Page 45, particularly – “The monitor and neighboring nodes (e.g., node N2) receive periodic track updates from the tracking node, N1, once per second. This cueing information, together with a priori knowledge of the node positions and their respective sensor orientations, is then used for the track handover process. The neighboring node N2 monitors the track, and when the intruder is determined to be sufficiently close for possible detection, an autonomous attempt is made to acquire the target.”].

Claim(s) 1, 2, 7-11, 13, 15-16, and 20 is/are rejected under 35 U.S.C.
103 as being unpatentable over Beguni et al., Toward a mixed visible light communications and ranging system for automotive applications, IEEE, 2019 [hereinafter “Beguni”]; Zhenjiu et al., A Modified Sequential Multilateration Scheme and Its Application in Geometric Error Measurement of Rotary Axis, Elsevier, 2015 [hereinafter “Zhenjiu”]; Krill et al., Multifunction Array Lidar Network for Intruder Detection, Tracking, and Identification, IEEE, 2010 [hereinafter “Krill”]; and Bon et al. (US 20120127025 A1) [hereinafter “Bon”].

Regarding Claim 1, Beguni discloses a detection and communication system [Abstract – “Thus, vehicular Visible Light Communications and Visible Light Ranging systems recently developed by our research groups are presented in order to set the path toward designing a mixed visible light communications and ranging system for automotive applications based on standard car lighting system.”], comprising: detectors/communicators [See Fig. 2 – “Fig. 2. OWC in vehicular applications: VLC are used to enable inter-vehicle data exchange, whereas VLR is used to determine the distances between the vehicles in a platoon.” Page 3, first column contemplates the use of FSOC lasers and LiDAR – “As opposed to standard VLC which uses diffuse light in order to fulfill the lighting function and to comply with the eye safety norms, FSOC often use narrow beam Laser Diode (LD) emission. Thus, long distances are further stimulated, and so, communication distances of few thousands of km can be achieved [14]. In addition to communications purposes, optical wireless technologies are also suitable for localization and ranging applications. Thus, LiDAR is the most representative example for such an application. Similar with the Radar, LiDAR uses an emitted energy (i.e. usually narrow beam IR) which is reflected when it reaches an obstacle. Based on the analysis of the reflected beam (i.e.
time of flight, angle of arrival, received signal strength), the system is able to determine the size of the object and also the distance to it. LiDAR applications include high accuracy 3D mapping in different domains, airborne surface scanning, or obstacle detection and localization in autonomous vehicles [3].”]. Although Beguni discusses triangulation [Page 4, first column – “Wu et al. proposed the first vehicular visible light ranging (VLR) using the reflection from target vehicle of a light beam emitted by an infrared laser diode and received by a CMOS image sensor [25]. Based on angles of transmission and arrival and the known distance between emitter and receiver, the target position is determined through trigonometric calculations via triangulation method. Experimental tests conducted on a highway at nightfall featured an accuracy less than 1.1% for distance estimates within the range of 5 - 45 m [26]. So, Roberts et al. extended the triangulation concept based on regular vehicular LED taillight as emitter from one vehicle and photodetector sensors on the other vehicle headlight, determining the relative positioning between vehicles [19], [26].”], Beguni fails to disclose that the system comprises: N radars, configured to: separately search for a target, and separately align with the target, wherein N is an integer greater than 1; and a first radar set is configured to communicate with the target, a second radar set is configured to track and point the target, the first radar set is K radars that are in the N radars and that align with the target, the second radar set is at least one radar other than the first radar in the N radars, and K is a positive integer less than N. However, Zhenjiu discloses a manner of using four laser sensors to determine the location of a point [See Fig. 1 and equations (1)/(2) and corresponding text.]. It would have been obvious to use such a type of system (for example, by equipping the communication-enabling light posts in Fig. 
2 of Beguni with laser or LiDAR sensors) to determine the location of vehicles and facilitate laser-based communication with them because Beguni teaches that laser communicators are effective in establishing high speed communications [Page 3, first column – “FSOC are able to provide very high transfer rates which can range from few Gb/s to few hundreds of Gb/s [15], [16]. Based on such features, FSOC can provide support in numerous applications, from spatial and satellite applications to backhaul for cellular networks.”]. Beguni also fails to disclose that a radar that is in the N radars and that finds the target is configured to feed back first information to a control apparatus, and the first information indicates a first distance between the target and the radar that finds the target; the control apparatus configured to: determine, based on the first distance, a region in which the target is located, determine a current searching region of a radar that is in the N radars and that does not find the target, determine, based on the region in which the target is located and the current searching region of the radar that is in the N radars and that does not find the target, a deflection amount, and generate a first instruction including the deflection amount; and the radar that is in the N radars and that does not find the target is configured to: receive the first instruction from the control apparatus, and adjust, based on the deflection amount included in first instruction, to perform searching in the region in which the target is located. However, Krill discloses a system in which a radar that is in N radars and that finds a target [Fig. 
4, nodes N1 and N2] is configured to feed back first information to a control apparatus, and the first information indicates a first distance between the target and the radar that finds the target; the control apparatus configured to: determine, based on the first distance, a region in which the target is located, determine a current searching region of a radar that is in the N radars and that does not find the target, and generate a first instruction; and the radar that is in the N radars and that does not find the target is configured to: receive the first instruction from the control apparatus, and adjust to perform searching in the region in which the target is located [See Fig. 4 and Page 45, particularly – “The monitor and neighboring nodes (e.g., node N2) receive periodic track updates from the tracking node, N1, once per second. This cueing information, together with a priori knowledge of the node positions and their respective sensor orientations, is then used for the track handover process. The neighboring node N2 monitors the track, and when the intruder is determined to be sufficiently close for possible detection, an autonomous attempt is made to acquire the target.”]. It would have been obvious to take such an approach in order to facilitate more quickly detecting objects with other LiDARs of the multi-LiDAR system. Beguni also fails to disclose that the control apparatus configured to: determine, based on the region in which the target is located and the current searching region of the radar that is in the N radars and that does not find the target, a deflection amount, and the radar that is in the N radars and that does not find the target is configured to: generate a first instruction including the deflection amount adjust, based on the deflection amount included in first instruction, to perform searching in the region in which the target is located. 
However, Bon discloses the determination of a deflection amount of a radar from a desired target [See Fig. 3 and Paragraph [0038], the determination of Δθ between pointing direction 303 and target 301.]. It would have been obvious to determine such a deflection in adjusting searching for a target [Krill discloses the handoff of “intruder coordinates” in the 1st column of Page 15] in order to effectively detect the target with another radar system.

Regarding Claim 2, the combination would disclose that coverage regions of electromagnetic waves transmitted by the N radars overlap [See Fig. 1 of Zhenjiu].

Regarding Claim 7, Beguni and Zhenjiu fail to disclose, but Krill does disclose that the N radars are separately configured to: receive an echo signal reflected from a detection region [Page 47, first column – “For detection, we develop a 3D clutter map, storing whether an echo was received (a “1”) or not (a “0”) at each pulse resolution cell for each of the 450,000 beam positions (only partly updated per second, but all ranges and positions updated every 50 seconds).”]; determine, based on the received echo signal, point cloud data corresponding to the detection region [Page 47, second column – “If a significant portion of beam positions in this pattern receives echoes at about the same ranges, a detection is declared.”]; and feed back third information to a control apparatus, wherein the third information comprises the corresponding point cloud data [See Fig. 4 and Page 45, particularly – “The monitor and neighboring nodes (e.g., node N2) receive periodic track updates from the tracking node, N1, once per second. This cueing information, together with a priori knowledge of the node positions and their respective sensor orientations, is then used for the track handover process. The neighboring node N2 monitors the track, and when the intruder is determined to be sufficiently close for possible detection, an autonomous attempt is made to acquire the target.”].
It would have been obvious to take such an approach in order to facilitate more quickly detecting objects with other LiDARs of the multi-LiDAR system.

Regarding Claim 8, the combination fails to disclose that the N radars are further separately configured to: receive a fourth instruction from the control apparatus, wherein the fourth instruction is used to indicate a region in which the target is located; and perform, based on the fourth instruction, searching in the region in which the target is located. However, Krill discloses a LiDAR system where one node uses detection of an object to facilitate the finding of the target by a second node [See Fig. 4 and Page 45, particularly – “The monitor and neighboring nodes (e.g., node N2) receive periodic track updates from the tracking node, N1, once per second. This cueing information, together with a priori knowledge of the node positions and their respective sensor orientations, is then used for the track handover process. The neighboring node N2 monitors the track, and when the intruder is determined to be sufficiently close for possible detection, an autonomous attempt is made to acquire the target.”]. It would have been obvious to take such an approach in order to facilitate more quickly detecting objects with other LiDARs of the multi-LiDAR system.

Regarding Claim 9, the combination fails to disclose that each of the N radars corresponds to one feedback control component; and the feedback control component is configured to: receive fourth information from the radar that finds the target, wherein the fourth information indicates a location relationship between the target and a central region of an electromagnetic wave transmitted by the radar that finds the target; and generate a control instruction based on the fourth information, wherein the control instruction is used to instruct the central region of the electromagnetic wave transmitted by the radar that finds the target to align with the target.
However, Krill discloses a LiDAR system where one node uses detection of an object to facilitate the finding of the target by a second node [See Fig. 4 and Page 45, particularly – “The monitor and neighboring nodes (e.g., node N2) receive periodic track updates from the tracking node, N1, once per second. This cueing information, together with a priori knowledge of the node positions and their respective sensor orientations, is then used for the track handover process. The neighboring node N2 monitors the track, and when the intruder is determined to be sufficiently close for possible detection, an autonomous attempt is made to acquire the target.”]. The tracking of the object is based on the object center [Page 45, first column – “track state centered on the intruding object”; Page 47, second column – “During each second from then on, a track-update beam is scheduled at the center of the detected beam pattern.” See Fig. 4.]. It would have been obvious to take such an approach in order to facilitate more quickly detecting objects with other LiDARs of the multi-LiDAR system.

Regarding Claim 10, the combination would disclose that the second radar set is further configured to: send second information to a control apparatus, wherein the second information indicates a location of the target after movement [See equation (2) of Zhenjiu – “Δlij is the incremental distance of lj when P0 is moved.”].
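The clutter-map detection rule quoted from Krill under Claim 7 (echoes stored as "1"/"0" per beam position; a detection is declared when a significant portion of beam positions return echoes at about the same range) can be sketched as follows. The binning scheme, thresholds, and names here are illustrative assumptions, not Krill's implementation:

```python
from collections import Counter

def detect(echo_ranges, range_bin=5.0, min_fraction=0.5):
    """Toy version of Krill's detection rule: declare a detection when a
    significant fraction of the beam positions in a search pattern return
    echoes at about the same range. `echo_ranges` maps beam position ->
    measured echo range, or None when no echo was received (a '0' in the
    clutter map). Thresholds are illustrative."""
    ranges = [r for r in echo_ranges.values() if r is not None]
    if not ranges:
        return None
    # Quantize ranges into bins so "about the same range" is well defined.
    bins = Counter(round(r / range_bin) for r in ranges)
    best_bin, hits = bins.most_common(1)[0]
    if hits / len(echo_ranges) >= min_fraction:
        return best_bin * range_bin   # detection declared near this range
    return None

pattern = {f"beam{i}": 102.0 + i * 0.5 for i in range(7)}  # clustered echoes
pattern.update({"beam7": None, "beam8": 250.0})            # no echo / outlier
print(detect(pattern))  # 105.0
```

In Krill the declared detection would then seed the track state that is handed over to neighboring nodes; here it simply returns the consensus range.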
Regarding Claim 11, the combination would disclose that the first radar set is further configured to: transmit a first electromagnetic wave to the target in a first time domain, wherein the first electromagnetic wave carries communication information; and transmit a second electromagnetic wave to the target in a second time domain, wherein the second electromagnetic wave is used to determine a first distance between the first radar set and the target [Use of a laser or LiDAR sensor for communication (1st time domain) and ranging (2nd time domain) per Beguni]; and the second radar set is further configured to: transmit a third electromagnetic wave to the target, wherein the third electromagnetic wave is used to determine a first distance between the second radar set and the target [Use of secondary laser or LiDAR sensors for position determination per Zhenjiu].

Regarding Claim 13, Beguni discloses that the detection and communications system further comprises the target [Fig. 2, cars] having a reflection component and the reflection component is configured to: reflect the second electromagnetic wave to obtain a second echo signal, and reflect the third electromagnetic wave to obtain a third echo signal [Fig.
1, bottom right picture, LiDAR reflecting off cars], and embodiments in which the target comprises a lens component [Page 4, second column – “A photodiode enhanced by a focusing lens is used as receiver and is connected to the processing card [30].”], and a third detector [Page 4, second column – “A photodiode enhanced by a focusing lens is used as receiver and is connected to the processing card [30].”], wherein the lens component is configured to converge a received first electromagnetic wave to the third detector [Page 4, second column – “A photodiode enhanced by a focusing lens is used as receiver and is connected to the processing card [30].”]; and the third detector is configured to demodulate the received first electromagnetic wave to obtain the communication information [Page 2, first column – “Every LED light source can be upgraded to become a data broadcasting system by adding an intensity modulation (IM) component, for example. At the receiver side, photosensitive elements (usually PIN photodiodes) are used to extract the data from the incident modulated light based on direct detection (DD).”]. It would have been obvious to use such hardware in order to facilitate visible light communications because doing so would have allowed for enhancing the received signal strength.

Regarding Claim 15, Beguni discloses a control apparatus, comprising a processor, a transmitter, and a receiver [See Fig. 3, vehicle VLC-R system including emitter, receiver, and inherent processor. See the communication-enabling light posts in Fig. 2. See Fig. 2 – “Fig. 2.
OWC in vehicular applications: VLC are used to enable inter-vehicle data exchange, whereas VLR is used to determine the distances between the vehicles in a platoon.” Page 3, first column contemplates the use of FSOC lasers and LiDAR – “As opposed to standard VLC which uses diffuse light in order to fulfill the lighting function and to comply with the eye safety norms, FSOC often use narrow beam Laser Diode (LD) emission. Thus, long distances are further stimulated, and so, communication distances of few thousands of km can be achieved [14]. In addition to communications purposes, optical wireless technologies are also suitable for localization and ranging applications. Thus, LiDAR is the most representative example for such an application. Similar with the Radar, LiDAR uses an emitted energy (i.e. usually narrow beam IR) which is reflected when it reaches an obstacle. Based on the analysis of the reflected beam (i.e. time of flight, angle of arrival, received signal strength), the system is able to determine the size of the object and also the distance to it. LiDAR applications include high accuracy 3D mapping in different domains, airborne surface scanning, or obstacle detection and localization in autonomous vehicles [3].”]. Although Beguni discusses triangulation [Page 4, first column – “Wu et al. proposed the first vehicular visible light ranging (VLR) using the reflection from target vehicle of a light beam emitted by an infrared laser diode and received by a CMOS image sensor [25]. Based on angles of transmission and arrival and the known distance between emitter and receiver, the target position is determined through trigonometric calculations via triangulation method. Experimental tests conducted on a highway at nightfall featured an accuracy less than 1.1% for distance estimates within the range of 5 - 45 m [26]. So, Roberts et al.
extended the triangulation concept based on regular vehicular LED taillight as emitter from one vehicle and photodetector sensors on the other vehicle headlight, determining the relative positioning between vehicles [19], [26].”], Beguni fails to disclose that: the receiver is configured to receive first information from a radar that finds a target, wherein the first information indicates a first distance between the target and the radar that finds the target; and receive a current searching region of a radar that does not find the target; the processor is configured to: determine, based on the first information, a region in which the target is located, and generate a first instruction based on the region in which the target is located; and the transmitter is configured to send the first instruction to a radar that does not find the target, wherein the first instruction is used to instruct the radar that does not find the target to perform searching in the region in which the target is located. However, Zhenjiu discloses a manner of using four laser sensors to determine the location of a point [See Fig. 1 and equations (1)/(2) and corresponding text.]. It would have been obvious to use such a type of system (for example, by equipping the communication-enabling light posts in Fig. 2 of Beguni with laser or LiDAR sensors) to determine the location of vehicles and facilitate laser-based communication with them because Beguni teaches that laser communicators are effective in establishing high speed communications [Page 3, first column – “FSOC are able to provide very high transfer rates which can range from few Gb/s to few hundreds of Gb/s [15], [16]. Based on such features, FSOC can provide support in numerous applications, from spatial and satellite applications to backhaul for cellular networks.”]. Krill discloses a LiDAR system where one node uses detection of an object to facilitate the finding of the target by a second node [See Fig. 
4 and Page 45, particularly – “The monitor and neighboring nodes (e.g., node N2) receive periodic track updates from the tracking node, N1, once per second. This cueing information, together with a priori knowledge of the node positions and their respective sensor orientations, is then used for the track handover process. The neighboring node N2 monitors the track, and when the intruder is determined to be sufficiently close for possible detection, an autonomous attempt is made to acquire the target.”]. It would have been obvious to take such an approach in order to facilitate more quickly detecting objects with other LiDARs of the multi-LiDAR system. Beguni also fails to disclose the receiver is configured to: determine, based on the region in which the target is located and the current searching region of the radar that does not find the target, a deflection amount, and generate a first instruction comprising the deflection amount; and the transmitter is configured to adjust, based on the deflection amount, to perform searching in the region in which the target is located. However, Bon discloses the determination of a deflection amount of a radar from a desired target [See Fig. 3 and Paragraph [0038], the determination of Δθ between pointing direction 303 and target 301.]. It would have been obvious to determine such a deflection in adjusting searching for a target [Krill discloses the handoff of “intruder coordinates” in the 1st column of Page 15] in order to effectively detect the target with another radar system. 
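Zhenjiu is cited throughout for locating a point from four laser distance measurements. A generic least-squares multilateration sketch conveys the principle; this is a standard linearization under assumed names, not Zhenjiu's equations (1)/(2), which are not reproduced in this document:

```python
import numpy as np

def locate(sensors, dists):
    """Least-squares multilateration: recover a point P from its measured
    distances to four (or more) known sensor positions. Expanding
    |P - s_i|^2 = d_i^2 and subtracting the first sphere equation from
    the rest linearizes the system to A @ P = b, solved here in the
    least-squares sense."""
    s = np.asarray(sensors, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (s[1:] - s[0])
    b = (d[0] ** 2 - d[1:] ** 2) + (s[1:] ** 2).sum(axis=1) - (s[0] ** 2).sum()
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Four non-coplanar sensors, as in a four-laser-tracker arrangement.
sensors = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
target = np.array([3.0, 4.0, 5.0])
dists = [np.linalg.norm(target - np.array(s)) for s in sensors]
print(np.round(locate(sensors, dists), 6))  # recovers [3. 4. 5.]
```

With noisy distances, adding more sensors overdetermines the system and the same least-squares solve averages out measurement error, which is the motivation for using four or more trackers.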
Regarding Claim 16, the combination would fail to disclose that: the processor is further configured to determine that a radar that does not align with the target exists in a first radar set, wherein the first radar set is K radars that are in N radars, N is an integer greater than 2, and N-K is an integer greater than or equal to 2; and the transmitter is further configured to transmit a second instruction to a second radar set including at least one radar other than the first radar set in the N radars, wherein the second instruction is used to instruct the second radar set to communicate with the target. However, Zhenjiu discloses a manner of using four laser sensors to determine the location of a point [See Fig. 1 and equations (1)/(2) and corresponding text.]. It would have been obvious to use such a type of system (for example, by equipping the communication-enabling light posts in Fig. 2 of Beguni with laser or LiDAR sensors) to determine the location of vehicles and facilitate laser-based communication with them because Beguni teaches that laser communicators are effective in establishing high speed communications [Page 3, first column – “FSOC are able to provide very high transfer rates which can range from few Gb/s to few hundreds of Gb/s [15], [16]. Based on such features, FSOC can provide support in numerous applications, from spatial and satellite applications to backhaul for cellular networks.”]. Doing so would disclose the use of N radars (4 per Zhenjiu). 
In the event that some of the radars do not align with the communication target (K radars, being less than 4) it would have been obvious to use radars that do align with the communication target (a second radar set) to perform communication because alignment is necessary for the communication scheme of Beguni to function [Page 1, second column – “The most important advantage of OWC is represented by their ability to provide extremely high data rates while using rather simple equipment and an unlicensed spectrum. On the downside, OWC suffer from limitations imposed by communication range and Line-of-Sight (LoS) requirements.”].

Regarding Claim 20, Krill discloses a second radar is configured to track and point the target, the second radar is a radar other than the first radar in the N radars [Fig. 4, node N2], but fails to disclose that a first radar in the N radars is configured to communicate with the target, the first radar is K radars that are in the N radars and that align with the target, and K is a positive integer less than N. However, Beguni discloses a detection and communication system [Abstract – “Thus, vehicular Visible Light Communications and Visible Light Ranging systems recently developed by our research groups are presented in order to set the path toward designing a mixed visible light communications and ranging system for automotive applications based on standard car lighting system.”], comprising detectors/communicators [See Fig. 2 – “Fig. 2. OWC in vehicular applications: VLC are used to enable inter-vehicle data exchange, whereas VLR is used to determine the distances between the vehicles in a platoon.” Page 3, first column contemplates the use of FSOC lasers and LiDAR – “As opposed to standard VLC which uses diffuse light in order to fulfill the lighting function and to comply with the eye safety norms, FSOC often use narrow beam Laser Diode (LD) emission.
Thus, long distances are further stimulated, and so, communication distances of few thousands of km can be achieved [14].In addition to communications purposes, optical wireless technologies are also suitable for localization and ranging applications. Thus, LiDAR is the most representative example for such an application. Similar with the Radar, LiDAR uses an emitted energy (i.e. usually narrow beam IR) which is reflected when it reaches an obstacle. Based on the analysis of the reflected beam (i.e. time of flight, angle of arrival, received signal strength), the system is able to determine the size of the object and also the distance to it. LiDAR applications include high accuracy 3D mapping in different domains, airborne surface scanning, or obstacle detection and localization in autonomous vehicles [3].”]. It would have been obvious to use the detection scheme of Krill in implementing such a communication system in order to be able to reliably communicate with a communication target. Zhenjiu discloses a manner of using four laser sensors to determine the location of a point [See Fig. 1 and equations (1)/(2) and corresponding text.]. It would have been obvious to use such a type of system (for example, by equipping the communication-enabling light posts in Fig. 2 of Beguni with laser or LiDAR sensors) to determine the location of vehicles and facilitate laser-based communication with them because Beguni teaches that laser communicators are effective in establishing high speed communications [Page 3, first column – “FSOC are able to provide very high transfer rates which can range from few Gb/s to few hundreds of Gb/s [15], [16]. Based on such features, FSOC can provide support in numerous applications, from spatial and satellite applications to backhaul for cellular networks.”]. Claim(s) 4-6 and 21-22 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Beguni et al., Toward a mixed visible light communications and ranging system for automotive applications, IEEE, 2019 [hereinafter “Beguni”]; Zhenjiu et al., A Modified Sequential Multilateration Scheme and Its Application in Geometric Error Measurement of Rotary Axis, Elsevier, 2015 [hereinafter “Zhenjiu”]; Krill et al., Multifunction Array Lidar Network for Intruder Detection, Tracking, and Identification, IEEE, 2010 [hereinafter “Krill”]; Bon et al. (US 20120127025 A1) [hereinafter “Bon”]; and Ahmed et al. (US 20120122475 A1) [hereinafter “Ahmed”].

Regarding Claims 4 and 21, Zhenjiu fails to disclose that, if one radar finds the target, the first instruction is used to instruct the radar that does not find the target to perform searching on a spherical surface whose sphere center is the radar that finds the target and whose radius is the first distance. However, Ahmed discloses the determination of the location of an object by drawing circles around receivers based on their corresponding detection radii [See Figs. 4 and 8. Paragraph [0071] – “At step S110, the range between the AP and each of the four MPs and is calculated.” Paragraph [0073] – “At step S120, a set of intersecting points is calculated using the multi-lateration method. The range d_ai between the AP AP_a and the MP MP_i, calculated at step S110, may be used to draw a circle around the position of the MP MP_i.”]. It would have been obvious to take such an approach in determining where to search for a target in order to reduce the amount of searching time. It would have been obvious to extend the circle-drawing to sphere-drawing in order to facilitate searching in three-dimensional space.

Regarding Claim 5, Zhenjiu discloses that N is an integer greater than 2 [Fig. 1, four laser sensors], but fails to disclose that if two radars find the target, the first instruction is used to instruct the radar that does not find the target to perform searching on a junction line of spherical surfaces corresponding to the two radars that find the target, wherein a spherical surface corresponding to the radar that finds the target is a spherical surface whose sphere center is the radar that finds the target and whose radius is the first distance. However, Ahmed discloses the determination of the location of an object by drawing circles around receivers based on their corresponding detection radii to determine intersection points (the intersection points being the equivalent of the recited “junction line”) [See Figs. 4 and 8. Paragraph [0071] – “At step S110, the range between the AP and each of the four MPs and is calculated.” Paragraph [0073] – “At step S120, a set of intersecting points is calculated using the multi-lateration method. The range d_ai between the AP AP_a and the MP MP_i, calculated at step S110, may be used to draw a circle around the position of the MP MP_i.”]. It would have been obvious to take such an approach in determining where to search for a target in order to reduce the amount of searching time. It would have been obvious to extend the circle-drawing to sphere-drawing (in which the intersection points would take the form of a junction line) in order to facilitate searching in three-dimensional space.

Regarding Claim 6, Zhenjiu discloses that N is an integer greater than 3 [Fig. 1, four laser sensors], but fails to disclose that, if at least three radars find the target, the first instruction is used to instruct the radar that does not find the target to perform searching at an intersection point of spherical surfaces corresponding to the at least three radars that find the target, wherein a spherical surface corresponding to the radar that finds the target is a spherical surface whose sphere center is the radar that finds the target and whose radius is the first distance. However, Ahmed discloses the determination of the location of an object by drawing circles around receivers based on their corresponding detection radii [See Figs. 4 and 8. Paragraph [0071] – “At step S110, the range between the AP and each of the four MPs and is calculated.” Paragraph [0073] – “At step S120, a set of intersecting points is calculated using the multi-lateration method. The range d_ai between the AP AP_a and the MP MP_i, calculated at step S110, may be used to draw a circle around the position of the MP MP_i.”]. It would have been obvious to take such an approach in determining where to search for a target in order to reduce the amount of searching time. It would have been obvious to extend the circle-drawing to sphere-drawing in order to facilitate searching in three-dimensional space.

Regarding Claim 22, Zhenjiu discloses that the radar that finds the target comprises a first radar [Fig. 1, four laser sensors], but fails to disclose that the receiver is further configured to: receive second information from a second radar that finds the target, wherein the second information indicates a second distance between the target and the radar that finds the target; and wherein the first instruction is used to instruct the radar that does not find the target to perform searching on a junction line of a first spherical surface corresponding to the first radar and a second spherical surface corresponding to the second radar, the first spherical surface is a spherical surface whose sphere center is the first radar that finds the target and whose radius is the first distance, and the second spherical surface is a spherical surface whose sphere center is the second radar that finds the target and whose radius is the second distance. However, Ahmed discloses the determination of the location of an object by drawing circles around receivers based on their corresponding detection radii to determine intersection points (the intersection points being the equivalent of the recited “junction line”) [See Figs. 4 and 8. Paragraph [0071] – “At step S110, the range between the AP and each of the four MPs and is calculated.” Paragraph [0073] – “At step S120, a set of intersecting points is calculated using the multi-lateration method. The range d_ai between the AP AP_a and the MP MP_i, calculated at step S110, may be used to draw a circle around the position of the MP MP_i.”]. It would have been obvious to take such an approach in determining where to search for a target in order to reduce the amount of searching time. It would have been obvious to extend the circle-drawing to sphere-drawing (in which the intersection points would take the form of a junction line) in order to facilitate searching in three-dimensional space.

Claim(s) 12 is/are rejected under 35 U.S.C.
103 as being unpatentable over Beguni et al., Toward a mixed visible light communications and ranging system for automotive applications, IEEE, 2019 [hereinafter “Beguni”]; Zhenjiu et al., A Modified Sequential Multilateration Scheme and Its Application in Geometric Error Measurement of Rotary Axis, Elsevier, 2015 [hereinafter “Zhenjiu”]; Krill et al., Multifunction Array Lidar Network for Intruder Detection, Tracking, and Identification, IEEE, 2010 [hereinafter “Krill”]; Bon et al. (US 20120127025 A1) [hereinafter “Bon”]; and Mizui et al., VEHICLE-TO-VEHICLE COMMUNICATION AND RANGING SYSTEM USING SPREAD SPECTRUM TECHNIQUE, IEEE, 1993 [hereinafter “Mizui”].

Regarding Claim 12, Beguni discloses that the first radar is further configured to: encode communication information to obtain a communication code [Page 2, first column – “Every LED light source can be upgraded to become a data broadcasting system by adding an intensity modulation (IM) component, for example. At the receiver side, photosensitive elements (usually PIN photodiodes) are used to extract the data from the incident modulated light based on direct detection (DD).”]; but fails to disclose combining the communication code and a ranging code, and modulating the combined communication code and ranging code onto a to-be-transmitted electromagnetic wave to obtain a fourth electromagnetic wave; and transmitting the fourth electromagnetic wave to the target.

However, Mizui discloses a communication scheme in which communications are modulated with a ranging code to allow for simultaneous communication and ranging [Page 335, second column – “In the proposed Boomerang Transmission System, Vehicle-#2 multiplies its own information by PN code signal which is transmitted from Vehicle-#1, and retransmits to Vehicle-#1. Vehicle-#1 demodulates the signal from Vehicle-#2 by the SS technique with the PN code which Vehicle-#1 owns. Then it can know the information of Vehicle-#2 and range distance between Vehicle-#1 and Vehicle-#2 at the same time. So, the proposed system can be called Boomerang Transmission System because the PN code sequence is transmitted from Vehicle-#1 and return there with the information of Vehicle-#2 like a boomerang.”]. It would have been obvious to implement such an approach in order to facilitate communication and ranging at the same time.

Response to Arguments

Applicant argues: [argument reproduced as image media_image1.png]

Examiner’s Response: The corresponding rejections are hereby withdrawn.

Applicant argues: [argument reproduced as image media_image2.png]

Examiner’s Response: The Examiner agrees. New grounds for rejection are presented above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Son et al., A Lightweight and Cost-Effective 3D Omnidirectional Depth Sensor Based on Laser Triangulation, IEEE, 2019
Zhang et al., Modelling and optimization of novel laser multilateration schemes for high-precision applications, Meas. Sci. Technol., 2005
US 20220075038 A1 – APPARATUS AND METHODS FOR LONG RANGE, HIGH RESOLUTION LIDAR
US 20220207761 A1 – AGILE DEPTH SENSING USING TRIANGULATION LIGHT CURTAINS
US 20210080562 A1 – METHOD FOR SIGNAL EVALUATION IN A LOCATING SYSTEM THAT INCLUDES MULTIPLE RADAR SENSORS
US 4963017 A – Variable Depth Range Camera
US 20190049572 A1 – SYSTEMS AND METHODS FOR DOPPLER-ENHANCED RADAR TRACKING
US 20190250262 A1 – METHODS FOR THE DETERMINATION OF A BOUNDARY OF A SPACE OF INTEREST USING RADAR SENSORS
US 20030142007 A1 – Signal Processing Method For Scanning Radar
US 5629705 A – High Range Resolution Radar System
US 11153010 B2 – Lidar Based Communication

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
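For context on the Claim 12 ground, the quoted Mizui “Boomerang” scheme amounts to direct-sequence spread spectrum: the data symbols are multiplied by a shared PN ranging code, so a single received waveform yields both the message (by despreading) and the round-trip delay (by correlating against the local PN replica). A toy illustration of that combination (not part of the record; the PN length, chip values, delay, and data bits are all hypothetical):

```python
# Illustrative sketch (not from the record): combining a data stream with a
# pseudo-noise (PN) ranging code so one waveform carries both communication
# and ranging, in the spirit of the Mizui scheme quoted above.
import numpy as np

rng = np.random.default_rng(0)
pn = rng.choice([-1, 1], size=127)        # shared PN ranging code (1 chip/sample)
data_bits = np.array([1, -1, 1, 1, -1])   # hypothetical communication symbols

# Transmit: each data bit multiplies one full PN period (DS-SS spreading)
tx = np.concatenate([bit * pn for bit in data_bits])

# Channel: round-trip propagation appears as an integer chip delay
delay = 37
rx = np.concatenate([np.zeros(delay), tx])

# Receive: correlate against the local PN replica to estimate the delay
# (which scales to range via chip duration and propagation speed) ...
corr = np.correlate(rx, pn, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))

# ... then despread each aligned PN period to recover the data bits
aligned = rx[est_delay:est_delay + tx.size]
recovered = np.sign(aligned.reshape(-1, pn.size) @ pn)

print(est_delay)   # 37
print(recovered)   # matches data_bits
```

The correlation peak locates the chip delay while the despread sign decisions recover the communication bits from the very same samples, which is the simultaneous-communication-and-ranging property the rejection relies on.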
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE ROBERT QUIGLEY whose telephone number is (313) 446-4879. The examiner can normally be reached 11AM-9PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arleen Vazquez, can be reached at (571) 272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R QUIGLEY/
Primary Examiner, Art Unit 2857

Prosecution Timeline

Apr 18, 2023: Application Filed
May 10, 2023: Response after Non-Final Action
Jul 06, 2025: Non-Final Rejection — §103
Sep 10, 2025: Response Filed
Sep 16, 2025: Final Rejection — §103
Nov 30, 2025: Interview Requested
Dec 09, 2025: Applicant Interview (Telephonic)
Dec 09, 2025: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601396: PREDICTIVE MODELING OF HEALTH OF A DRIVEN GEAR IN AN OPEN GEAR SET (granted Apr 14, 2026; 2y 5m to grant)
Patent 12566218: BATTERY PACK MONITORING DEVICE (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566162: AUTOMATED CONTAMINANT SEPARATION IN GAS CHROMATOGRAPHY (granted Mar 03, 2026; 2y 5m to grant)
Patent 12523698: Battery Management Apparatus and Method (granted Jan 13, 2026; 2y 5m to grant)
Patent 12509981: Parametric Attribute of Pore Volume of Subsurface Structure from Structural Depth Map (granted Dec 30, 2025; 2y 5m to grant)
Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 54%
With Interview: 87% (+32.7%)
Median Time to Grant: 3y 10m
PTA Risk: Moderate
Based on 466 resolved cases by this examiner. Grant probability derived from career allow rate.
