Prosecution Insights
Last updated: April 19, 2026
Application No. 18/642,435

SELECTIVE SENSOR ACTIVATION BASED ON MULTI-CHIRP FMCW RADAR

Current Office Action: Non-Final OA, §103
Filed: Apr 22, 2024
Examiner: BOCAR, DONNA V
Art Unit: 2621
Tech Center: 2600 — Communications
Assignee: Microsoft Technology Licensing, LLC
OA Round: 3 (Non-Final)

Grant Probability: 58% (Moderate)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 212 granted / 367 resolved; -4.2% vs TC avg)
Interview Lift: +19.4% (strong; allow rate in resolved cases with interview vs. without)
Typical Timeline: 2y 7m avg prosecution; 35 currently pending
Career History: 402 total applications across all art units

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 56.8% (+16.8% vs TC avg)
§102: 22.5% (-17.5% vs TC avg)
§112: 15.1% (-24.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 367 resolved cases
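As a sanity check on the headline figures above, the career allow rate and the with-interview estimate can be reproduced from the raw counts; the rounding convention and the assumption that the 77% figure is simply the 58% baseline plus the +19.4-point lift are this report's reading, not a stated methodology:

```python
# Career allow rate from the raw counts reported above.
granted = 212
resolved = 367
allow_rate = granted / resolved          # ≈ 0.578

# Applying the +19.4-point interview lift to the 58% baseline
# approximately reproduces the 77% "with interview" estimate.
with_interview = round(allow_rate * 100) + 19.4   # 58 + 19.4 = 77.4

print(f"{allow_rate:.1%}")        # 57.8%
print(round(allow_rate * 100))    # 58
print(round(with_interview))      # 77
```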

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1, 15, and 20 have been amended. No claims have been cancelled or added. Claims 1-20 are currently under review.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on October 27, 2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5-6, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over DeSalvo et al. (Patent No.: US 11,221,404 B1), hereinafter referred to as DeSalvo1, in view of Zhang et al. (Pub. No.: US 2022/0283296 A1), in view of Kumar Y.B. et al. (Pub. No.: US 2016/0327633 A1), hereinafter referred to as Kumar Y.B.

With respect to Claim 1, DeSalvo1 teaches a system (fig. 11, item 1100 or fig. 12, item 1200; column 25, lines 15-26), comprising: a radar-based tracking system (column 2, lines 39-52) configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp and a low-bandwidth chirp (fig. 6, items 601(1), 601(3), 601(n): high bandwidth; items 601(2) and 601(4): low bandwidth; column 17, lines 41-51; column 18, lines 23-28); an image-based tracking system comprising one or more image sensors and one or more processing modules (column 28, line 66 to column 29, line 11, "augmented-reality system 1100, and/or virtual-reality system 1200 may include one or more optical sensors, such as two-dimensional (2D) or three-dimensional (3D) cameras, time-of-flight depth sensors, single-beam or sweeping laser rangefinders, 3D LiDAR sensors, and/or any other suitable type or form of optical sensor" – LiDAR sensors comprise processing modules); one or more processors (column 3, lines 61-66); and one or more computer-readable recording media (column 17, lines 30-37) that store instructions that are executable by the one or more processors to configure the system to: obtain, via the radar-based tracking system, radar-based measurement data, wherein the radar-based measurement data comprises or is based on a reflected multi-chirp FMCW radar signal comprising a reflected high-bandwidth chirp and a reflected low-bandwidth chirp (column 2, lines 39-62; column 16, lines 55-62; column 19, lines 27-38).
DeSalvo1 does not teach: utilize the radar-based measurement data to generate event detection output and, when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system by causing the one or more processing modules to perform feature extraction on imagery captured via the one or more image sensors to acquire image-based tracking data to facilitate positional tracking of an object.

Zhang teaches a system (fig. 1, item 170 of user device 107 = item 200 of fig. 2; ¶46, "the user device 107 can include a mobile phone, router, tablet computer, laptop computer, tracking device, wearable device (e.g., a smart watch, glasses, an XR device, etc.)"; ¶54), comprising: a radar-based tracking system (fig. 2, items 204, 206, 208, and 210; ¶55-56; ¶62, "TX waveform 216 can include a chirp signal, as used, for example, in a Frequency-Modulated Continuous-Wave (FM-CW) radar system") configured to emit a high-bandwidth chirp (¶107, "In some cases, the device can implement a high-resolution RF sensing algorithm (e.g., with a high bandwidth, a high number of spatial links, and a high sampling rate as compared to the mid-resolution RF sensing algorithm). The high-resolution RF sensing algorithm can differ from the mid-resolution RF sensing algorithm by having a higher bandwidth, a higher number of spatial links, a higher sampling rate, or any combination thereof") and a low-bandwidth chirp (¶90, "the device can adjust the level of RF sensing resolution by modifying the number of spatial links (e.g., adjusting number of spatial streams and/or number of receive antennas) as well as the bandwidth and the sampling frequency. In some cases, the device can implement a low-resolution RF sensing algorithm (e.g., with a relatively low bandwidth, low number of spatial links, and low sampling rate), which consumes a small amount of power and can operate in the background when the device is in the locked or sleep state"); an image-based tracking system (fig. 1, item 172) comprising an image sensor and a processing module (¶119, "In another example, input image 802 can be obtained by a camera (e.g., input devices 172) of the wireless device. In another example, input image 802 can be obtained by using a LIDAR sensor (e.g., communications interface 1240) of the wireless device" – LiDAR sensor comprises a processing module); one or more processors (fig. 1, item 184; ¶47); and computer-readable recording media (fig. 1, item 186; ¶52-53) that store instructions that are executable by the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output and, when the event detection output satisfies one or more conditions (¶90, "the device can perform motion detection by configuring an RF interface to utilize a single spatial link to transmit a signal having a bandwidth of approximately 20 MHz and by utilizing a sampling rate that can be in the range of 100 ms to 500 ms"), selectively activate the image-based tracking system (¶92, "If motion is detected at block 406, the process 400 can proceed to block 410 and initiate facial authentication. In some examples, facial recognition can be performed by using an RF interface that is capable of transmitting extremely high frequency (EHF) signals or mmWave technology"; ¶94, "the device may use an infrared (IR) light source, dot projector, or other light source to illuminate a user's face and an IR camera or other image capture device to perform image capture") by causing the one or more processing modules to perform feature extraction on imagery captured via the one or more image sensors (¶120) to acquire image-based tracking data to facilitate positional tracking of an object (¶121, "several images can be captured of the owner or user with different poses, positions, facial expressions, lighting conditions, and/or other characteristics"; ¶123-125).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of DeSalvo1 to utilize the radar-based measurement data to generate event detection output and, when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system by causing the one or more processing modules to perform feature extraction on imagery captured via the one or more image sensors to acquire image-based tracking data to facilitate positional tracking of an object, as taught by Zhang, so as to reduce resource usage (¶35).

DeSalvo1 and Zhang combined do not explicitly mention wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp, and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance.

Kumar Y.B. teaches a system (fig. 6; ¶24), comprising: a radar-based tracking system configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp (fig. 5, item B2: high-bandwidth; ¶22; ¶24) and a low-bandwidth chirp (fig. 5, item B1: low-bandwidth; ¶22; ¶24); and one or more processors (fig. 6, items 602 and 606), the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output (¶25, "The processing unit 606 may also include functionality to perform post processing of information about the detected objects, such as tracking objects, determining rate and direction of movement, etc"), wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp (fig. 7; ¶23, "Embodiments of the disclosure provide for using multiple chirp profiles in a single frame, which, with an appropriate combination of profiles, may reduce the time needed to extract object information"), and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system (¶22, "Chirp configuration 2 illustrates a chirp with a higher bandwidth B2 than configuration 1, which provides higher accuracy for detecting closer objects at higher range resolutions" – the threshold distance is the distance for detecting closer objects at higher resolutions corresponding to bandwidth B2) and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance (¶22, "Chirp configuration 1 illustrates a typical chirp at a bandwidth B1 that may be repeated multiple times to capture distance, velocity and angle of arrival of objects" – outside of the threshold distance is the distance for detecting farther objects corresponding to bandwidth B1).
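The bandwidth trade-off Kumar Y.B. is cited for follows the standard FMCW range-resolution relation ΔR = c / (2B): higher bandwidth gives finer range resolution, favoring close-range detection. The relation is textbook radar theory rather than anything quoted from the reference, and the example bandwidths below are illustrative assumptions:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Standard FMCW range resolution: delta_R = c / (2 * B)."""
    return C / (2 * bandwidth_hz)

# Illustrative bandwidths (assumptions, not values from Kumar Y.B.):
b_low, b_high = 250e6, 4e9   # a narrow and a wide chirp, Hz
print(round(range_resolution_m(b_low), 3))   # 0.6    (coarser, longer-range use)
print(round(range_resolution_m(b_high), 4))  # 0.0375 (finer, close-range use)
```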
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1 and Zhang to utilize the transmitters and receiver (fig. 6, item 604; ¶24) of Kumar Y.B. in place of the transmitters, receivers, and responders of DeSalvo1, resulting in wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp, and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance, as taught by Kumar Y.B., so as to prevent losing important information about objects within view of the radar (¶23).

With respect to Claim 5, claim 1 is incorporated. DeSalvo1 and Zhang combined do not mention wherein the high-bandwidth chirp and the low-bandwidth chirp are interleaved to form the multi-chirp FMCW radar signal.

Kumar Y.B. teaches a system (fig. 6; ¶24), comprising: a radar-based tracking system configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp (fig. 5, item B2: high-bandwidth; ¶22; ¶24) and a low-bandwidth chirp (fig. 5, item B1: low-bandwidth; ¶22; ¶24); and one or more processors (fig. 6, items 602 and 606), the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output (¶25, "The processing unit 606 may also include functionality to perform post processing of information about the detected objects, such as tracking objects, determining rate and direction of movement, etc"), wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp (fig. 7; ¶23, "Embodiments of the disclosure provide for using multiple chirp profiles in a single frame, which, with an appropriate combination of profiles, may reduce the time needed to extract object information"), and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system (¶22, "Chirp configuration 2 illustrates a chirp with a higher bandwidth B2 than configuration 1, which provides higher accuracy for detecting closer objects at higher range resolutions" – the threshold distance is the distance for detecting closer objects at higher resolutions corresponding to bandwidth B2) and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance (¶22, "Chirp configuration 1 illustrates a typical chirp at a bandwidth B1 that may be repeated multiple times to capture distance, velocity and angle of arrival of objects" – outside of the threshold distance is the distance for detecting farther objects corresponding to bandwidth B1); wherein the high-bandwidth chirp and the low-bandwidth chirp are interleaved to form the multi-chirp FMCW radar signal (figs. 6-7; ¶23, "In the prior art, chirp configurations such as these may be applied in different frames, which may result in losing important information about objects within view of the radar due to delays caused by using multiple frames to extract the needed information. Embodiments of the disclosure provide for using multiple chirp profiles in a single frame, which, with an appropriate combination of profiles, may reduce the time needed to extract object information").

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1 and Zhang to utilize the transmitters and receiver (fig. 6, item 604; ¶24) of Kumar Y.B. in place of the transmitters, receivers, and responders of DeSalvo1, resulting in wherein the high-bandwidth chirp and the low-bandwidth chirp are interleaved to form the multi-chirp FMCW radar signal, as taught by Kumar Y.B., so as to reduce the time needed to extract object information (¶23).

With respect to Claim 6, claim 1 is incorporated. DeSalvo1 and Zhang combined do not mention wherein the multi-chirp FMCW radar signal is emitted by a single radar transmitter.

Kumar Y.B. teaches a system (fig. 6; ¶24), comprising: a radar-based tracking system configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp (fig. 5, item B2: high-bandwidth; ¶22; ¶24) and a low-bandwidth chirp (fig. 5, item B1: low-bandwidth; ¶22; ¶24); and one or more processors (fig. 6, items 602 and 606), the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output (¶25, "The processing unit 606 may also include functionality to perform post processing of information about the detected objects, such as tracking objects, determining rate and direction of movement, etc"), wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp (fig. 7; ¶23, "Embodiments of the disclosure provide for using multiple chirp profiles in a single frame, which, with an appropriate combination of profiles, may reduce the time needed to extract object information"), and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system (¶22, "Chirp configuration 2 illustrates a chirp with a higher bandwidth B2 than configuration 1, which provides higher accuracy for detecting closer objects at higher range resolutions" – the threshold distance is the distance for detecting closer objects at higher resolutions corresponding to bandwidth B2) and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance (¶22, "Chirp configuration 1 illustrates a typical chirp at a bandwidth B1 that may be repeated multiple times to capture distance, velocity and angle of arrival of objects" – outside of the threshold distance is the distance for detecting farther objects corresponding to bandwidth B1); wherein the multi-chirp FMCW radar signal is emitted by a single radar transmitter (fig. 6, item 604; ¶23, "In the prior art, chirp configurations such as these may be applied in different frames, which may result in losing important information about objects within view of the radar due to delays caused by using multiple frames to extract the needed information. Embodiments of the disclosure provide for using multiple chirp profiles in a single frame, which, with an appropriate combination of profiles, may reduce the time needed to extract object information"; ¶24, "The radar front end 604 includes functionality to transmit and receive a frame of chirps. This functionality may include, for example, one or more transmitters, one or more receivers, a timing engine, a frequency synthesizer, and storage, e.g., registers, for two or more chirp profile buffers.").

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1 and Zhang to utilize the transmitters and receiver (fig. 6, item 604; ¶24) of Kumar Y.B. in place of the transmitters, receivers, and responders of DeSalvo1, resulting in wherein the multi-chirp FMCW radar signal is emitted by a single radar transmitter, as taught by Kumar Y.B., so as to reduce the time needed to extract object information (¶23).

With respect to Claim 13, claim 1 is incorporated. DeSalvo1 and Kumar Y.B. combined do not teach wherein the instructions are executable by the one or more processors to configure the system to: when the event detection output fails to satisfy the one or more conditions, selectively refrain from activating the image-based tracking system.

Zhang teaches a system (fig. 1, item 170 of user device 107 = item 200 of fig. 2; ¶46, "the user device 107 can include a mobile phone, router, tablet computer, laptop computer, tracking device, wearable device (e.g., a smart watch, glasses, an XR device, etc.)"; ¶54), comprising: a radar-based tracking system (fig. 2, items 204, 206, 208, and 210; ¶55-56; ¶62, "TX waveform 216 can include a chirp signal, as used, for example, in a Frequency-Modulated Continuous-Wave (FM-CW) radar system") configured to emit a high-bandwidth chirp (¶107, "In some cases, the device can implement a high-resolution RF sensing algorithm (e.g., with a high bandwidth, a high number of spatial links, and a high sampling rate as compared to the mid-resolution RF sensing algorithm). The high-resolution RF sensing algorithm can differ from the mid-resolution RF sensing algorithm by having a higher bandwidth, a higher number of spatial links, a higher sampling rate, or any combination thereof") and a low-bandwidth chirp (¶90, "the device can adjust the level of RF sensing resolution by modifying the number of spatial links (e.g., adjusting number of spatial streams and/or number of receive antennas) as well as the bandwidth and the sampling frequency. In some cases, the device can implement a low-resolution RF sensing algorithm (e.g., with a relatively low bandwidth, low number of spatial links, and low sampling rate), which consumes a small amount of power and can operate in the background when the device is in the locked or sleep state"); an image-based tracking system (fig. 1, item 172) comprising an image sensor and a processing module (¶119, "In another example, input image 802 can be obtained by a camera (e.g., input devices 172) of the wireless device. In another example, input image 802 can be obtained by using a LIDAR sensor (e.g., communications interface 1240) of the wireless device" – LiDAR sensor comprises a processing module); one or more processors (fig. 1, item 184; ¶47); and computer-readable recording media (fig. 1, item 186; ¶52-53) that store instructions that are executable by the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output and, when the event detection output satisfies one or more conditions (¶90, "the device can perform motion detection by configuring an RF interface to utilize a single spatial link to transmit a signal having a bandwidth of approximately 20 MHz and by utilizing a sampling rate that can be in the range of 100 ms to 500 ms"), selectively activate the image-based tracking system (¶92, "If motion is detected at block 406, the process 400 can proceed to block 410 and initiate facial authentication. In some examples, facial recognition can be performed by using an RF interface that is capable of transmitting extremely high frequency (EHF) signals or mmWave technology"; ¶94, "the device may use an infrared (IR) light source, dot projector, or other light source to illuminate a user's face and an IR camera or other image capture device to perform image capture") by causing the one or more processing modules to perform feature extraction on imagery captured via the one or more image sensors (¶120) to acquire image-based tracking data to facilitate positional tracking of an object (¶121, "several images can be captured of the owner or user with different poses, positions, facial expressions, lighting conditions, and/or other characteristics"; ¶123-125); wherein the instructions are executable by the one or more processors to configure the system to: when the event detection output fails to satisfy the one or more conditions, selectively refrain from activating the image-based tracking system (¶91, "If no motion is detected, the process 400 can proceed to block 408 in which the device remains in a locked state and continues to perform RF sensing in order to detect motion").

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of DeSalvo1 and Kumar Y.B., wherein the instructions are executable by the one or more processors to configure the system to: when the event detection output fails to satisfy the one or more conditions, selectively refrain from activating the image-based tracking system, as taught by Zhang, so as to reduce resource usage (¶35).

Claims 2, 8, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over DeSalvo1, Zhang, and Kumar Y.B. as applied to claim 1 above, and further in view of DeSalvo et al. (Patent No.: US 11,747,462 B1), hereinafter referred to as DeSalvo2.

With respect to Claim 2, claim 1 is incorporated. DeSalvo1, Zhang, and Kumar Y.B. combined do not teach wherein the one or more conditions comprise the event detection output indicating changing of a pose of the object.

DeSalvo2 teaches a system (fig. 5, item 500; fig. 6, item 600; fig. 12, item 1200; column 5, lines 31-44), comprising: a radar-based tracking system (fig. 6, items 650 and 660 = items 510(A) and 510(B) in fig. 5); an image-based tracking system (column 23, lines 27-32; "augmented-reality system 1200 and/or virtual-reality system 1300 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras"); one or more processors (column 12, lines 47-55, "In addition, wearable device 600 may include a processing device that directs, controls, and/or receives input from one or more radar devices secured to wearable device 600"); and one or more computer-readable recording media (column 21, lines 38-42; "These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 1200"); the system configured to: obtain, via the radar-based tracking system, radar-based measurement data (column 12, lines 47-55, "In addition, wearable device 600 may include a processing device that directs, controls, and/or receives input from one or more radar devices secured to wearable device 600"); utilize the radar-based measurement data as input to an event detection module to generate event detection output (column 3, lines 43-47 and lines 60-67); wherein the one or more conditions comprise the event detection output indicating changing of a pose of the object (column 3, lines 5-10, "The disclosed radar systems may determine the range of a variety of types of targets. In one example, a radar system may determine the range of passive targets (e.g., targets that simply reflect signals and do not actively transmit signals). Examples of passive targets may include a body part of a user"; column 4, lines 8-15, "these radar systems may be utilized in applications involving the control of an apparatus (such as an electronic device, a data input mechanism, a piece of machinery, a vehicle, etc.) using one or more body parts or gestures").

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1, Zhang, and Kumar Y.B., wherein the one or more conditions comprise the event detection output indicating changing of a pose of the object, as taught by DeSalvo2, so as to facilitate modification of one or more virtual components (column 3, lines 60-67).

With respect to Claim 8, claim 1 is incorporated. DeSalvo1, Zhang, and Kumar Y.B. combined do not mention wherein the low-bandwidth chirp comprises a higher transmission power than the high-bandwidth chirp.

DeSalvo2 teaches a system (fig. 5, item 500; fig. 6, item 600; fig. 12, item 1200; column 5, lines 31-44), comprising: a radar-based tracking system (fig. 6, items 650 and 660 = items 510(A) and 510(B) in fig. 5); an image-based tracking system (column 23, lines 27-32; "augmented-reality system 1200 and/or virtual-reality system 1300 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras"); one or more processors (column 12, lines 47-55, "In addition, wearable device 600 may include a processing device that directs, controls, and/or receives input from one or more radar devices secured to wearable device 600"); and one or more computer-readable recording media (column 21, lines 38-42; "These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 1200"); the system configured to: obtain, via the radar-based tracking system, radar-based measurement data (column 12, lines 47-55, "In addition, wearable device 600 may include a processing device that directs, controls, and/or receives input from one or more radar devices secured to wearable device 600"); utilize the radar-based measurement data as input to an event detection module to generate event detection output (column 3, lines 43-47 and lines 60-67); wherein the event detection output indicates a pose of the object (column 3, lines 5-10, "The disclosed radar systems may determine the range of a variety of types of targets. In one example, a radar system may determine the range of passive targets (e.g., targets that simply reflect signals and do not actively transmit signals). Examples of passive targets may include a body part of a user"; column 4, lines 8-15, "these radar systems may be utilized in applications involving the control of an apparatus (such as an electronic device, a data input mechanism, a piece of machinery, a vehicle, etc.) using one or more body parts or gestures"); wherein the low-bandwidth chirp comprises a higher transmission power than the high-bandwidth chirp (column 11, line 65 to column 12, line 3).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1, Zhang, and Kumar Y.B., wherein the low-bandwidth chirp comprises a higher transmission power than the high-bandwidth chirp, as taught by DeSalvo2, so as to reduce attenuation of the frequency-modulated radar signal (column 11, line 61 to column 12, line 13).

With respect to Claim 12, claim 1 is incorporated. DeSalvo1 teaches further comprising one or more additional radar-based tracking systems (column 4, lines 16-48), wherein the radar-based tracking system and each of the one or more additional radar-based tracking systems is associated with a respective detection region (column 18, lines 22-28; low bandwidth is associated with a detection region that is farther away, high bandwidth is associated with a detection region that is closer).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over DeSalvo1, Zhang, and Kumar Y.B. as applied to claim 1 above, and further in view of Amini et al. (Pub. No.: US 2024/0406415 A1), hereinafter referred to as Amini, and DeSalvo2.

With respect to Claim 3, claim 1 is incorporated. DeSalvo1 does not explicitly teach wherein the one or more conditions comprise the event detection output indicating the object being within, in proximity to, or approaching a range of perception of the image-based tracking system.

Amini teaches a system (fig. 1), comprising: a radar-based tracking system (fig. 1, item 140; ¶43); an image-based tracking system (fig. 1, item 105; ¶42-43); one or more processors (figs. 1 and 6, item 130; fig. 6, item 605; ¶75); and one or more computer-readable recording media (fig. 6, item 610; ¶75) that store instructions that are executable by the one or more processors to configure the system to: obtain, via the radar-based tracking system, radar-based measurement data; utilize the radar-based measurement data as input to an event detection module to generate event detection output (¶47-48); and when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of an object (¶47-48); wherein the one or more conditions comprise the event detection output indicating a pose of the object being within, in proximity to, or approaching a range of perception of the image-based tracking system (¶47-48).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of DeSalvo1, Zhang, and Kumar Y.B., wherein the one or more conditions comprise the event detection output indicating the object being within, in proximity to, or approaching a range of perception of the image-based tracking system, as taught by Amini, so as to reduce resource usage (¶38).

DeSalvo1, Zhang, Kumar Y.B., and Amini combined do not mention the event detection output indicating a pose of the object.

DeSalvo2 teaches a system (fig. 5, item 500; fig. 6, item 600; fig. 12, item 1200; column 5, lines 31-44), comprising: a radar-based tracking system (fig. 6, items 650 and 660 = items 510(A) and 510(B) in fig. 5); an image-based tracking system (column 23, lines 27-32; "augmented-reality system 1200 and/or virtual-reality system 1300 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras"); one or more processors (column 12, lines 47-55, "In addition, wearable device 600 may include a processing device that directs, controls, and/or receives input from one or more radar devices secured to wearable device 600"); and one or more computer-readable recording media (column 21, lines 38-42; "These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 1200"); the system configured to: obtain, via the radar-based tracking system, radar-based measurement data (column 12, lines 47-55, "In addition, wearable device 600 may include a processing device that directs, controls, and/or receives input from one or more radar devices secured to wearable device 600"); utilize the radar-based measurement data as input to an event detection module to generate event detection output (column 3, lines 43-47 and lines 60-67); wherein the event detection output indicates a pose of the object (column 3, lines 5-10, "The disclosed radar systems may determine the range of a variety of types of targets. In one example, a radar system may determine the range of passive targets (e.g., targets that simply reflect signals and do not actively transmit signals). Examples of passive targets may include a body part of a user"; column 4, lines 8-15, "these radar systems may be utilized in applications involving the control of an apparatus (such as an electronic device, a data input mechanism, a piece of machinery, a vehicle, etc.) using one or more body parts or gestures").
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1, Zhang, Kumar Y.B., and Amini, wherein the event detection output indicating a pose of the object, as taught by DeSalvo2 so as to facilitate modification of one or more virtual components (column 3, lines 60-67). Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over DeSalvo1, Zhang, and Kumar Y.B. as applied to claim 6 above, and further in view of Johnson (Pub. No.: US 2020/0025907 A1). With respect to Claim 7, claim 6 is incorporated, DeSalvo1, Zhang, and Kumar Y.B. combined do not teach wherein the single radar transmitter consumes less than 10 milliwatts to emit the multi-chirp FMCW radar signal. Johnson teaches a radar-based tracking system, wherein the radar-based tracking system is configured to consume less than 10 milliwatts to emit the multi-chirp FMCW radar signal (¶43). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1, Zhang, and Kumar Y.B., wherein the single radar transmitter consumes less than 10 milliwatts to emit the multi-chirp FMCW radar signal, as taught by Johnson so as to provide a low power consumption radar-based tracking system. Claims 11 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over DeSalvo1, Zhang, and Kumar Y.B. as applied to claim 1 above, and further in view of Kesaraju et al. (Pub. No.: US 2024/0027608 A1) hereinafter referred to as Kesaraju. With respect to Claim 11, claim 1 is incorporated, DeSalvo1 teaches transponders disposed on a user’s hand (column 20, lines 15-18) and that transducers that are closer to the system have a high-bandwidth and that transducers that are farther from the system have a low-bandwidth (column 18, lines 22-28). 
DeSalvo1 does not mention wherein the high-bandwidth event detection output is associated with closer objects, and wherein the low-bandwidth event detection output is associated with farther objects. Kumar Y.B. teaches a system (fig. 6; ¶24), comprising: a radar-based tracking system configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp (fig. 5, item B2: high-bandwidth; ¶22; ¶24) and a low-bandwidth chirp (fig. 5, item B1: low-bandwidth; ¶22; ¶24); and one or more processors (fig. 6, items 602 and 606), the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output (¶25, “The processing unit 606 may also include functionality to perform post processing of information about the detected objects, such as tracking objects, determining rate and direction of movement, etc”), wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp (fig. 
7; ¶23, “Embodiments of the disclosure provide for using multiple chirp profiles in a single frame, which, with an appropriate combination of profiles, may reduce the time needed to extract object information”), and wherein the high-bandwidth event detection output is associated with closer objects (¶22, “Chirp configuration 2 illustrates a chirp with a higher bandwidth B2 than configuration 1, which provides higher accuracy for detecting closer objects at higher range resolutions” – the threshold distance is the distance for detecting closer objects at higher resolutions, corresponding to bandwidth B2) and the low-bandwidth event detection output is associated with farther objects (¶22, “Chirp configuration 1 illustrates a typical chirp at a bandwidth B1 that may be repeated multiple times to capture distance, velocity and angle of arrival of objects” – outside of the threshold distance is the distance for detecting farther objects, corresponding to bandwidth B1). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1 and Zhang, to utilize the transmitters and receiver (fig. 6, item 604; ¶24) of Kumar Y.B. in place of the transmitters, receivers, and transponders of DeSalvo1, since the transmitter and receiver of Kumar Y.B. use a high-bandwidth to detect closer objects and a low-bandwidth to detect farther objects, resulting in wherein the high-bandwidth event detection output is associated with closer objects, and wherein the low-bandwidth event detection output is associated with farther objects, as taught by Kumar Y.B. so as to reduce the time needed to extract object information (¶23). DeSalvo1, Zhang, and Kumar Y.B. 
combined do not explicitly mention that closer objects within a threshold distance correspond to one or more shoulders, elbows, or hands of a user or wherein the low-bandwidth event detection output is associated with one or more legs or feet of the user corresponding to farther objects. Kesaraju teaches a system (fig. 1; ¶24), comprising: a radar-based system configured to emit a frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp (¶25); wherein the high-bandwidth event detection output is associated with one or more shoulders, elbows, or hands of a user (¶26-27). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1, Zhang, and Kumar Y.B., such that closer objects within a threshold distance correspond to a hand, such that objects that are closer correspond to one or more shoulders, elbows, or hands of a user and farther objects correspond to one or more legs or feet of the user, as taught by Kesaraju so as to define boundaries for object detection. With respect to Claim 21, claim 1 is incorporated, DeSalvo1 teaches transponders disposed on a user’s hand (column 20, lines 15-18) and that transducers that are closer to the system have a high-bandwidth and that transducers that are farther from the system have a low-bandwidth (column 18, lines 22-28). DeSalvo1, Zhang, and Kumar Y.B. combined do not mention wherein the threshold distance to the radar-based tracking system is about 1 meter. Kesaraju teaches a system (fig. 1; ¶24), comprising: a radar-based system configured to emit a frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp (¶25); wherein a threshold distance to the radar-based tracking system is about 1 meter (¶26). 
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1, Zhang, and Kumar Y.B., wherein the threshold distance to the radar-based tracking system is about 1 meter, as taught by Kesaraju so as to define boundaries for object detection. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over DeSalvo1, Zhang, and Kumar Y.B. as applied to claim 1 above, and further in view of Amini. With respect to Claim 14, claim 1 is incorporated, DeSalvo1, Zhang, and Kumar Y.B. combined do not teach wherein the instructions are executable by the one or more processors to configure the system to: after selectively activating the image-based tracking system, and when the event detection output fails to satisfy the one or more conditions, selectively deactivate the image-based tracking system. Amini teaches a system (fig. 1), comprising: a radar-based tracking system (fig. 1, item 140; ¶43); an image-based tracking system (fig. 1, item 105; ¶42-43); one or more processors (figs. 1 and 6, item 130; fig. 6, item 605; ¶75); and one or more computer-readable recording media (fig. 
6, item 610; ¶75) that stores instructions that are executable by the one or more processors to configure the system to: obtain, via the radar-based tracking system, radar-based measurement data; utilize the radar-based measurement data as input to an event detection module to generate event detection output (¶47-48); and when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of an object (¶47; ¶48, “In this example, if radar sensor 140 (or another type of supplemental sensor) detects motion, then this can be prioritized by base station 130 and used by base station 130 to provide data to camera 105 to begin recording”); wherein the instructions are executable by the one or more processors to configure the system to: after selectively activating the image-based tracking system (¶47, “However, if radar sensor 140 does not detect motion, but the IR sensor of camera 105 does detect motion, then base station 130 might not provide video 125 or motion notification 150 to cloud server 155 because this can be an indication of a false positive regarding the motion that was determined by the IR sensor to be occurring”), and when the event detection output fails to satisfy the one or more conditions, selectively deactivate the image-based tracking system (¶47-48, when either sensor fails to detect motion, “radar sensor 140 can detect motion within field of vision 110, but the IR sensor of camera 105 might not detect motion and, therefore, video might not be recorded using the image sensor of camera 105”). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1, Zhang, and Kumar Y.B. 
wherein the instructions are executable by the one or more processors to configure the system such that both the radar-based tracking system and the image-based tracking system must satisfy the one or more conditions for event detection output, resulting in: after selectively activating the image-based tracking system, and when the event detection output fails to satisfy the one or more conditions, selectively deactivating the image-based tracking system, as taught by Amini so as to reduce false positives (¶47). Claims 15, 17-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over DeSalvo2 in view of Woods et al. (Pub. No.: US 2018/0300897 A1), in view of Zhang, in view of DeSalvo1, and in view of Kumar Y.B. With respect to Claim 15, DeSalvo2 teaches a system (fig. 5, item 500; fig. 6, item 600; fig. 12, item 1200; column 5, lines 31-44), comprising: a radar-based tracking system configured to emit a frequency modulated continuous wave (FMCW) radar signal (fig. 6, items 650 and 660 = items 510(A) and 510(B) in fig. 5; column 2, lines 28-36); a pose tracking system (column 24, lines 20-47; pose tracking system = SLAM location identifying techniques); an image-based tracking system (column 23, lines 27-32; “augmented-reality system 1200 and/or virtual-reality system 1300 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras”); one or more processors (column 12, lines 47-55, “In addition, wearable device 600 may include a processing device that directs, controls, and/or receives input from one or more radar devices secured to wearable device 600”); and one or more computer-readable recording media (column 21, lines 38-42; “These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) 
to augmented-reality system 1200”); the system configured to: obtain, via the radar-based tracking system, radar-based measurement data (column 12, lines 47-55, “In addition, wearable device 600 may include a processing device that directs, controls, and/or receives input from one or more radar devices secured to wearable device 600”), wherein the radar-based measurement data comprises or is based on a reflected FMCW radar signal (column 2, lines 41-44; column 3, lines 5-8); obtain, via the pose tracking system, system pose data (column 24, lines 29-35); utilize the radar-based measurement data and the system pose data as input to an event detection module to generate event detection output (column 3, lines 43-47 and lines 60-67). DeSalvo2 teaches that the pose tracking system uses many different types of sensors to create a map and determine a user’s position within the map (column 24, lines 26-28) and that radios including WiFi, Bluetooth, global positioning system (GPS), cellular, or other communication devices may also be used to determine a user's location relative to a radio transceiver or group of transceivers (column 24, lines 30-35); however, DeSalvo2 does not mention that the pose tracking system comprises an inertial tracking system. Woods teaches a system (fig. 8; ¶132), comprising: a radar emitter or detector (fig. 8, item 108); an inertial tracking system (fig. 8, item 102); an image-based tracking system (fig. 8, item 106: LiDAR emitter or detector, item 124: camera); a processor (fig. 8, item 128) configured to: obtain, via the inertial tracking system, system pose data (¶127, “Several other changes may be made when using the electromagnetic tracking system for AR devices. Although this pose reporting rate is rather good, AR systems may require an even more efficient pose reporting rate. To this end, IMU-based pose tracking may be used in the sensors”). 
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of DeSalvo2, such that the electromagnetic tracking system, which is a communication device, and the inertial tracking system are also implemented in the pose tracking system of DeSalvo2, as taught by Woods so as to provide more efficient pose reporting (¶127). DeSalvo2 and Woods combined do not explicitly mention that one or more computer-readable recording media stores instructions that are executable by the one or more processors, nor do DeSalvo2 and Woods combined teach: and when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of an object. Zhang teaches a system (fig. 1, item 170 of user device 107 = item 200 of fig. 2; ¶46, “the user device 107 can include a mobile phone, router, tablet computer, laptop computer, tracking device, wearable device (e.g., a smart watch, glasses, an XR device, etc.)”; ¶54), comprising: a radar-based tracking system (fig. 2, items 204, 206, 208, and 210; ¶55-56; ¶62, “TX waveform 216 can include a chirp signal, as used, for example, in a Frequency-Modulated Continuous-Wave (FM-CW) radar system”) configured to emit a high-bandwidth chirp (¶107, “In some cases, the device can implement a high-resolution RF sensing algorithm (e.g., with a high bandwidth, a high number of spatial links, and a high sampling rate as compared to the mid-resolution RF sensing algorithm). 
The high-resolution RF sensing algorithm can differ from the mid-resolution RF sensing algorithm by having a higher bandwidth, a higher number of spatial links, a higher sampling rate, or any combination thereof”) and a low-bandwidth chirp (¶90, “the device can adjust the level of RF sensing resolution by modifying the number of spatial links (e.g., adjusting number of spatial streams and/or number of receive antennas) as well as the bandwidth and the sampling frequency. In some cases, the device can implement a low-resolution RF sensing algorithm (e.g., with a relatively low bandwidth, low number of spatial links, and low sampling rate), which consumes a small amount of power and can operate in the background when the device is in the locked or sleep state”); an image-based tracking system (fig. 1, item 172) comprising an image sensor and a processing module (¶119, “In another example, input image 802 can be obtained by a camera (e.g., input devices 172) of the wireless device. In another example, input image 802 can be obtained by using a LIDAR sensor (e.g., communications interface 1240) of the wireless device” – LiDAR sensor comprises a processing module); one or more processors (fig. 1, item 184; ¶47); and computer-readable recording media (fig. 
1, item 186; ¶52-53) that store instructions that are executable by the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output and when the event detection output satisfies one or more conditions (¶90, “the device can perform motion detection by configuring an RF interface to utilize a single spatial link to transmit a signal having a bandwidth of approximately 20 MHz and by utilizing a sampling rate that can be in the range of 100 ms to 500 ms”), selectively activate the image-based tracking system (¶92, “If motion is detected at block 406, the process 400 can proceed to block 410 and initiate facial authentication. In some examples, facial recognition can be performed by using an RF interface that is capable of transmitting extremely high frequency (EHF) signals or mmWave technology”; ¶94, “the device may use an infrared (IR) light source, dot projector, or other light source to illuminate a user's face and an IR camera or other image capture device to perform image capture”) to enable acquisition of image-based tracking data to facilitate positional tracking of an object (¶121, “several images can be captured of the owner or user with different poses, positions, facial expressions, lighting conditions, and/or other characteristics”; ¶123-125). 
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of DeSalvo2 and Woods, such that the one or more computer-readable recording media stores instructions that are executable by the one or more processors and when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of an object, as taught by Zhang so as to reduce resource usage (¶35). DeSalvo2, Woods, and Zhang combined do not mention the system configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp and a low-bandwidth chirp; wherein the radar-based measurement data comprises or is based on a reflected multi-chirp FMCW radar signal comprising a reflected high-bandwidth chirp and a reflected low-bandwidth chirp. DeSalvo1 teaches a system (fig. 11, item 1100 or fig. 12, item 1200; column 25, lines 15-26), comprising: a radar-based tracking system (column 2, lines 39-52) configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp and a low-bandwidth chirp (fig. 
6, items 601(1), 601(3), 601(n): high bandwidth; items 601(2) and 601(4): low bandwidth; column 17, lines 41-51; column 18, lines 23-28); an image-based tracking system (column 28, line 66 to column 29, line 11); one or more processors (column 3, lines 61-66); and one or more computer-readable recording media (column 17, lines 30-37) that store instructions that are executable by the one or more processors to configure the system to: obtain, via the radar-based tracking system, radar-based measurement data, wherein the radar-based measurement data comprises or is based on a reflected multi-chirp FMCW radar signal comprising a reflected high-bandwidth chirp and a reflected low-bandwidth chirp (column 2, lines 39-62; column 16, lines 55-62; column 19, lines 27-38). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo2, Woods, and Zhang, such that the system is configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp and a low-bandwidth chirp; wherein the radar-based measurement data comprises or is based on a reflected multi-chirp FMCW radar signal comprising a reflected high-bandwidth chirp and a reflected low-bandwidth chirp, as taught by DeSalvo1 so as to facilitate efficiently, precisely, and/or quickly tracking the movement of wearable artificial reality devices (column 4, lines 1-3). 
DeSalvo2, Woods, Zhang, and DeSalvo1 combined do not explicitly mention wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp, and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance. Kumar Y.B. teaches a system (fig. 6; ¶24), comprising: a radar-based tracking system configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp (fig. 5, item B2: high-bandwidth; ¶22; ¶24) and a low-bandwidth chirp (fig. 5, item B1: low-bandwidth; ¶22; ¶24); and one or more processors (fig. 6, items 602 and 606), the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output (¶25, “The processing unit 606 may also include functionality to perform post processing of information about the detected objects, such as tracking objects, determining rate and direction of movement, etc”), wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp (fig. 
7; ¶23, “Embodiments of the disclosure provide for using multiple chirp profiles in a single frame, which, with an appropriate combination of profiles, may reduce the time needed to extract object information”), and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system (¶22, “Chirp configuration 2 illustrates a chirp with a higher bandwidth B2 than configuration 1, which provides higher accuracy for detecting closer objects at higher range resolutions” – the threshold distance is the distance for detecting closer objects at higher resolutions, corresponding to bandwidth B2) and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance (¶22, “Chirp configuration 1 illustrates a typical chirp at a bandwidth B1 that may be repeated multiple times to capture distance, velocity and angle of arrival of objects” – outside of the threshold distance is the distance for detecting farther objects, corresponding to bandwidth B1). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo2, Woods, Zhang, and DeSalvo1, to utilize the transmitters and receiver (fig. 6, item 604; ¶24) of Kumar Y.B. in place of the transmitters, receivers, and transponders of DeSalvo1, resulting in wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp, and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance, as taught by Kumar Y.B. 
so as to prevent losing important information about objects within view of the radar (¶23). With respect to Claim 17, claim 15 is incorporated, DeSalvo2, Woods, Zhang, and DeSalvo1 combined do not teach wherein the high-bandwidth chirp and the low-bandwidth chirp are interleaved to form the multi-chirp FMCW radar signal. Kumar Y.B. teaches a system (fig. 6; ¶24), comprising: a radar-based tracking system configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp (fig. 5, item B2: high-bandwidth; ¶22; ¶24) and a low-bandwidth chirp (fig. 5, item B1: low-bandwidth; ¶22; ¶24); and one or more processors (fig. 6, items 602 and 606), the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output (¶25, “The processing unit 606 may also include functionality to perform post processing of information about the detected objects, such as tracking objects, determining rate and direction of movement, etc”), wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp (fig. 
7; ¶23, “Embodiments of the disclosure provide for using multiple chirp profiles in a single frame, which, with an appropriate combination of profiles, may reduce the time needed to extract object information”), and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system (¶22, “Chirp configuration 2 illustrates a chirp with a higher bandwidth B2 than configuration 1, which provides higher accuracy for detecting closer objects at higher range resolutions” – the threshold distance is the distance for detecting closer objects at higher resolutions, corresponding to bandwidth B2) and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance (¶22, “Chirp configuration 1 illustrates a typical chirp at a bandwidth B1 that may be repeated multiple times to capture distance, velocity and angle of arrival of objects” – outside of the threshold distance is the distance for detecting farther objects, corresponding to bandwidth B1); wherein the high-bandwidth chirp and the low-bandwidth chirp are interleaved to form the multi-chirp FMCW radar signal (figs. 6-7; ¶23, “In the prior art, chirp configurations such as these may be applied in different frames, which may result in losing important information about objects within view of the radar due to delays caused by using multiple frames to extract the needed information. Embodiments of the disclosure provide for using multiple chirp profiles in a single frame, which, with an appropriate combination of profiles, may reduce the time needed to extract object information”). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo2, Woods, Zhang, and DeSalvo1, to utilize the transmitters and receiver (fig. 6, item 604; ¶24) of Kumar Y.B. 
in place of the transmitters, receivers, and transponders of DeSalvo1, resulting in wherein the high-bandwidth chirp and the low-bandwidth chirp are interleaved to form the multi-chirp FMCW radar signal, as taught by Kumar Y.B. so as to reduce the time needed to extract object information (¶23). With respect to Claim 18, claim 15 is incorporated, DeSalvo2, Woods, DeSalvo1, and Kumar Y.B. combined do not teach wherein the instructions are executable by the one or more processors to configure the system to: when the event detection output fails to satisfy the one or more conditions, selectively refrain from activating the image-based tracking system. Zhang teaches a system (fig. 1, item 170 of user device 107 = item 200 of fig. 2; ¶46, “the user device 107 can include a mobile phone, router, tablet computer, laptop computer, tracking device, wearable device (e.g., a smart watch, glasses, an XR device, etc.)”; ¶54), comprising: a radar-based tracking system (fig. 2, items 204, 206, 208, and 210; ¶55-56; ¶62, “TX waveform 216 can include a chirp signal, as used, for example, in a Frequency-Modulated Continuous-Wave (FM-CW) radar system”) configured to emit a high-bandwidth chirp (¶107, “In some cases, the device can implement a high-resolution RF sensing algorithm (e.g., with a high bandwidth, a high number of spatial links, and a high sampling rate as compared to the mid-resolution RF sensing algorithm). The high-resolution RF sensing algorithm can differ from the mid-resolution RF sensing algorithm by having a higher bandwidth, a higher number of spatial links, a higher sampling rate, or any combination thereof”) and a low-bandwidth chirp (¶90, “the device can adjust the level of RF sensing resolution by modifying the number of spatial links (e.g., adjusting number of spatial streams and/or number of receive antennas) as well as the bandwidth and the sampling frequency. 
In some cases, the device can implement a low-resolution RF sensing algorithm (e.g., with a relatively low bandwidth, low number of spatial links, and low sampling rate), which consumes a small amount of power and can operate in the background when the device is in the locked or sleep state”); an image-based tracking system (fig. 1, item 172) comprising an image sensor and a processing module (¶119, “In another example, input image 802 can be obtained by a camera (e.g., input devices 172) of the wireless device. In another example, input image 802 can be obtained by using a LIDAR sensor (e.g., communications interface 1240) of the wireless device” – LiDAR sensor comprises a processing module); one or more processors (fig. 1, item 184; ¶47); and computer-readable recording media (fig. 1, item 186; ¶52-53) that store instructions that are executable by the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output and when the event detection output satisfies one or more conditions (¶90, “the device can perform motion detection by configuring an RF interface to utilize a single spatial link to transmit a signal having a bandwidth of approximately 20 MHz and by utilizing a sampling rate that can be in the range of 100 ms to 500 ms”), selectively activate the image-based tracking system (¶92, “If motion is detected at block 406, the process 400 can proceed to block 410 and initiate facial authentication. 
In some examples, facial recognition can be performed by using an RF interface that is capable of transmitting extremely high frequency (EHF) signals or mmWave technology”; ¶94, “the device may use an infrared (IR) light source, dot projector, or other light source to illuminate a user's face and an IR camera or other image capture device to perform image capture”) to enable acquisition of image-based tracking data to facilitate positional tracking of an object (¶121, “several images can be captured of the owner or user with different poses, positions, facial expressions, lighting conditions, and/or other characteristics”; ¶123-125); wherein the instructions are executable by the one or more processors to configure the system to: when the event detection output fails to satisfy the one or more conditions, selectively refrain from activating the image-based tracking system (¶91). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of DeSalvo2 and Woods, wherein the instructions are executable by the one or more processors to configure the system to: when the event detection output fails to satisfy the one or more conditions, selectively refrain from activating the image-based tracking system, as taught by Zhang so as to reduce resource usage (¶35). With respect to Claim 20, DeSalvo2 teaches a head-mounted display (fig. 5, item 500; fig. 6, item 600; fig. 12 item 1200; column 5, lines 31-44), comprising: a plurality of radar-based tracking systems (fig. 6, item 650 and 660 = items 510(A) and 510(B) in fig. 5; column 4, lines 16-27 and lines 28-48), wherein at least one of the plurality of radar-based tracking systems is configured to emit a frequency modulated continuous wave (FMCW) radar signal (fig. 6, item 650 and 660 =items 510(A) and 510(B) in fig. 
5; column 2, lines 28-36); a simultaneous localization and mapping system (column 24, lines 20-47; SLAM location identifying techniques); an image-based tracking system (column 23, lines 27-32; “augmented-reality system 1200 and/or virtual-reality system 1300 may include one or more optical sensors, such as two-dimensional (2D) or 3D cameras”); one or more processors (column 12, lines 47-55, “In addition, wearable device 600 may include a processing device that directs, controls, and/or receives input from one or more radar devices secured to wearable device 600”); and one or more computer-readable recording media (column 21, lines 38-42; “These other devices may provide certain functions (e.g., tracking, localizing, depth mapping, processing, storage, etc.) to augmented-reality system 1200”); the head-mounted display configured to: obtain, via the radar-based tracking system, radar-based measurement data (column 12, lines 47-55, “In addition, wearable device 600 may include a processing device that directs, controls, and/or receives input from one or more radar devices secured to wearable device 600”); obtain, via the simultaneous localization and mapping system, system pose data (column 24, lines 29-35); utilize the radar-based measurement data as input to an event detection module to generate event detection output (column 3, lines 43-47 and lines 60-67). DeSalvo2 teaches many different types of sensors are used to create a map and determine a user’s position within the map (column 24, lines 26-28) and may implement radios including WiFi, Bluetooth, global positioning system (GPS), cellular or other communication devices may be also used to determine a user's location relative to a radio transceiver or group of transceivers (column 24, lines 30-35), however does not mention that wherein the system pose data comprises 6-degree-of-freedom pose data for the head-mounted display. Woods teaches a system (fig. 8; ¶132), comprising: a radar emitter or detector (fig. 
8, item 108); a simultaneous localization and mapping system; an inertial tracking system (fig. 8, item 102); an image-based tracking system (fig. 8, item 106: LiDAR emitter or detector, item 124: camera); a processor (fig. 8, item 128) configured to: obtain, via the inertial tracking system, system pose data (¶127, “Several other changes may be made when using the electromagnetic tracking system for AR devices. Although this pose reporting rate is rather good, AR systems may require an even more efficient pose reporting rate. To this end, IMU-based pose tracking may be used in the sensors”); obtain, via the simultaneous localization and mapping system, system pose data (¶195), wherein the system pose data comprises 6-degree-of-freedom pose data for the head-mounted display (¶178-179). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of DeSalvo2, such that the electromagnetic tracking system, which is a communication device, and the inertial tracking system are also implemented in the pose tracking system of DeSalvo2, as taught by Woods, along with other sensors, resulting in wherein the system pose data comprises 6-degree-of-freedom pose data for the head-mounted display, to improve efficiency, performance, and precision (¶123). DeSalvo2 and Woods combined do not explicitly mention that the one or more computer-readable recording media stores instructions that are executable by the one or more processors, nor do DeSalvo2 and Woods teach and when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of an object. Zhang teaches a system (fig. 1, item 170 of user device 107 = item 200 of fig.
2; ¶46, “the user device 107 can include a mobile phone, router, tablet computer, laptop computer, tracking device, wearable device (e.g., a smart watch, glasses, an XR device, etc.)”; ¶54), comprising: a radar-based tracking system (fig. 2, items 204, 206, 208, and 210; ¶55-56; ¶62, “TX waveform 216 can include a chirp signal, as used, for example, in a Frequency-Modulated Continuous-Wave (FM-CW) radar system”) configured to emit a high-bandwidth chirp (¶107, “In some cases, the device can implement a high-resolution RF sensing algorithm (e.g., with a high bandwidth, a high number of spatial links, and a high sampling rate as compared to the mid-resolution RF sensing algorithm). The high-resolution RF sensing algorithm can differ from the mid-resolution RF sensing algorithm by having a higher bandwidth, a higher number of spatial links, a higher sampling rate, or any combination thereof”) and a low-bandwidth chirp (¶90, “the device can adjust the level of RF sensing resolution by modifying the number of spatial links (e.g., adjusting number of spatial streams and/or number of receive antennas) as well as the bandwidth and the sampling frequency. In some cases, the device can implement a low-resolution RF sensing algorithm (e.g., with a relatively low bandwidth, low number of spatial links, and low sampling rate), which consumes a small amount of power and can operate in the background when the device is in the locked or sleep state”); an image-based tracking system (fig. 1, item 172) comprising an image sensor and a processing module (¶119, “In another example, input image 802 can be obtained by a camera (e.g., input devices 172) of the wireless device. In another example, input image 802 can be obtained by using a LIDAR sensor (e.g., communications interface 1240) of the wireless device” – LiDAR sensor comprises a processing module); one or more processors (fig. 1, item 184; ¶47); and a computer-readable recording media (fig.
1, item 186; ¶52-53) that store instructions that are executable by the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output and when the event detection output satisfies one or more conditions (¶90, “the device can perform motion detection by configuring an RF interface to utilize a single spatial link to transmit a signal having a bandwidth of approximately 20 MHz and by utilizing a sampling rate that can be in the range of 100 ms to 500 ms”), selectively activate the image-based tracking system (¶92, “If motion is detected at block 406, the process 400 can proceed to block 410 and initiate facial authentication. In some examples, facial recognition can be performed by using an RF interface that is capable of transmitting extremely high frequency (EHF) signals or mmWave technology”; ¶94, “the device may use an infrared (IR) light source, dot projector, or other light source to illuminate a user's face and an IR camera or other image capture device to perform image capture”) to enable acquisition of image-based tracking data to facilitate positional tracking of an object (¶121, “several images can be captured of the owner or user with different poses, positions, facial expressions, lighting conditions, and/or other characteristics”; ¶123-125).
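The activation gating mapped from Zhang above — a low-power radar "event detection output" that conditionally wakes a power-hungry image-based tracker — can be illustrated with a short sketch. This is a hypothetical illustration of the claimed behavior only; the names (`event_detected`, `motion_energy`, `ImageTracker`) are invented, not drawn from Zhang or the claims:

```python
# Hypothetical sketch: radar-derived event detection output gates the
# image-based tracking system. All names and thresholds are illustrative.

def event_detected(radar_measurement: dict, threshold: float = 0.5) -> bool:
    """Toy condition: the measurement counts as an event when its
    motion-energy score exceeds a threshold."""
    return radar_measurement.get("motion_energy", 0.0) > threshold

class ImageTracker:
    """Stand-in for the image-based tracking system; tracks on/off state."""
    def __init__(self) -> None:
        self.active = False
    def activate(self) -> None:
        self.active = True
    def deactivate(self) -> None:
        self.active = False

def update(tracker: ImageTracker, radar_measurement: dict) -> None:
    # When the event detection output satisfies the condition, selectively
    # activate the image-based tracker; when it fails to satisfy the
    # condition, selectively refrain from (or cease) activating it.
    if event_detected(radar_measurement):
        tracker.activate()
    else:
        tracker.deactivate()

tracker = ImageTracker()
update(tracker, {"motion_energy": 0.9})
print(tracker.active)  # True: condition satisfied, camera tracking enabled
update(tracker, {"motion_energy": 0.1})
print(tracker.active)  # False: condition failed, camera stays off
```

The sketch also covers the Claim 18 limitation: when no event is detected, the tracker is simply left (or returned to) the inactive state rather than activated.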
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of DeSalvo2 and Woods, such that the one or more computer-readable recording media stores instructions that are executable by the one or more processors and when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of an object, as taught by Zhang so as to reduce resource usage (¶35).
DeSalvo2, Woods, and Zhang combined do not mention each of the plurality of radar-based tracking systems being associated with a respective detection region, wherein at least one of the plurality of radar-based tracking systems is configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp and a low-bandwidth chirp; wherein the radar-based measurement data comprises or is based on a reflected multi-chirp FMCW radar signal comprising a reflected high-bandwidth chirp and a reflected low-bandwidth chirp. DeSalvo1 teaches a head-mounted display (fig. 11, item 1100 or fig. 12, item 1200; column 25, lines 15-26), comprising: a plurality of radar-based tracking systems (column 2, lines 39-52; column 4, lines 15-17), each of the plurality of radar-based tracking systems being associated with a respective detection region (column 18, lines 22-28, low-bandwidth is associated with a detection region that is farther away, high-bandwidth is associated with a detection region that is closer), wherein at least one of the plurality of radar-based tracking systems is configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp and a low-bandwidth chirp (fig.
6, items 601(1), 601(3), 601(n): high bandwidth; items 601(2) and 601(4): low bandwidth; column 17, lines 41-51; column 18, lines 23-28); an image-based tracking system (column 28, line 66 to column 29, line 11); one or more processors (column 3, lines 61-66); and one or more computer-readable recording media (column 17, lines 30-37) that store instructions that are executable by the one or more processors to configure the system to: obtain, via the radar-based tracking system, radar-based measurement data, wherein the radar-based measurement data comprises or is based on a reflected multi-chirp FMCW radar signal comprising a reflected high-bandwidth chirp and a reflected low-bandwidth chirp (column 2, lines 39-62; column 16, lines 55-62; column 19, lines 27-38). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined head-mounted display of DeSalvo2, Woods, and Zhang, such that each of the plurality of radar-based tracking systems being associated with a respective detection region, wherein at least one of the plurality of radar-based tracking systems is configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp and a low-bandwidth chirp; wherein the radar-based measurement data comprises or is based on a reflected multi-chirp FMCW radar signal comprising a reflected high-bandwidth chirp and a reflected low-bandwidth chirp, as taught by DeSalvo1 so as to facilitate efficiently, precisely, and/or quickly tracking the movement of wearable artificial reality devices (column 4, lines 1-3). 
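The bandwidth-to-resolution tradeoff underlying this pairing of chirps — a high-bandwidth chirp for fine-grained near-field detection and a low-bandwidth chirp for coarser, longer-range detection — follows from the standard FMCW relation ΔR = c/(2B). A minimal Python sketch (bandwidth values and the interleaving pattern are hypothetical illustrations, not taken from DeSalvo1, Kumar Y.B., or the application):

```python
# Range resolution of an FMCW chirp: delta_R = c / (2 * B).
# Wider bandwidth -> finer range resolution, which is why a high-bandwidth
# chirp suits close-in detection regions and a low-bandwidth chirp suits
# farther ones. Bandwidth values below are illustrative only.

C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Smallest range separation resolvable by a chirp of this bandwidth."""
    return C / (2.0 * bandwidth_hz)

high_bw = 4.0e9   # 4 GHz "high-bandwidth" chirp (hypothetical)
low_bw = 250.0e6  # 250 MHz "low-bandwidth" chirp (hypothetical)

print(range_resolution(high_bw))  # 0.0375 m -> fine-grained, near-field
print(range_resolution(low_bw))   # 0.6 m    -> coarse, longer-range

# Interleaving the two chirp profiles in a single frame, in the manner the
# rejection characterizes Kumar Y.B.'s multi-profile frame:
frame = ["high" if i % 2 == 0 else "low" for i in range(8)]
print(frame)  # alternating ['high', 'low', 'high', 'low', ...]
```

This is why, as mapped above, high-bandwidth event detection output is associated with events inside a threshold distance and low-bandwidth output with events beyond it.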
DeSalvo2, Woods, Zhang, and DeSalvo1 combined do not explicitly mention wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp, and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance. Kumar Y.B. teaches a system (fig. 6; ¶24), comprising: a radar-based tracking system configured to emit a multi-chirp frequency modulated continuous wave (FMCW) radar signal comprising a high-bandwidth chirp (fig. 5, item B2: high-bandwidth; ¶22; ¶24) and a low-bandwidth chirp (fig. 5, item B1: low-bandwidth; ¶22; ¶24); and one or more processors (fig. 6, items 602 and 606), the one or more processors to configure the system to: utilize the radar-based measurement data to generate event detection output (¶25, “The processing unit 606 may also include functionality to perform post processing of information about the detected objects, such as tracking objects, determining rate and direction of movement, etc”), wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp (fig. 
7; ¶23, “Embodiments of the disclosure provide for using multiple chirp profiles in a single frame, which, with an appropriate combination of profiles, may reduce the time needed to extract object information”), and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system (¶22, “Chirp configuration 2 illustrates a chirp with a higher bandwidth B2 than configuration 1, which provides higher accuracy for detecting closer objects at higher range resolutions” – a threshold distance is the distance for detecting closer objects at higher resolutions corresponding to bandwidth B2) and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance (¶22, “Chirp configuration 1 illustrates a typical chirp at a bandwidth B1 that may be repeated multiple times to capture distance, velocity and angle of arrival of objects” – outside of the threshold distance is the distance for detecting farther objects corresponding to bandwidth B1). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined head-mounted display of DeSalvo2, Woods, Zhang, and DeSalvo1, to utilize the transmitters and receiver (fig. 6, item 604; ¶24) of Kumar Y.B. in place of the transmitters, receivers, and responders of DeSalvo1, resulting in wherein the event detection output comprises high-bandwidth event detection output generated based on the reflected high-bandwidth chirp and low-bandwidth event detection output generated based on the reflected low-bandwidth chirp, and wherein the high-bandwidth event detection output is associated with event occurrence within a threshold distance to the radar-based tracking system and the low-bandwidth event detection output is associated with event occurrence outside of the threshold distance, as taught by Kumar Y.B.
so as to prevent losing important information about objects within view of the radar (¶23).
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over DeSalvo2, Woods, Zhang, DeSalvo1, and Kumar Y.B. as applied to claim 15 above, and further in view of Amini. With respect to Claim 19, claim 15 is incorporated. DeSalvo2, Woods, Zhang, DeSalvo1, and Kumar Y.B. combined do not teach wherein the instructions are executable by the one or more processors to configure the system to: after selectively activating the image-based tracking system, and when the event detection output fails to satisfy the one or more conditions, selectively deactivate the image-based tracking system. Amini teaches a system (fig. 1), comprising: a radar-based tracking system (fig. 1, item 140; ¶43); an image-based tracking system (fig. 1, item 105; ¶42-43); one or more processors (figs. 1 and 6, item 130; fig. 6, item 605; ¶75); and one or more computer-readable recording media (fig. 6, item 610; ¶75) that stores instructions that are executable by the one or more processors to configure the system to: obtain, via the radar-based tracking system, radar-based measurement data; utilize the radar-based measurement data as input to an event detection module to generate event detection output (¶47-48); and when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of an object (¶47; ¶48, “In this example, if radar sensor 140 (or another type of supplemental sensor) detects motion, then this can be prioritized by base station 130 and used by base station 130 to provide data to camera 105 to begin recording”); wherein the instructions are executable by the one or more processors to configure the system to: after selectively activating the image-based tracking system (¶47, “However, if radar sensor 140 does not detect motion, but the IR sensor of camera 105 does detect
motion, then base station 130 might not provide video 125 or motion notification 150 to cloud server 155 because this can be an indication of a false positive regarding the motion that was determined by the IR sensor to be occurring”), and when the event detection output fails to satisfy the one or more conditions, selectively deactivate the image-based tracking system (¶47-48, when either sensor fails to detect motion, “radar sensor 140 can detect motion within field of vision 110, but the IR sensor of camera 105 might not detect motion and, therefore, video might not be recorded using the image sensor of camera 105”). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo2, Woods, Zhang, DeSalvo1, and Kumar Y.B., wherein the instructions are executable by the one or more processors to configure the system to: after selectively activating the image-based tracking system, and when the event detection output fails to satisfy the one or more conditions, selectively deactivate the image-based tracking system, as taught by Amini so as to reduce resource usage (¶38).
Claims 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over DeSalvo2, Woods, Zhang, DeSalvo1, and Kumar Y.B. as applied to claim 15 above, and further in view of Rimini et al. (Pub. No.: US 2021/0055385 A1) hereinafter referred to as Rimini. With respect to Claim 22, claim 15 is incorporated. DeSalvo2 teaches wherein the radar-based measurement data comprises first object pose data for a first object within the threshold distance to the radar-based tracking system and second object pose data for a second object outside of the threshold distance (column 3, lines 5-10, “The disclosed radar systems may determine the range of a variety of types of targets.
In one example, a radar system may determine the range of passive targets (e.g., targets that simply reflect signals and do not actively transmit signals). Examples of passive targets may include a body part of a user”; column 4, lines 8-15, “these radar systems may be utilized in applications involving the control of an apparatus (such as an electronic device, a data input mechanism, a piece of machinery, a vehicle, etc.) using one or more body parts or gestures”; column 18, lines 1-28). DeSalvo2, Woods, Zhang, DeSalvo1, and Kumar Y.B. combined do not mention that the first object data comprised in the radar-based measurement data is first object pose data or that the second object data is second object pose data. Rimini teaches a system (fig. 1; ¶31), comprising: a radar-based tracking system (¶31); one or more processors (fig. 1 and 3, item 128; ¶39-41); and one or more computer-readable recording media (fig. 1, item 110; ¶35) that stores instructions that are executable by the one or more processors to configure the system to: obtain, via the radar-based tracking system, radar-based measurement data; utilize the radar-based measurement data as input to generate object detection output (¶28-29); wherein the radar-based measurement data comprises object data that is object pose data (¶73, “a classifier algorithm may be established by building a large dataset (e.g., training data) including many human body parts (e.g., hands in different poses, arms, faces, etc.) as well as many non-human objects commonly encountered by electronic devices”).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo2, Woods, Zhang, DeSalvo1, and Kumar Y.B., such that the radar-based measurement data comprises object data that is object pose data resulting in wherein the radar-based measurement data comprises first object pose data for a first object within the threshold distance to the radar-based tracking system and second object pose data for a second object outside of the threshold distance, as taught by Rimini so as to establish a reliable separation between sets of target objects into distinct categories (¶73).
With respect to Claim 23, claim 22 is incorporated. DeSalvo2, Woods, Zhang, DeSalvo1, and Kumar Y.B. combined do not teach wherein the first object pose data is associated with one or more shoulders, elbows, or hands of a user, and wherein the second object pose data is associated with one or more legs or feet of the user. Rimini teaches a system (fig. 1; ¶31), comprising: a radar-based tracking system (¶31); one or more processors (fig. 1 and 3, item 128; ¶39-41); and one or more computer-readable recording media (fig. 1, item 110; ¶35) that stores instructions that are executable by the one or more processors to configure the system to: obtain, via the radar-based tracking system, radar-based measurement data; utilize the radar-based measurement data as input to generate object detection output (¶28-29); wherein the radar-based measurement data comprises object data that is object pose data (¶73, “a classifier algorithm may be established by building a large dataset (e.g., training data) including many human body parts (e.g., hands in different poses, arms, faces, etc.) as well as many non-human objects commonly encountered by electronic devices”); wherein the first object pose data is associated with one or more arms, faces, or hands of a user (¶73).
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo2, Woods, Zhang, DeSalvo1, and Kumar Y.B., wherein the first object pose data is associated with hands, as taught by Rimini so as to establish a reliable separation between sets of target objects into distinct categories (¶73). Although Rimini does not mention and wherein the second object pose data is associated with one or more legs or feet of the user, DeSalvo2 teaches passive targets may include a body part of a user (column 3, lines 6-10, “a radar system may determine the range of passive targets (e.g., targets that simply reflect signals and do not actively transmit signals). Examples of passive targets may include a body part of a user, a wall, and/or a piece of furniture”), the body part comprising: a finger, an arm, a head, a torso, a foot, or a leg (column 25, lines 5-7). Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo2, Woods, Zhang, DeSalvo1, and Kumar Y.B., wherein the second object pose data is associated with one or more legs or feet of the user, as taught by Rimini so as to establish a reliable separation between sets of target objects into distinct categories (¶73).
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over DeSalvo1, Zhang, and Kumar Y.B., as applied to claim 1 above, and further in view of Zhang et al. (Pub. No.: US 2023/0090516 A1) hereinafter referred to as Zhang2. With respect to Claim 24, claim 1 is incorporated. DeSalvo1, Zhang, and Kumar Y.B.
combined do not mention further comprising a second image-based tracking system, wherein the event detection output satisfying the one or more conditions comprises the high-bandwidth event detection output, and wherein the instructions are executable by the one or more processors to configure the system to: when the low-bandwidth event detection output satisfies a second set of one or more conditions, selectively activate the second image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of a second object. Zhang2 teaches a system (fig. 2, item 204: user equipment), comprising: a radar-based tracking system; the system configured to obtain, via the radar-based tracking system, radar-based measurement data (¶5-8); utilize the radar-based measurement data to generate event detection output (¶95, “cameras could be used as an alternative to the medium and/or high-resolution Wi-Fi radar. Camera operations may be triggered after object detection by low-resolution Wi-Fi radar to save power”); and when the event detection output satisfies one or more conditions, selectively activate the image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of an object (¶95, “cameras could be used as an alternative to the medium and/or high-resolution Wi-Fi radar. 
Camera operations may be triggered after object detection by low-resolution Wi-Fi radar to save power”); further comprising a second image-based tracking system (¶95, “When there are cameras on both sides of the UE (as for a typical smartphone)”), wherein the event detection output satisfying the one or more conditions comprises the high-bandwidth event detection output (¶95; ¶98), and wherein the instructions are executable by the one or more processors to configure the system to: when the low-bandwidth event detection output satisfies a second set of one or more conditions (¶96, “[t]he system would start up with low-power, low-resolution Wi-Fi radar operation that runs in the background and senses the environment. Low-resolution Wi-Fi radar transmits Wi-Fi radar signals at a low bandwidth (e.g., below some bandwidth threshold) with wide (low resolution) beams (e.g., a beamwidth wider than some beamwidth threshold) and at large intervals (e.g., above some interval threshold) to provide coarse detection of objects in proximity to the Wi-Fi transceiver”), selectively activate the second image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of a second object (¶95, “Camera operations may be triggered after object detection by low-resolution Wi-Fi radar to save power”).
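The two-tier triggering that Zhang2 is cited for — high-bandwidth event detection output gating a first image-based tracker and low-bandwidth output gating a second (e.g., cameras on opposite sides of the device) — can be sketched as a small dispatcher. The names and return structure are hypothetical illustrations, not from Zhang2 or the claims:

```python
# Hypothetical dispatcher for the Claim 24 scenario: each chirp's event
# detection output independently gates its own image-based tracker.
# "tracker_1"/"tracker_2" are invented labels for illustration only.

def select_trackers(high_bw_event: bool, low_bw_event: bool) -> list:
    """Return which image-based trackers to activate for this frame."""
    active = []
    if high_bw_event:
        active.append("tracker_1")  # near-field detection -> first camera
    if low_bw_event:
        active.append("tracker_2")  # far-field detection -> second camera
    return active

print(select_trackers(True, False))   # ['tracker_1']
print(select_trackers(False, True))   # ['tracker_2']
print(select_trackers(False, False))  # [] -> both trackers stay off
```

Each condition is evaluated separately, so the two trackers can be active together, individually, or not at all, which mirrors the "second set of one or more conditions" language in the claim.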
Therefore it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the combined system of DeSalvo1, Zhang, and Kumar Y.B., to further comprise a second image-based tracking system, wherein the event detection output satisfying the one or more conditions comprises the high-bandwidth event detection output, and wherein the instructions are executable by the one or more processors to configure the system to: when the low-bandwidth event detection output satisfies a second set of one or more conditions, selectively activate the second image-based tracking system to enable acquisition of image-based tracking data to facilitate positional tracking of a second object, as taught by Zhang2 so as to reduce power consumption (¶8).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONNA V Bocar whose telephone number is (571)272-0955. The examiner can normally be reached Monday - Friday 8:30am to 5pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr A Awad, can be reached at (571)272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DONNA V Bocar/ Examiner, Art Unit 2621

Prosecution Timeline

Apr 22, 2024 — Application Filed
Apr 24, 2025 — Non-Final Rejection — §103
Jun 12, 2025 — Interview Requested
Jul 08, 2025 — Examiner Interview Summary
Jul 08, 2025 — Applicant Interview (Telephonic)
Jul 10, 2025 — Response Filed
Sep 10, 2025 — Final Rejection — §103
Oct 09, 2025 — Interview Requested
Oct 21, 2025 — Examiner Interview Summary
Oct 21, 2025 — Applicant Interview (Telephonic)
Oct 27, 2025 — Request for Continued Examination
Feb 27, 2026 — Response after Non-Final Action
Mar 13, 2026 — Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591297 — MULTIMODAL TASK EXECUTION AND TEXT EDITING FOR A WEARABLE SYSTEM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12536977 — BRIGHTNESS CONTROL METHOD AND APPARATUS FOR DISPLAY PANEL
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12475825 — DISPLAY SUBSTRATE INCLUDING SHIFT REGISTER AND DISPLAY DEVICE
Granted Nov 18, 2025 (2y 5m to grant)
Patent 12451088 — LIQUID CRYSTAL DISPLAY DEVICE AND CONTROL MODULE THEREOF, AND INTEGRATED BOARD
Granted Oct 21, 2025 (2y 5m to grant)
Patent 12451091 — TEMPERATURE CONTROL CIRCUIT AND TEMPERATURE CONTROL METHOD OF DRIVER CHIP AND TIMING CONTROL DRIVER BOARD
Granted Oct 21, 2025 (2y 5m to grant)
Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 77% (+19.4%)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 367 resolved cases by this examiner. Grant probability derived from career allow rate.
