Prosecution Insights
Last updated: April 19, 2026
Application No. 18/902,651

ULTRA-WIDEBAND-BASED FALL DETECTION

Non-Final OA §103
Filed: Sep 30, 2024
Examiner: KHAN, OMER S
Art Unit: 2686
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 55% (Moderate)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 55% (325 granted / 595 resolved; -7.4% vs TC avg)
Interview Lift: +40.1% for resolved cases with interview (strong)
Avg Prosecution: 3y 0m (typical timeline; 27 currently pending)
Total Applications: 622 across all art units

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§102: 4.8% (-35.2% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 595 resolved cases.
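As a quick sanity check, the headline career allow rate above follows directly from the raw counts shown (a minimal sketch; the implied Tech Center average assumes the quoted -7.4% delta is a simple additive offset, which the page does not state explicitly):

```python
# Career allow rate from the examiner's raw counts quoted above.
granted = 325
resolved = 595

allow_rate_pct = 100 * granted / resolved
print(f"Career allow rate: {allow_rate_pct:.1f}%")  # ~54.6%, displayed as 55%

# The page reports this rate as -7.4% vs the TC 2600 average, which
# implies (assumption: additive delta) an average of roughly:
tc_avg_pct = allow_rate_pct + 7.4
print(f"Implied TC 2600 average: {tc_avg_pct:.1f}%")  # ~62.0%
```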

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant's Representative, Mr. Lewis Schiel, was contacted on 12/10/2025. Examiner proposed to allow claims 2, 9, and 16 if incorporated into the respective independent claims, and provided the cited prior art. Representative contacted the Examiner on 12/12/2025, and requested an Office action.

Allowable Subject Matter

Claims 2, 9, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3-4, 8, 10-11, 15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Dvash et al. (US 2022/0283292 A1) and further in view of Zack et al. (US 2018/0330593 A1).
Consider claim 1. Dvash teaches an electronic device (100) comprising: a transceiver (104) configured to transmit and receive ultrawide band (UWB) radar signals; Dvash teaches, "fall detection system may include a radar unit comprising at least one transmitter antenna connected to an oscillator and configured to transmit electromagnetic waves into a monitored region" See ¶ 0006;

and a processor (126) operatively coupled to the transceiver (104); Dvash teaches, "fall detection system 100 includes a radar unit 104, a processor unit 126 and a communication module 134… radar unit 104 includes an array of transmitters 106 and receivers 110." See ¶ 0058-0059, and Fig. 1.

the processor configured to: detect a human ("subject person" 102, see ¶ 0052, Fig. 1) within a detection area (105) of the transceiver (104) based on the received UWB radar signals; Dvash teaches, "a radar based passive gait speed monitor 127 for use in the subject monitoring station which is schematically represented. The gait speed monitor 127 may be operable to generate a value for the gait speed of a subject passing along an extended target zone 105" See ¶ 0062; Dvash teaches, "removing static objects from the frames of raw data 1314; transferring filtered data to the tracker module 1316, identifying moving targets in filtered data 1318" See ¶ 0077;

perform a motion detection operation based on the detection of the human within the detection area of the transceiver; Dvash teaches, "a tracker module configured to receive the filtered data from the data filter and operable to process the filtered data to identify moving targets and to track the location of the moving targets over time thereby generating target data" See ¶ 0007; Dvash teaches, "[t]he radar scanning arrangement 104 is configured to monitor the movement of a subject 102 over an extended range. The extended range 105 is of dimensions suitable for the measurement of speed of sustained gait along a path of say 4-8 meters. Thus, by way of example, it may be preferred to locate a scanning arrangement to cover movement in a target zone of say 5-6 meters squared." See ¶ 0063; Dvash teaches, "transferring target data to the fall identification module 1320; tracking the moving targets over time; assigning posture to the targets 1322; storing a posture history in a memory unit 1324;" See ¶ 0077.

perform, based on a result of the motion detection operation, a fall detection operation; Dvash teaches, "applying fall detection rules 1326; and generating a fall alert 1330 if a fall is detected 1328." See ¶ 0077; Dvash teaches, "[i]n case a fall is detected in the target region 102 based on the fall detection rules, at step 302, data corresponding to target region 102 is recorded by the receiver 110 of the radar unit 104. For each target segment of the target area 102, a current energy profile is generated by the profile generator 114 and sent to the processing unit 126 by the output unit 118 at step 304. At step 306, the current energy profile is compared with the recorded time dependent energy profile 124 stored in the database 120. Based on the comparison, it is determined if an anomaly is detected in the fall detection at step 308. In case no anomaly is detected in the fall detection, an alert is generated and provided to the intended recipients through various means at step 310. In case an anomaly is detected in the fall detection, the fall alert if filtered out and process repeats from step 304. The process completes at step 312." See ¶ 0102.

Dvash does not explicitly state receiving ultrawide band (UWB) radar signals; nonetheless, in an analogous art, Zack teaches, "a unique sensing system and a breakthrough for the supervision of the elderly during their stay in the house, in general, and detect falls, in particular. The system may include: a UWB-RF Interferometer, Vector Quantization based Human states classifier, Cognitive situation analysis, communication unit and processing unit." See ¶ 0007; Zack teaches, "(i) transmitting, via at least one transmitting antenna, ultra-wide band (UWB) radio frequency (RF) signals at an environment including at least one human, and receiving, via at least one receiving antenna, echo signals from the environment, (ii) processing the received echo signals to yield a range-bin-based slow signal that is spatially characterized over a plurality of spatial range bins, (iii) estimating at least one respiration feature of the at least one human by analyzing the slow signal, and classifying the respiration feature(s) to indicate respiration mode(s) of the human(s)." See ¶ 0008; Zack teaches, "the system may automatically detect and alert emergency situation that might be encountered by elders while being at home and identify the emergency situations. According to some embodiments of the present invention, the system may detect falls of elderly people" See ¶ 0010-0011; Zack teaches, "unique characteristics (e.g., body features, stature, gait etc.) and the home environment, and uses a human state classifier to determine the instantaneous human state" See ¶ 0077.

It would have been obvious to one of ordinary skill in the art at the time of invention (effective filing date for AIA application) to modify the invention of Dvash to use ultra-wide band (UWB) radio frequency (RF) signals at an environment and to use respiration to identify a human, in addition to unique characteristics (e.g., body features, stature, gait etc.) and the home environment, and to use a human state classifier to determine the instantaneous human state, thereby accurately identifying a human fall and preventing false alarms.
Consider claim 3, the electronic device of claim 1, wherein: the transceiver (104) comprises at least two receive (RX) antennas (110) configured to receive the UWB radar signals; Dvash teaches, "array of receiver antennas 110 configured and operable to receive electromagnetic waves reflected by objects 102 within the monitored region 105." See ¶ 0059; Zack teaches, "ultra-wide band (UWB) radio frequency (RF) signals… UWB Rx chain to receive the echo signals from the antenna" See ¶ 0008 and 0087;

and to perform the motion detection operation, the processor is further configured to: determine an azimuth difference in the UWB radar signals received by the at least two RX antennas; determine an angle of arrival based on the determined azimuth difference; Dvash teaches, "pre-processor unit 112 also includes a segment selector 116 configured to select a target segment of interest in the monitored region 102 by selecting radiations received within a given azimuth range (of the angles measured along the horizontal) at a given depth range measured by the time taken by reflections to arrive at the receiving antennas 110." See ¶ 0096;

and estimate a location of the human based on the angle of arrival and a range bin ("which is the distance (or holds or represents values of distance)" Specification ¶ 0079); Dvash teaches, "[t]he profile generator 114 also generates a current energy profile for each target segment of the monitored region 102 selected by the segment selector 116. An output unit 118 sends the standard energy profiles 122 and time dependent energy profiles 124 to the database 120 and the current energy profile of each target segment to the processing unit 126 for anomaly detection and filtering alerts. The output unit 118 is also configured to send the raw data received by the receiver 110 to the processing unit 126." See ¶ 0096; Dvash teaches, "Raw data is generated by the radar module 104 which typically includes amplitude values for energy reflected at specific angles and ranges." See ¶ 0066; Dvash teaches, "data filter 123 receives the raw data 12 directly from the radar module 104" See ¶ 0067; "filtered image data 16 may be transferred to a tracker module 125 operable to process the filtered image data 16 in order to identify moving targets with the data and to track the location of the identified moving targets" See ¶ 0069; "tracking data which may be used to calculate predicted locations 22 for each target" See ¶ 0072.

Consider claim 4, the electronic device of claim 1, wherein: the fall detection operation is performed based on the result of the motion detection operation being a detection of a motion; Dvash teaches, "identifying moving targets in filtered data 1318; transferring target data to the fall identification module 1320; tracking the moving targets over time; assigning posture to the targets 1322; storing a posture history in a memory unit 1324; applying fall detection rules 1326; and generating a fall alert 1330 if a fall is detected 1328." See ¶ 0077; Zack teaches, "FIG. 4N clearly illustrates the ability of the analysis described above to separate motions that are categorized, in the non-limiting illustrated case, as fall motions and as regular motions." See ¶ 0113;

and to perform the fall detection operation, the processor is further configured to: determine whether the motion meets a fall threshold; and when the motion meets the fall threshold, determine that a fall has occurred; Dvash teaches, "[a] threshold T is defined such that if Mi<T there is no anomaly in the fall detection." See ¶ 0104.

Consider claim 8, a method of operating an electronic device, the method comprising: transmitting and receiving ultrawide band (UWB) radar signals; detecting, based on the received UWB radar signals, a human within a detection area of the electronic device; performing, based on the detection of the human within the detection area of the electronic device, a motion detection operation; and performing, based on a result of the motion detection operation, a fall detection operation. See rejection of claim 1.

Consider claim 10, the method of claim 8, wherein: the electronic device comprises at least two receive (RX) antennas configured to receive the UWB radar signals; and to perform the motion detection operation, the method further comprises: determining an azimuth difference in the UWB radar signals received by the at least two RX antennas; determining an angle of arrival based on the determined azimuth difference; and estimating a location of the human based on the angle of arrival and a range bin. See rejection of claim 3.

Consider claim 11, the method of claim 8, wherein: the fall detection operation is performed based on the result of the motion detection operation being a detection of a motion; and to perform the fall detection operation, the method further comprises: determining whether the motion meets a fall threshold; and when the motion meets the fall threshold, determining that a fall has occurred. See rejection of claim 4.
Consider claim 15, a non-transitory computer readable medium embodying a computer program, the computer program comprising program code that, when executed by a processor of a device, causes the device to: transmit and receive ultrawide band (UWB) radar signals; detect, based on the received UWB radar signals, a human within a detection area of the device; perform, based on the detection of the human within the detection area of the device, a motion detection operation; and perform, based on a result of the motion detection operation, a fall detection operation. See rejection of claim 1.

Consider claim 17, the non-transitory computer readable medium of claim 15, wherein: the device comprises at least two receive (RX) antennas configured to receive the UWB radar signals; and to perform the motion detection operation, the program code, when executed by the processor of the device, further causes the device to: determine an azimuth difference in the UWB radar signals received by the at least two RX antennas; determine an angle of arrival based on the determined azimuth difference; and estimate a location of the human based on the angle of arrival and a range bin. See rejection of claim 3.

Consider claim 18, the non-transitory computer readable medium of claim 15, wherein: the fall detection operation is performed based on the result of the motion detection operation being a detection of a motion; and to perform the fall detection operation, the program code, when executed by the processor of the device, further causes the device to: determine whether the motion meets a fall threshold; and when the motion meets the fall threshold, determine that a fall has occurred. See rejection of claim 4.

Claim(s) 5-7, 12-14 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Dvash et al. (US 2022/0283292 A1) and further in view of Zack et al. (US 2018/0330593 A1), and further in view of Tan et al. (US 2020/0342735 A1).
Consider claim 5, the electronic device of claim 4, wherein to determine whether the motion meets the fall threshold, the processor is further configured to determine whether at least one of: a change in an elevation angle of the human has exceeded an elevation change threshold; in an analogous art, Tan teaches, "determining whether a user has fallen using a mobile device." See ¶ 0002; Tan teaches, "a user falling can be identified, at least in part, by identifying motion data—collected immediately prior to an identified impact—having a sufficiently large change in pose angle (e.g., greater than a particular threshold angle), combined with a sufficiently long fall duration (e.g., greater than a particular threshold amount of time). In practice, the threshold angle and/or the threshold amount of time can vary (e.g., based on empirically collected sample data)." See ¶ 0218;

or a velocity of the human has exceeded a velocity threshold; Tan teaches, "If the acceleration measurements change by a sufficiently large degree during the second period of time after the detected fall (e.g., the change in acceleration exceeds a particular threshold level), the device can determine that the user has moved after the fall." See ¶ 0352.

It would have been obvious to one of ordinary skill in the art at the time of invention (effective filing date for AIA application) to modify the combination of Dvash-Zack so that a "sufficiently large change in pose angle… greater than a particular threshold angle," a "change in acceleration [that] exceeds a particular threshold level," and a "timeout" period of time that elapses (e.g., 30 seconds) with "a continuous period of quiescence" are used to determine whether a person has fallen, thereby properly identifying the characteristics of the event and obtaining appropriate help for the fallen person.
Consider claim 6, the electronic device of claim 1, wherein: the fall detection operation is performed based on the result of the motion detection operation being no detection of a motion; Tan teaches, "[t]he mobile device can transition between each of the states based on the detection of certain signatures of a fall (e.g., as described herein), detected periods of quiescence (e.g., a lack of movement by the user), and/or inputs by the user" See ¶ 0269;

and to perform the fall detection operation, the processor is further configured to perform an inactivity tracking operation; Tan teaches, "the mobile device begins at the “nominal” state 1202. Upon detection of a fall signature (e.g., a combination of sensor measurements and other information indicative of a fall), the mobile device transitions of the “likely fall” state 1204. Upon detection of a period of quiescence TQ after the fall, the mobile device transitions to the “alert” state 1208b, and alerts the user of the detected fall… If no user movement is detected during a “timeout” period of time after the transmission of the fall alert elapses (e.g., 30 seconds) and a continuous period of quiescence (e.g., TLL=10 seconds) is detected during this time, the mobile device transitions to the “SOS” state 1212 and transmits the distress call." See ¶ 0270.

Consider claim 7, the electronic device of claim 6, wherein to perform the inactivity tracking operation, the processor is further configured to: determine whether the human has been inactive for a time threshold; Tan teaches, "[t]he mobile device can transition between each of the states based on the detection of certain signatures of a fall (e.g., as described herein), detected periods of quiescence (e.g., a lack of movement by the user), and/or inputs by the user" See ¶ 0269;

and when the human has been inactive for at least the time threshold, determine that a fall has occurred; Tan teaches, "the mobile device begins at the “nominal” state 1202. Upon detection of a fall signature (e.g., a combination of sensor measurements and other information indicative of a fall), the mobile device transitions of the “likely fall” state 1204. Upon detection of a period of quiescence TQ after the fall, the mobile device transitions to the “alert” state 1208b, and alerts the user of the detected fall… If no user movement is detected during a “timeout” period of time after the transmission of the fall alert elapses (e.g., 30 seconds) and a continuous period of quiescence (e.g., TLL=10 seconds) is detected during this time, the mobile device transitions to the “SOS” state 1212 and transmits the distress call." See ¶ 0270.

Consider claim 12, the method of claim 11, wherein to determine whether the motion meets the fall threshold, the method further comprises determining whether at least one of: a change in an elevation angle of the human has exceeded an elevation change threshold; or a velocity of the human has exceeded a velocity threshold. See rejection of claim 5.

Consider claim 13, the method of claim 8, wherein: the fall detection operation is performed based on the result of the motion detection operation being no detection of a motion; and to perform the fall detection operation, the method further comprises performing an inactivity tracking operation. See rejection of claim 6.

Consider claim 14, the method of claim 13, wherein to perform the inactivity tracking operation, the method further comprises: determining whether the human has been inactive for a time threshold; and when the human has been inactive for at least the time threshold, determining that a fall has occurred. See rejection of claim 7.
Consider claim 19, the non-transitory computer readable medium of claim 18, wherein to determine whether the motion meets the fall threshold, the program code, when executed by the processor of the device, further causes the device to determine whether at least one of: a change in an elevation angle of the human has exceeded an elevation change threshold; or a velocity of the human has exceeded a velocity threshold. See rejection of claim 5.

Consider claim 20, the non-transitory computer readable medium of claim 15, wherein: the fall detection operation is performed based on the result of the motion detection operation being no detection of a motion; and to perform the fall detection operation, the program code, when executed by the processor of the device, further causes the device to perform an inactivity tracking operation (see rejection of claim 6) comprising: determining whether the human has been inactive for a time threshold; and when the human has been inactive for at least the time threshold, determining that a fall has occurred. See rejection of claim 7.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wu, Chenshu et al. (US 2022/0026531 A1) teaches a system that applies a Short-time Fourier Transform on the channel impulse response to construct Range-Doppler-Time radar datacubes for possible fall detection. See ¶ 0331. Sethuraman, Prasanna Kumar (US 2024/0168128 A1) teaches, "the detection of target peaks is done by computing a range doppler map out of clutter removed channel impulse responses, CIR and a mean doppler profile, wherein clutter removed channel impulse responses, CIRs, are averaged within one window." See ¶ 0019.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Omer S. Khan whose telephone number is (571)270-5146. The examiner can normally be reached 10:00 am to 8:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian A. Zimmerman, can be reached at 571-272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Omer S Khan/
Primary Examiner, Art Unit 2686
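For orientation only, the two-path fall-detection flow that the rejected claims recite (a motion path gated by a fall threshold in claims 4-5, and a no-motion path using inactivity tracking in claims 6-7) can be sketched as below. All names and threshold values are hypothetical illustrations, not taken from the claims or the cited art:

```python
# Hypothetical sketch of the claimed two-path fall-detection flow.
# Threshold values are illustrative placeholders only.
ELEVATION_CHANGE_THRESHOLD = 0.8   # change in elevation angle (rad)
VELOCITY_THRESHOLD = 2.0           # velocity (m/s)
INACTIVITY_TIME_THRESHOLD = 30.0   # seconds of inactivity

def fall_detected(motion: bool, elevation_change: float,
                  velocity: float, inactive_s: float) -> bool:
    if motion:
        # Motion path (claims 4-5): a fall is inferred if at least one
        # of the elevation-change or velocity thresholds is met.
        return (elevation_change > ELEVATION_CHANGE_THRESHOLD
                or velocity > VELOCITY_THRESHOLD)
    # No-motion path (claims 6-7): inactivity tracking against
    # a time threshold.
    return inactive_s >= INACTIVITY_TIME_THRESHOLD

print(fall_detected(True, 1.1, 0.5, 0.0))    # motion meets elevation threshold
print(fall_detected(False, 0.0, 0.0, 45.0))  # inactive past the time threshold
```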

Prosecution Timeline

Sep 30, 2024
Application Filed
Dec 07, 2025
Examiner Interview (Telephonic)
Dec 19, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602989: REMOTE PAIRING USING AUDIO FINGERPRINTS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12603658: SUCCESSIVE APPROXIMATION REGISTER A/D CONVERTER
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597774: SYSTEM, METHOD, AND COMPUTER READABLE STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591225: FACTORY DATA GENERATION WITH INTELLIGENT LABELLING
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12592146: C-V2X MOBILE EDGE COMPUTING INTERFACE FOR MOBILE SERVICES
Granted Mar 31, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 55%
With Interview (+40.1%): 95%
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
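The page does not state how the interview-adjusted figure is computed; the numbers shown are consistent with simply adding the interview lift to the base grant probability (an assumption on our part, not a documented formula):

```python
base_grant_pct = 55.0      # career allow rate shown above
interview_lift_pct = 40.1  # interview lift shown above

with_interview_pct = base_grant_pct + interview_lift_pct
print(f"With interview: {with_interview_pct:.1f}%")  # 95.1%, displayed as 95%
```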
