Prosecution Insights
Last updated: April 19, 2026
Application No. 17/960,154

MICRO-MOTION SENSING DEVICE AND SENSING METHOD THEREOF

Final Rejection (§103)
Filed: Oct 05, 2022
Examiner: GUYAH, REMASH RAJA
Art Unit: 3648
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: HTC Corporation
OA Round: 4 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 2m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (68 granted / 89 resolved; +24.4% vs TC average), above average
Interview Lift: +34.2% among resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 34 applications currently pending
Career History: 123 total applications across all art units

Statute-Specific Performance

§101: 4.0% (-36.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 89 resolved cases.

Office Action

§103
Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/20/2025 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

No amendments to the claims were made. Claims 1-3, 5-11, and 13-14 are pending.

Response to Arguments

Applicant's arguments filed 01/13/2026 have been fully considered but they are not persuasive. Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections. Applicant's arguments do not overcome the prima facie case of obviousness. The § 103 rejection over Sehgal et al. (US 2020/0293753 A1) in view of Border et al. (US 2012/0057029 A1) is MAINTAINED.

Argument regarding "motion sensor element": The applicant argues that Sehgal "only discloses calculating the distance between the target object and the electronic device" and "does not provide any technical instructions on calculating the speed of the target object." However, this argument fails to fully consider the Doppler radar principles disclosed in Sehgal. Sehgal explicitly discloses at [0068]: "Depending on the radar type, various forms of radar signals exist.
One example is a Channel Impulse Response (CIR)… In certain embodiments, CIR measurements are collected from transmitter and receiver antenna configurations which when combined can produce a multidimensional image of the surrounding environment. The different dimensions can include the azimuth, elevation, range, and Doppler." The explicit mention of "Doppler" in Sehgal [0068] directly teaches obtaining velocity information, as Doppler radar fundamentally measures velocity based on the Doppler frequency shift. One of ordinary skill in the art would understand that Doppler radar measurements inherently provide velocity information of moving objects. This is basic radar technology knowledge, not requiring invention or hindsight. Furthermore, Sehgal discloses at [0054] that "the camera 275 can capture a still image or video," which expressly teaches generating video stream information (i.e., a plurality of frames). The applicant's argument that Sehgal only generates "an image" ignores this explicit teaching. The scope and content of Sehgal clearly discloses both radar with Doppler (velocity) capability and camera video capture. The claimed combination of these known elements would have been obvious for the purpose of obtaining both spatial (range) and temporal (velocity) information about objects, which is the natural and intended use of Doppler radar systems.

Argument regarding "motion classifier element": The applicant argues that Sehgal fails to disclose "predicting various changes in a facial expression" because Sehgal only performs "authentication" by comparing facial signature data with preregistered reference data. However, this argument improperly conflates the specific application (authentication) with the underlying technical capability.
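The Doppler-to-velocity relationship invoked above is standard radar arithmetic. As a minimal illustration (the carrier frequency and Doppler shift values below are hypothetical and do not come from Sehgal):

```python
# Sketch of the Doppler principle cited in the rejection: for a
# monostatic radar, a target moving with radial velocity v shifts the
# return frequency by f_d = 2 * v / wavelength, so velocity is
# recovered as v = f_d * wavelength / 2.

C = 3.0e8  # speed of light, m/s

def radial_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Radial velocity in m/s recovered from a measured Doppler shift."""
    wavelength = C / carrier_hz
    return doppler_shift_hz * wavelength / 2.0

# A 60 GHz mmWave carrier has a 5 mm wavelength, so a 400 Hz Doppler
# shift corresponds to a radial velocity of 1.0 m/s.
print(radial_velocity(400.0, 60e9))  # → 1.0
```

This is the sense in which a Doppler measurement "inherently" carries velocity: the shift and the radial velocity differ only by a known constant factor.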
Sehgal discloses at [0035]: "the server 104 is a neural network that is configured to extract features from images or radar signatures for authentication purposes." Sehgal further discloses at [0008] receiving "facial signature data generated based on input from a radar source" and "facial image data generated based on input from a camera," and at [0117] discloses "identifying a weight to assign to the facial signature data, the facial image data, or both" based on reliability. The capability to extract and analyze facial features from combined radar and camera data, as Sehgal teaches, would make it obvious to one of ordinary skill in the art to apply this technology to predict changes in facial expressions rather than just compare to static reference data. The underlying technical teaching (analyzing facial data from radar velocity/range information combined with video stream information) is the same. The differences between the prior art and the claims are merely in the specific application of the facial analysis technology (prediction of expression changes vs. authentication comparison). It would have been obvious to one of ordinary skill in the art to use Sehgal's neural network-based facial analysis system (which processes radar signatures with velocity/range data and facial images) for the purpose of predicting facial expression changes rather than just authentication. The motivation would be to expand the utility of the facial monitoring system beyond authentication to include expression monitoring, which is a known application in human-computer interaction and user experience monitoring. The reasonable expectation of success would be high because the same technical components (radar, camera, neural network processor analyzing facial features from both sources) are used; only the output application differs.
Argument regarding "render element": The applicant argues that Border "does not teach or suggest about the velocity information and the motion prediction information" and therefore does not teach adjusting frame rate according to both velocity information and motion prediction information as claimed. This argument fails to properly consider the obviousness analysis under § 103. Border explicitly teaches at [0012] "capturing a video of a scene depending on the speed of motion in the scene" and "determining the relative speed of motion" and at [0028] "motion identification is done by comparing the last two consecutive images." Border clearly teaches adjusting frame rate based on detected motion speed. When combined with Sehgal's teaching of using radar to obtain velocity information (Doppler) and using neural networks to analyze facial data, it would have been obvious to one of ordinary skill in the art to use Border's adaptive frame rate technique with Sehgal's radar-derived velocity information and neural network-based facial analysis (motion prediction). One of ordinary skill in the art would have been motivated to combine Sehgal's radar-camera facial monitoring system with Border's adaptive frame rate capture for the purpose of optimizing computational efficiency and storage requirements. When monitoring facial expressions using Sehgal's system, applying Border's teaching to adjust frame rates based on detected motion would conserve processing resources during low-motion periods and capture higher detail during high-motion periods (e.g., rapid facial expression changes). One of ordinary skill in the art would have had a reasonable expectation of success in combining these teachings because Border's frame rate adjustment mechanism is inherently compatible with any video capture system that has motion detection capability.
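Neither reference reduces the speed-to-frame-rate mapping to numbers. A minimal sketch of the combined teaching, with the base rate and both thresholds invented purely for illustration:

```python
def select_frame_rate(velocity_mps: float,
                      base_fps: float = 30.0,
                      low_threshold: float = 0.05,
                      high_threshold: float = 0.5) -> float:
    """Map a detected radial velocity to a capture frame rate.

    Mirrors the ratio in Border's example at [0028]: roughly three
    frames captured for rapid motion in the time one frame of slow
    motion is captured. All threshold values here are hypothetical.
    """
    if velocity_mps < low_threshold:
        return base_fps / 3.0   # slow or no motion: capture fewer frames
    if velocity_mps > high_threshold:
        return base_fps * 3.0   # rapid motion: capture more frames
    return base_fps             # moderate motion: leave the rate alone

print(select_frame_rate(0.01))  # → 10.0 (downsampled)
print(select_frame_rate(1.0))   # → 90.0 (upsampled)
```

The preset `low_threshold` plays the role of the "reference value set in advance" recited in claims 3 and 11.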
Sehgal provides the motion detection through radar (velocity) and facial analysis; Border provides the frame rate adjustment based on that detected motion. Both references operate in the same field (video capture with motion analysis) and address complementary aspects of the problem. The scope and content of the prior art (Sehgal plus Border) teaches all limitations of the claim. The differences are minimal: the claims simply specify that the frame rate adjustment is based on "velocity information and motion prediction information," both of which are taught by the combination (Sehgal's radar velocity/Doppler and facial feature analysis combined with Border's motion-based frame rate adjustment). The level of ordinary skill in the art for video processing and radar-camera fusion systems would make this combination obvious. The secondary considerations are not applicable here as no unexpected results or commercial success evidence has been presented.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-11, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Sehgal et al. (US 2020/0293753 A1) in view of Border et al. (US 2012/0057029 A1).

Regarding Claims 1 and 9, Sehgal et al. (‘753) in view of Border et al. (‘029) teaches: A micro-motion sensing device, comprising: Sehgal et al. (‘753) teaches a motion sensor, configured to receive an input radar signal ([0052]: "In this embodiment, one of the one or more transceivers in the transceiver 210 includes is a radar transceiver 270 configured to transmit and receive signals for detection and ranging purposes"; [0053]: "The receiver can receive the mm Wave signals originally transmitted from the transmitter after the mm Wave signals have bounced or reflected off of target objects in the surrounding environment of the electronic device 200"), Sehgal et al. (‘753) teaches and receive a video stream information through an image capturer ([0054]: "The camera 275 can capture a still image or video. The camera 275 can capture an image of a body part of the user, such as the users face"), Sehgal et al. (‘753) teaches wherein the motion sensor generates a first video stream information by obtaining velocity information and range information of an object according to the input radar signal ([0053]: "The processor 240 can analyze the time difference between when the mm Wave signals are transmitted and received to measure the distance of the target objects from the electronic device 200"; [0068]: "In certain embodiments, CIR measurements are collected from transmitter and receiver antenna configurations which when combined can produce a multidimensional image of the surrounding environment. The different dimensions can include the azimuth, elevation, range, and Doppler"), Sehgal et al.
(‘753) does not explicitly teach and processing the video stream information according to the velocity information, but Border et al. (‘029) teaches ([Abstract]: "causing a capture rate of the first region of the video of the scene to be greater than a capture rate of the second region of the video of the scene"; [0014]: "determining the relative speed of motion in a first region of the video of the scene with respect to the speed of motion in a second region"; [0028]: "the regions with rapid motion have three images captured after the motion has been identified, while in the same amount of time, only one image of the regions with slow motion or no motion is captured"). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the radar and camera fusion system of Sehgal et al. (‘753) with the velocity-based video processing of Border et al. (‘029). One would have been motivated to do so in order to improve the efficiency and accuracy of facial recognition systems by processing video data based on detected motion/velocity, thereby reducing computational resources for static or slow-moving subjects while maintaining high quality capture for moving subjects. One skilled in the art would have had a reasonable expectation of success because Sehgal already teaches using radar and camera together for facial analysis with Doppler capabilities, Border teaches motion-based video processing techniques, and the combination merely applies known motion-detection and adaptive video processing techniques to Sehgal's radar-camera facial recognition system. Sehgal et al.
(‘753) teaches a motion classifier, coupled to the motion sensor and generating a motion prediction information by predicting a motion of the object according to the velocity information, the range information, and the first video stream information ([0008]: “In yet another embodiment a non-transitory computer readable medium embodying a computer program is provided. The computer program comprising computer readable program code that, when executed by a processor of an electronic device, causes the processor to: receive a request for authentication, facial signature data generated based on input from a radar source of the electronic device, and facial image data generated based on input from a camera of the electronic device”; [0035]: “In certain embodiments, the server 104 is a neural network that is configured to extract features from images or radar signatures”; [0177]: “The authenticating engine 460 then generates a score by comparing the extracted features of the facial image data with a first set of preregistered data”), Sehgal et al. (‘753) teaches wherein the motion classifier generates the motion prediction information by predicting various changes in a facial expression of a human ([0054]: “The camera 275 can capture an image of a body part of the user, such as the users face”; [0177]: “The authenticating engine 460 then generates a score by comparing the extracted features of the facial image data” implying analysis and prediction of facial features and expressions). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to extend Sehgal’s facial feature extraction and analysis system to predict changes in facial expressions based on velocity and range information. 
One would have been motivated to do so in order to enable proactive system responses, such as pre-allocating processing resources or adjusting capture parameters before significant facial movements occur, thereby improving system responsiveness and image quality. One skilled in the art would have had a reasonable expectation of success because Sehgal teaches facial analysis using radar and camera data with neural networks for feature extraction, and extending this to predict facial expression changes based on motion data is a straightforward application of known machine learning prediction techniques. Sehgal et al. (‘753) does not explicitly teach a render adjusting frame rate, but Border et al. (‘029) teaches a render, coupled to the motion classifier and adjusting frame rate of the first video stream information according to the velocity information and the motion prediction information ([Abstract], [0012]: “causing a capture rate of the first region of the video of the scene to be greater than a capture rate of the second region of the video of the scene”; [claim 1]: “causing the capture rate of the pixels of the first region to be greater than the capture rate of the pixels of the second region”; [0028]: “the regions with rapid motion have three images captured after the motion has been identified, while in the same amount of time, only one image of the regions with slow motion or no motion is captured”; [0030]: “the method of locally increased capture rate combined with locally reduced exposure time is suited for capturing video frames of scenes which contain regions with the fastest motion”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the frame rate adjustment teachings of Border et al. (‘029) into the combined system of Sehgal et al. (‘753). 
One would have been motivated to do so in order to optimize bandwidth usage, reduce data storage requirements, and improve system efficiency while maintaining image quality during rapid motion events, providing the benefits of adaptive frame rate control specifically tailored to facial expression monitoring applications. One skilled in the art would have had a reasonable expectation of success because Border explicitly teaches frame rate adjustment based on motion/velocity, and applying this technique to Sehgal's radar-camera system is a predictable combination of known elements yielding expected results.

Regarding Claims 2 and 10, Sehgal et al. (‘753) teaches the micro-motion sensing device according to claim 1, wherein the motion sensor obtains the range information of the object by performing a Range-Doppler flow for the input radar signal and the velocity information of the object by performing a Doppler frequency based motion flow for the input radar signal (Fig. 2, [0068]: "In certain embodiments, CIR measurements are collected from transmitter and receiver antenna configurations which when combined can produce a multidimensional image of the surrounding environment. The different dimensions can include the azimuth, elevation, range, and Doppler").

Regarding Claims 3 and 11, Sehgal et al. (‘753) in view of Border et al. (‘029) teaches the micro-motion sensing device according to claim 1. Sehgal et al. (‘753) does not explicitly teach, but Border et al.
(‘029) teaches wherein the motion sensor determines whether to perform a downsampling on the video stream information according to the velocity information ([0014]: “determining the relative speed of motion in a first region of the video of the scene with respect to the speed of motion in a second region”; [claim 1 and 6]: “causing the capture rate of the pixels of the first region to be greater than the capture rate of the pixels of the second region, or causing an exposure time of the first region to be less than an exposure time of the second region… wherein the capture rate of the second region is reduced as the capture rate of the first region is increased”), Sehgal et al. (‘753) does not explicitly teach, but Border et al. (‘029) teaches wherein in response to velocity information indicating that a velocity of the object is lower than a reference value, the motion sensor performs the downsampling on the video stream information ([0028]: “only one image of the regions with slow motion or no motion is captured”; [0030]: “the method of locally reduced exposure time at the base capture rate is suited to less rapid motion”), Sehgal et al. (‘753) does not explicitly teach, but Border et al. (‘029) teaches and wherein the reference value is set in advance (Border et al. [0028-0030]: implicitly teaches predetermined thresholds for distinguishing between rapid motion regions and slow/no motion regions). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to downsample video stream information when velocity is below a predetermined reference value. One would have been motivated to do so in order to reduce data processing requirements, bandwidth usage, and storage needs while maintaining full sampling rates for higher velocity objects where detail is more critical, which is consistent with Border’s teaching of adjusting capture rates based on motion levels. 
One skilled in the art would have had a reasonable expectation of success because downsampling based on velocity thresholds is a straightforward application of Border's motion-based video processing teachings, and such conditional processing based on threshold values is conventional in digital signal processing.

Claim 4 is canceled.

Regarding Claim 5, Sehgal et al. (‘753) in view of Border et al. (‘029) teaches the micro-motion sensing device according to claim 1. Sehgal et al. (‘753) does not explicitly teach, but Border et al. (‘029) teaches wherein the render comprises: a frame generator, configured to generate at least one frame according to the motion prediction information ([0011]: "a faster frame rate capture of video for rapid motion"; [0028]: "the regions with rapid motion have three images captured after the motion has been identified" implying frame generation based on detected motion), Sehgal et al. (‘753) does not explicitly teach, but Border et al. (‘029) teaches and an up sampler, generating an output video stream information by inserting one or more of the at least one frame into the first video stream information according to the velocity information ([0028]: "the regions with rapid motion have three images captured after the motion has been identified, while in the same amount of time, only one image of the regions with slow motion or no motion is captured" showing differential frame insertion based on velocity). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to generate and insert additional frames based on motion prediction and velocity information. One would have been motivated to do so in order to maintain smooth video playback when original capture rates are reduced during low-velocity periods, or to enhance temporal resolution during high-velocity facial expression changes, thereby optimizing both storage efficiency and viewing quality.
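The frame generator plus up sampler arrangement recited in claim 5 can be sketched in a few lines. This is illustrative only; the interleaving policy and the velocity threshold are assumptions, not taken from Border or Sehgal:

```python
def upsample(frames, generated, velocity_mps, threshold=0.5):
    """Interleave generated (predicted) frames into a captured stream.

    When velocity exceeds the threshold, each captured frame is
    followed by one generated frame, raising the effective temporal
    resolution; otherwise the stream passes through unchanged.
    """
    if velocity_mps <= threshold or not generated:
        return list(frames)
    out = []
    extra = iter(generated)
    for frame in frames:
        out.append(frame)
        nxt = next(extra, None)
        if nxt is not None:
            out.append(nxt)   # insert a predicted intermediate frame
    return out

# With high velocity, predicted frame "b" lands between "a" and "c".
print(upsample(["a", "c"], ["b"], velocity_mps=1.0))  # → ['a', 'b', 'c']
print(upsample(["a", "c"], ["b"], velocity_mps=0.1))  # → ['a', 'c']
```

In the claimed device the `generated` frames would come from the frame generator driven by the motion prediction information, while `velocity_mps` would come from the radar-derived velocity information.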
One skilled in the art would have had a reasonable expectation of success because frame generation and insertion for upsampling is a well-known video processing technique, and Border's teachings of motion-based frame rate adjustment naturally extend to generating and inserting intermediate frames.

Regarding Claims 6 and 14, Sehgal et al. (‘753) in view of Border et al. (‘029) teaches the micro-motion sensing device according to claim 5. Sehgal et al. (‘753) does not explicitly teach, but Border et al. (‘029) teaches wherein a change in a frame rate of the first video stream information is positively correlated with a change in the velocity information ([0028]: "the regions with rapid motion have three images captured after the motion has been identified, while in the same amount of time, only one image of the regions with slow motion or no motion is captured"), Sehgal et al. (‘753) does not explicitly teach, but Border et al. (‘029) teaches wherein when the velocity information indicates that a velocity of the object increases, the frame rate of the first video stream information increases ([0028]: "the regions with rapid motion have three images captured" showing increased frame rate for increased velocity/rapid motion), Sehgal et al. (‘753) does not explicitly teach, but Border et al. (‘029) teaches wherein when the velocity information indicates that the velocity of the object decreases, the frame rate of the first video stream information decreases ([0028]: "only one image of the regions with slow motion or no motion is captured" showing decreased frame rate for decreased velocity/slow motion). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to implement a positively correlated relationship between frame rate changes and velocity information changes in the combined system of Sehgal et al. (‘753) and Border et al. (‘029).
One would have been motivated to do so in order to dynamically optimize system performance by allocating higher frame rates to capture detailed motion during high-velocity facial expressions while conserving processing power, bandwidth, and storage during low-velocity periods, thereby maximizing both image quality and resource efficiency. One skilled in the art would have had a reasonable expectation of success because Border explicitly teaches this positive correlation between motion/velocity and frame rate, demonstrating that increased motion results in increased capture rates and decreased motion results in decreased capture rates, and applying this principle to Sehgal's facial monitoring system is a predictable use of Border's established technique.

Regarding Claim 7, Sehgal et al. (‘753) teaches the micro-motion sensing device according to claim 1, further comprising: a first radar, coupled to the motion sensor and generating the input radar signal by sending a radio wave to the object ([0052-0053]: "In this embodiment, one of the one or more transceivers in the transceiver 210 includes is a radar transceiver 270 configured to transmit and receive signals for detection and ranging purposes… The transmitter can transmit millimeter wave (mm Wave) signals"; [0058]: "The transmitter 304 transmits a signal 314 to the target object 308").

Regarding Claim 8, Sehgal et al. (‘753) teaches the micro-motion sensing device according to claim 7, further comprising: a second radar coupled to the motion sensor and generating the input radar signal by detecting the motion optically, wherein the second radar is an optical radar ([0054]: "The camera 275 can capture a still image or video"; the camera can be considered an optical detection system; alternatively, the system architecture supports multiple sensors as shown in [Figure 4] showing camera 410 and radar transceiver 420 as separate but coupled components).

Regarding Claims 9-11 and 13-14, Sehgal et al.
(‘753) in view of Border et al. (‘029) teaches these method claims corresponding to device claims 1-6 for substantially the same reasons and rationale.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to REMASH R GUYAH whose telephone number is (571)270-0115. The examiner can normally be reached M-F 7:30-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vladimir Magloire, can be reached at (571) 270-5144. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/REMASH R GUYAH/
Examiner, Art Unit 3648

/VLADIMIR MAGLOIRE/
Supervisory Patent Examiner, Art Unit 3648

Prosecution Timeline

Oct 05, 2022: Application Filed
Dec 03, 2024: Non-Final Rejection (§103)
Mar 04, 2025: Response Filed
Jun 10, 2025: Final Rejection (§103)
Aug 20, 2025: Request for Continued Examination
Aug 25, 2025: Response after Non-Final Action
Oct 15, 2025: Non-Final Rejection (§103)
Jan 13, 2026: Response Filed
Feb 09, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601828: WEARABLE DEVICE AND CONTROL METHOD THEREOF (2y 5m to grant; granted Apr 14, 2026)
Patent 12596174: DISTANCE MEASUREMENT DEVICE, DISTANCE MEASUREMENT METHOD, AND RADAR DEVICE (2y 5m to grant; granted Apr 07, 2026)
Patent 12591038: RADAR CONTROL DEVICE AND METHOD (2y 5m to grant; granted Mar 31, 2026)
Patent 12591067: METHOD AND APPARATUS FOR COOPERATIVE MULTI-TARGET ASSIGNMENT (2y 5m to grant; granted Mar 31, 2026)
Patent 12578460: GUARD BAND ANTENNA IN A BEAM STEERING RADAR FOR RESOLUTION REFINEMENT (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 76%
With Interview: 99% (+34.2%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 89 resolved cases by this examiner. Grant probability derived from career allow rate.
