Prosecution Insights
Last updated: April 19, 2026
Application No. 18/532,268

METHOD AND DEVICE FOR IDENTIFYING USER'S MOVEMENT

Status: Non-Final OA (§103)
Filed: Dec 07, 2023
Examiner: GUYAH, REMASH RAJA
Art Unit: 3648
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Jimbo Robotics Inc.
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (68 granted / 89 resolved; +24.4% vs TC avg)
Interview Lift: +34.2% — strong (resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline); 34 currently pending
Career History: 123 total applications across all art units

Statute-Specific Performance

§101: 4.0% (-36.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)

Based on career data from 89 resolved cases

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/07/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS has been considered by the examiner.

Claim Objections

Claims 1-3, 5 and 11-13 are objected to because of the following informalities:

Claims 1 and 11: “at least one of UWB anchor sensor” should be “at least one UWB anchor sensor”
Claims 1 and 11: “at least any of one of” should be “at least one of”
Claims 2 and 12: Missing article – “the tracking device performing tracking the user” should be “the tracking device performing tracking of the user”
Claims 3 and 13: “forward among the directions of two perpendicular bisectors” would be clearer if worded as “forward along one of two opposite directions defined by a perpendicular bisector of a line between the two UWB tag sensors…”
Claim 5: “it is determined as a sitting and otherwise” should be “it is determined as a sitting pattern and otherwise”.

Appropriate correction is required.

Drawings

The drawings are objected to because:

Figs. 3A-C, 5A-C, 6A-C, 7A-C, and 8A-C have textual descriptions that are difficult or nearly impossible to read.
Fig. 9B, Step 1 refers to “Cosine Simitarity”, which appears to be a misspelling.
Fig. 10 refers to "Reference value" without definition in the specification.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. 
The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. 
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “control unit for receiving distance signals” in claim 11.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 
112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1–20 are rejected under 35 U.S.C. §103 as being unpatentable over Park et al. (US 2019/0056422 A1) in view of Vaidyanathan et al. (US 2016/0262687 A1) and further in view of Baldwin et al. (US 2019/0311099 A1).

Regarding Claims 1 and 11, Park et al. (‘422) in view of Vaidyanathan et al. (‘687) and further in view of Baldwin et al. (‘099) teaches:

Park et al. (‘422) teaches: A method for identifying the movement of a user, comprising:

Park et al. (‘422) teaches systems and methods for human body motion capture that identify and reconstruct movement of a subject. ([0002]: “The present disclosure relates to capturing motion of the human body. 
Particular embodiments provide systems and methods for accurately capturing lower body motion without the use of magnetometers.”; [0031]: “The following describes substantially magnetometer-free systems and methods for lower-body MoCap including 3-D localization and posture tracking by fusing inertial sensors with an ultra-wideband (UWB) localization system and a biomechanical model of the human lower-body.”)

Park et al. (‘422) additionally teaches a system comprising UWB sensors 106 (anchor sensors) that communicate with UWB tags 104 worn on both feet/legs of a user. ([0035]: “The system 100 comprises seven IMUs 102, three UWB tags 104, a UWB sensor system comprising a plurality of UWB sensors 106, and a processing system 108 in communication with the IMUs 102 and UWB sensors 106.”; [0084]: “two compact tags are attached to the mid feet.”).

Park et al. (‘422) teaches: acquiring, by at least one of UWB anchor sensor, signals from a pair of UWB tag sensors each attached to both legs of a user, wherein the UWB anchor sensor is installed within communication range of the UWB tag sensors;

Park et al. (‘422) teaches acquiring signals using UWB sensors (anchor sensors) from UWB tags attached to a user’s body including the lower body (legs/feet). (Fig. 1, [0035]: “The system 100 comprises seven IMUs 102, three UWB tags 104, a UWB sensor system comprising a plurality of UWB sensors 106, and a processing system 108 in communication with the IMUs 102 and UWB sensors 106… The UWB sensors 106 detect the position of the UWB tags 104 and generate position signals for each UWB tag 104 which are provided to the processing system either wirelessly or by wired connections.”; [0084]: “one slim tag is attached to the subject’s waist and two compact tags are attached to the mid feet.”).

Park et al. (‘422) teaches UWB tags on both feet (i.e., both legs) of the user that communicate with UWB anchor sensors within range. 
The UWB sensors 106 serve as anchor sensors that detect the UWB tags 104 on the user’s body. While Park et al. (‘422) teaches UWB tags on both feet (i.e., on the lower extremities of both legs), Park et al. (‘422) does not explicitly disclose that the UWB tags are “each attached to both legs” in the sense of being directly on each leg as opposed to each foot. However, it would have been an obvious design choice to one of ordinary skill in the art to position UWB tags on the legs rather than the feet, as Park et al. (‘422) already teaches placement of tags on the lower body extremities and the specific placement on the leg versus foot is a matter of routine engineering optimization for tracking lower body motion. The feet are part of the legs, and attaching a tag to the foot of each leg constitutes attaching a tag to each leg.

Park et al. (‘422) teaches: processing the acquired signals of the two UWB tag sensors into at least any one of distance, speed and acceleration signals;

Park et al. (‘422) teaches processing the UWB tag signals into position (distance), velocity (speed), and acceleration signals. ([0035]: “The position signals from the UWB sensors 106 may be used by the processing to determine corresponding velocity signals in some embodiments.”; [0048]: “The localization Kalman filter 230 provides drift-free position and 3D orientation of the feet and waist without using magnetometers. This filter 230 fuses IMUs with a UWB localization system for the following three localization points on the lower-body: the right and left mid foot (rm and lm) and the waist (w).”; [0008]: “a processing system in communication with each IMU and with the localization sensor system to receive the rate of turn, acceleration, and position signals, and to derive velocity signals for each localization tag from the position signals.”).

Park et al. (‘422) processes the acquired signals into position vectors (distance), velocity signals (speed), and acceleration signals. Park et al. 
(‘422) does not explicitly teach: calculating a cosine similarity of at least one of the distance, speed and acceleration signals to a reference walking pattern stored in advance;

However, Baldwin et al. (‘099) teaches calculating a cosine similarity as a measure of comparing motion-derived signals to reference patterns. ([0033]: “The similarity scoring approach uses, in a particular example, a combination of three scoring measures: cosine similarity as between the verification determinate vector(s) and the enrollment signature, L2 distance… and z-score of the verification determinate vector(s) relative to the enrollment signature.”; [0051]: “Aspects seek to determine if there are regular, unique patterns in how an individual walks that can be reliably detected by an inertial sensor and used to verify or identify a subject.”).

Baldwin et al. (‘099) specifically applies cosine similarity to gait (walking) data captured from sensors to compare verification gait data to an enrollment reference pattern stored in advance.

Additionally, Vaidyanathan et al. (‘687) teaches classifying movement patterns including walking patterns by comparing acquired sensor signals to reference patterns. ([0064]: “The classification processor is configured to receive the motion signals from the motion sensor 3 and may also receive the mechanomyographic muscle signals from the vibration sensor 4, and to classify a pattern of movement, or a posture, of at least one part of the body.”; [0066]-[0068]: “The classification of walking can be detected by, for example, combining accelerometer data from each plane to determine magnitude… A threshold is determined per subject by a controlled walking task of five steps in a straight line.”).

Vaidyanathan et al. (‘687) teaches storing a reference walking pattern in advance (a threshold from a controlled walking task) and comparing motion signals to it to classify movement. 
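Neither the claim nor the cited references set out an implementation of this comparison step. As a minimal, non-authoritative sketch of the claimed cosine-similarity classification, with hypothetical signal values and an assumed preset similarity of 0.9:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two signal vectors: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def classify_movement(signal, reference_walking_pattern, preset_similarity=0.9):
    # The claimed decision step: walking if the similarity exceeds the preset
    # value, otherwise a non-walking pattern.
    sim = cosine_similarity(signal, reference_walking_pattern)
    return "walking" if sim > preset_similarity else "non-walking"

# Hypothetical speed samples from one UWB tag vs. a stored reference pattern.
reference = [0.1, 0.8, 1.2, 0.8, 0.1]
observed = [0.15, 0.75, 1.1, 0.85, 0.2]
print(classify_movement(observed, reference))  # close match -> "walking"
```

The preset similarity, vector length, and sample values above are illustrative assumptions, not taken from the references.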
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the UWB-based motion tracking system of Park et al. (‘422) with the cosine similarity comparison technique of Baldwin et al. (‘099) and the walking pattern classification of Vaidyanathan et al. (‘687). One would have been motivated to do so because Park et al. (‘422) already processes distance, speed, and acceleration signals from UWB sensors for motion analysis, and Baldwin et al. (‘099) demonstrates that cosine similarity is an effective and well-known measure for comparing motion-derived signals (particularly gait/walking data) to stored reference patterns ([0033], [0051]). Vaidyanathan et al. (‘687) further demonstrates that comparing motion signals to stored reference walking patterns is a known technique for classifying user movement ([0064]-[0068]). Applying cosine similarity, a well-established mathematical similarity measure, to compare UWB-derived signals against stored walking patterns would yield the predictable result of enabling automated movement classification. There would be a reasonable expectation of success because cosine similarity is a straightforward mathematical operation applicable to any vector data, and Park et al. (‘422) already produces the vector signal data (distance, speed, acceleration) necessary for such comparison.

Park et al. (‘422) does not explicitly teach, but Vaidyanathan et al. (‘687) teaches: determining the movement of the user as a walking pattern if the calculation result of the cosine similarity is greater than a preset similarity, and otherwise, determining the movement as a non-walking pattern.

Vaidyanathan et al. (‘687) teaches determining movement as walking versus non-walking based on comparing motion signals to a threshold/preset criteria. 
([0067]-[0068]: “A threshold is determined per subject by a controlled walking task of five steps in a straight line… Stationary states are determined to be whenever the magnitude is beneath the threshold… If the data window is deemed to be ‘active’ then the gait segment is classified as walking. However, if the data window states the subject is stationary, the calculation then determines which plane gravity is in to determine standing or sitting.”; [0079]: “The algorithm 1400 enables eight commonly performed activities to be identified: walking, running, ascending stairs, descending stairs, ascending in an elevator, descending in an elevator, standing, and lying down.”).

Vaidyanathan et al. (‘687) classifies movement as walking when the comparison result exceeds a preset threshold, and otherwise classifies the movement as a non-walking pattern (standing, sitting, lying, etc.).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the motion tracking system of Park et al. (‘422) with the walking/non-walking pattern classification of Vaidyanathan et al. (‘687), using the cosine similarity technique of Baldwin et al. (‘099) as the comparison measure. One would have been motivated to do so because determining whether a user is walking or not is a fundamental aspect of human activity recognition that is valuable in the motion capture context of Park et al. (‘422), and Vaidyanathan et al. (‘687) demonstrates that threshold-based comparison against stored walking patterns is effective for this classification ([0067]-[0068], [0079]). Using cosine similarity (as taught by Baldwin et al. (‘099)) as the specific comparison measure with a preset threshold would yield the predictable result of classifying movement as walking or non-walking. There would be a reasonable expectation of success because Vaidyanathan et al. 
(‘687) demonstrates 97% accuracy in activity classification using threshold-based techniques ([0108]), and Baldwin et al. (‘099) demonstrates effectiveness of cosine similarity for gait comparison ([0033]).

Regarding Claim 11, Claim 11 is directed to a tracking device. Claim 11 recites limitations parallel in nature to those addressed above for claim 1, which is directed to a method. Claim 11 is therefore rejected for the same reasons as set forth above for claim 1. Furthermore, claim 11 recites a control unit (see Fig. 1, 108 Processing System of Park et al. ('422)).

Regarding Claims 2 and 12, Park et al. (‘422) in view of Vaidyanathan et al. (‘687) and further in view of Baldwin et al. (‘099) teaches the method according to claim 1.

Park et al. (‘422) teaches: wherein the UWB anchor sensor is installed in a tracking device, the tracking device performing tracking the user when the calculation result of the cosine similarity is determined as the walking pattern.

Park et al. (‘422) teaches a processing system (tracking device) that includes UWB sensors (anchor sensors) and performs tracking of the user’s motion. ([0035]: “a UWB sensor system comprising a plurality of UWB sensors 106, and a processing system 108 in communication with the IMUs 102 and UWB sensors 106.”; [0031]: “systems and methods for lower-body MoCap including 3-D localization and posture tracking by fusing inertial sensors with an ultra-wideband (UWB) localization system.”; [0039]: “A typical MoCap system should be able to track the 3-D position of a localization point on the root segment.”).

The system of Park et al. (‘422) actively tracks the user. As combined with Vaidyanathan et al. (‘687)’s walking classification, the tracking device performs tracking when walking is determined.

Regarding Claims 3 and 13, Park et al. (‘422) in view of Vaidyanathan et al. (‘687) and further in view of Baldwin et al. (‘099) teaches the method according to claim 1. Park et al. 
(‘422) teaches: wherein at a time point when the acquired distance signals of the UWB tag sensors are the same, in the case where the tracking device follows forward among the directions of two perpendicular bisectors to the two UWB tag sensors, the direction toward the tracking device is determined as the walking direction, and in the case where the tracking device follows backward among the directions of the two perpendicular bisectors, the opposite direction that is not toward the tracking device is determined as the walking direction.

Park et al. (‘422) teaches a UWB system with UWB sensors (anchors) at fixed positions and UWB tags on both feet. The system tracks 3-D position vectors of the UWB tags and determines the orientation and direction of movement of the user. ([0035]: “The UWB sensors 106 detect the position of the UWB tags 104 and generate position signals for each UWB tag 104.”; [0048]: “This filter 230 fuses IMUs with a UWB localization system for the following three localization points on the lower-body: the right and left mid foot (rm and lm) and the waist (w). It is designed to estimate the position vectors for these three points.”; [0032]: “the UWB localization data is not only used for position tracking, but also aids in the estimation of yaw when fused with IMU in the novel magnetometer-free loosely-coupled localization filter.”).

Park et al. (‘422)’s system determines the relative positions of both UWB tags on the feet and the UWB anchor sensors, which enables determining the walking direction based on the geometric relationship (perpendicular bisectors) between the tags when they are equidistant from the anchor. 
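As an illustration only (the references do not provide this computation), the geometric rule recited in claims 3 and 13 can be sketched with hypothetical 2-D tag and tracker positions: at the instant both tag distances are equal, the tracker lies on the perpendicular bisector of the segment joining the two tags, and following forward versus backward flips the inferred walking direction.

```python
def walking_direction(tag_left, tag_right, tracker, follows_forward=True):
    # When the two tag distances are equal, the tracker sits on the
    # perpendicular bisector of the segment joining the tags. The claimed
    # rule: if the tracker follows in front, the user walks toward it;
    # if it follows behind, the user walks in the opposite direction.
    mid = ((tag_left[0] + tag_right[0]) / 2, (tag_left[1] + tag_right[1]) / 2)
    toward_tracker = (tracker[0] - mid[0], tracker[1] - mid[1])
    if follows_forward:
        return toward_tracker
    return (-toward_tracker[0], -toward_tracker[1])

# Hypothetical positions in metres: tags symmetric about the origin,
# tracker directly ahead on the bisector.
print(walking_direction((-0.2, 0.0), (0.2, 0.0), (0.0, 1.5)))  # (0.0, 1.5)
```

All coordinates here are made up; the point is only the sign flip between the forward-following and backward-following cases.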
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use the geometric relationship between the two UWB tag positions and the anchor position (including perpendicular bisectors when the two tags are equidistant) to determine walking direction in the system of Park et al. (‘422). Park et al. (‘422) already determines 3-D positions of both foot tags and the spatial relationship to the anchor sensors. Using perpendicular bisectors of two points to determine direction is a well-known geometric principle, and applying this to the equidistant condition of the two foot tags relative to an anchor sensor would be a straightforward application of geometry to determine the user’s heading direction. There would be a reasonable expectation of success because the position data is already available in Park et al. (‘422)’s system with sub-centimeter accuracy.

Regarding Claims 4 and 14, Park et al. (‘422) in view of Vaidyanathan et al. (‘687) and further in view of Baldwin et al. (‘099) teaches the method according to claim 1.

Park et al. (‘422) does not explicitly teach, but Vaidyanathan et al. (‘687) teaches: wherein if the calculation result of the cosine similarity is determined as a non-walking pattern, when the acquired signals of the UWB tag sensors have a simultaneity greater than a preset criteria, it is determined as a sitting pattern or a simple back and forth movement pattern.

Vaidyanathan et al. (‘687) teaches classifying non-walking movement patterns including sitting based on the characteristics of motion signals. ([0064]-[0068]: “The classification processor is configured to receive the motion signals from the motion sensor… to classify a pattern of movement, or a posture, of at least one part of the body. 
Typically, this would be the part of the body to which the wearable sensor is attached… The classification processor may be configured to distinguish between standing, sitting, reclining and walking activities and various different postures.”; [0065]: “When sitting, the y plane is now pointing towards the ground, which will give a reading of around −1±0.1 g.”; [0079]: “The algorithm 1400 enables eight commonly performed activities to be identified: walking, running, ascending stairs, descending stairs, ascending in an elevator, descending in an elevator, standing, and lying down.”).

When the user is sitting, the signals from sensors on both legs would exhibit simultaneity (i.e., both legs are stationary simultaneously), which is the basis for distinguishing sitting from other non-walking patterns.

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sitting/stationary classification of Vaidyanathan et al. (‘687) into the combined system. One would have been motivated to do so because classifying non-walking patterns (such as sitting) based on the simultaneity of signals from sensors on both legs is a logical extension of the walking/non-walking classification. When a user is sitting, both legs are stationary simultaneously, resulting in simultaneous (i.e., similar/correlated) signal patterns. A person of ordinary skill would recognize that simultaneity in the signals from tags on both legs is a distinguishing characteristic of sitting or simple back-and-forth patterns versus other non-walking patterns. There would be a reasonable expectation of success because Vaidyanathan et al. (‘687) demonstrates effective classification of sitting using motion sensor data ([0065], [0079]).

Regarding Claims 5 and 15, Park et al. (‘422) in view of Vaidyanathan et al. (‘687) and further in view of Baldwin et al. (‘099) teaches the method according to claim 4. Park et al. 
(‘422) does not explicitly teach, but Vaidyanathan et al. (‘687) teaches: wherein if it is determined as the sitting pattern or the simple back and forth movement pattern, when the magnitude of the acceleration is greater than the preset criteria and there is no substantial change in the magnitude for a certain period of time before or after a drastic increase or decrease in the speed signal, it is determined as a sitting and otherwise, it is determined as the simple back and forth movement pattern.

Vaidyanathan et al. (‘687) teaches distinguishing between sitting and other stationary/low-movement patterns based on acceleration characteristics. ([0065]: “During a standing posture, the x plane is pointing towards the ground, which gives a reading around −1±0.1 g, and the other planes (y and z) give a reading of 0±0.1 g. When sitting, the y plane is now pointing towards the ground, which will give a reading of around −1±0.1 g.”; [0085]-[0086]: “In a gross group clustering stage 1402, a data or time window is split into one of three groups, each of which relates to a different class of activities: stationary activities 1424 (such as standing, lying down, and elevator), dynamic activities 1426 (such as walking, running, and noise)”; [0100]: “The stationary group 1424 is defined by determining 1432 that an average gyroscopic magnitude for one of the groups is below a first threshold rate.”).

When sitting, there is a gravitational acceleration component (magnitude greater than preset criteria) that remains substantially constant (no substantial change) despite any brief transition event (drastic increase/decrease in speed when sitting down), distinguishing it from a back-and-forth movement which would show periodic variations.

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further distinguish between sitting and simple back-and-forth patterns using acceleration magnitude analysis. 
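That distinction (a single transient spike followed by a quiet period, versus ongoing variation) might be sketched as follows; the spike and variation thresholds and the sample values are hypothetical, not taken from any reference:

```python
def classify_stationary_pattern(accel, spike_threshold=2.0, var_threshold=0.05):
    # Look for a drastic transition event (a spike above spike_threshold).
    # If the signal after the spike stays nearly constant, treat it as
    # sitting; otherwise (ongoing variation, or no clear spike) treat it
    # as simple back-and-forth movement.
    for i, a in enumerate(accel):
        if abs(a) > spike_threshold:
            tail = accel[i + 1:]
            if tail and max(tail) - min(tail) < var_threshold:
                return "sitting"
            return "back-and-forth"
    return "back-and-forth"

# Hypothetical acceleration magnitudes (m/s^2, gravity removed):
sit = [0.1, 0.2, 3.5, 0.02, 0.01, 0.02, 0.01]   # one spike, then quiet
sway = [0.5, 2.5, -2.4, 2.6, -2.5, 2.4, -2.3]   # ongoing oscillation
print(classify_stationary_pattern(sit))   # "sitting"
print(classify_stationary_pattern(sway))  # "back-and-forth"
```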
The rationale is that when a person sits down, there is typically a transient spike in acceleration/speed signals followed by a period of no substantial change (static sitting), which differs from back-and-forth movement which shows ongoing periodic changes. Vaidyanathan et al. (‘687) teaches analyzing both static and dynamic characteristics of acceleration signals for activity classification, providing a reasonable expectation of success.

Regarding Claims 6 and 16, Park et al. (‘422) in view of Vaidyanathan et al. (‘687) and further in view of Baldwin et al. (‘099) teaches the method according to claim 4.

Park et al. (‘422) does not explicitly teach, but Vaidyanathan et al. (‘687) teaches: wherein when the acquired signals of the UWB tag sensors do not have simultaneity greater than the preset criteria, it is determined as an in-place rotation pattern or a hanging-around pattern.

Vaidyanathan et al. (‘687) teaches classifying various non-walking movement patterns. ([0079]: “The algorithm 1400 enables eight commonly performed activities to be identified: walking, running, ascending stairs, descending stairs, ascending in an elevator, descending in an elevator, standing, and lying down. A ninth ‘activity’ containing noise and other unclassified activities data is also categorized.”).

When the signals from the two leg-mounted tags lack simultaneity (i.e., the two legs are moving differently from each other but not in a walking pattern), a person of ordinary skill would recognize this as indicative of in-place rotation (where one leg pivots differently than the other) or hanging-around (where the legs exhibit irregular, non-correlated small movements). This is a logical classification step following the sitting/back-and-forth determination.

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to classify non-simultaneous, non-walking leg patterns as in-place rotation or hanging-around patterns. 
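As a non-authoritative sketch, the claimed "simultaneity" could be read as the zero-lag correlation between the two legs' signals; the sample values and the idea of using correlation are assumptions for illustration, not drawn from the references:

```python
def simultaneity(left, right):
    # One plausible reading of "simultaneity": the normalized correlation
    # (at zero lag) between the two legs' speed signals. Near 1 means the
    # legs move together; lower values mean independent/alternating motion.
    n = len(left)
    mean_l = sum(left) / n
    mean_r = sum(right) / n
    cov = sum((l - mean_l) * (r - mean_r) for l, r in zip(left, right))
    var_l = sum((l - mean_l) ** 2 for l in left)
    var_r = sum((r - mean_r) ** 2 for r in right)
    return cov / (var_l * var_r) ** 0.5

# Both legs quiet and moving together (sitting / back-and-forth branch)
together = [0.0, 0.1, 0.0, 0.1, 0.0]
# One leg moving while the other rests, as when pivoting in place
alternating = [0.3, 0.0, 0.4, 0.0, 0.5]
print(simultaneity(together, together))     # 1.0 -> simultaneity branch
print(simultaneity(together, alternating))  # well below 1 -> rotation/hanging-around
```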
One would have been motivated to do so because comprehensive activity classification requires distinguishing between all major movement types. When signals from two leg sensors lack simultaneity and are not walking, the remaining possibilities logically include rotation (e.g., turning in place, which causes asymmetric leg movement) or hanging-around (aimless standing with sporadic leg adjustments). Vaidyanathan et al. (‘687) establishes the framework for multi-activity classification, providing a reasonable expectation of success.

Regarding Claims 7 and 17, Park et al. (‘422) in view of Vaidyanathan et al. (‘687) and further in view of Baldwin et al. (‘099) teaches the method according to claim 6.

Park et al. (‘422) does not explicitly teach, but Vaidyanathan et al. (‘687) teaches: wherein when the increase and decrease of the distance, speed and acceleration signals are regular and a peak value exists, it is determined as the in-place rotation pattern.

Park teaches processing distance, speed, and acceleration signals and tracking yaw angles ([0032]: “the UWB localization data is not only used for position tracking, but also aids in the estimation of yaw.”), but Park does not explicitly teach determining an in-place rotation pattern based on regular increase and decrease of signals with a peak value. Vaidyanathan teaches classifying movement patterns by analyzing the regularity and characteristics of motion signals. 
([0079]: “The algorithm 1400 enables eight commonly performed activities to be identified: walking, running, ascending stairs, descending stairs, ascending in an elevator, descending in an elevator, standing, and lying down.”; [0085]-[0086]: “In a gross group clustering stage 1402, a data or time window is split into one of three groups, each of which relates to a different class of activities: stationary activities 1424… dynamic activities 1426… In a subsequent activity classification stage 1404, windows from each cluster group are further classified into one of nine more specific activities.”; [0099]: “A gyroscopic magnitude and barometer gradient are determined for each window. An average of these parameters is calculated for each cluster group, which is then used to label which of the three groups of clusters belongs to the stationary, dynamic, or dynamic-altitude groups.”). Vaidyanathan teaches analyzing signal characteristics such as regularity and magnitude within windowed data to classify specific movement activities. When a user rotates in place, the distance, speed, and acceleration signals from UWB tags on the legs will exhibit a regular periodic pattern of increase and decrease, with a peak value occurring when the tag is farthest from the anchor sensor. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the UWB signal processing of Park with the activity classification techniques of Vaidyanathan to identify in-place rotation based on regular periodic patterns with peak values in the distance, speed, and acceleration signals. One would have been motivated to do so because Vaidyanathan demonstrates that analyzing signal regularity and magnitude characteristics is effective for classifying specific movement activities ([0079], [0085]-[0086], [0099]). 
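As a purely illustrative sketch of the "regular increase and decrease with a peak value" characterization (not an algorithm taught by Park, Vaidyanathan, or the application), one could test whether a single spectral component dominates the windowed distance signal; the function name and regularity threshold are hypothetical:

```python
import numpy as np

def looks_like_rotation(distance: np.ndarray, regularity_thresh: float = 0.5) -> bool:
    # "Regular increase and decrease": one dominant spectral component carries
    # most of the zero-mean signal energy, and a peak above the mean exists.
    x = distance - distance.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    power[0] = 0.0                      # ignore any residual DC component
    dominant_ratio = power.max() / (power.sum() + 1e-12)
    return bool(dominant_ratio > regularity_thresh and x.max() > 0)

t = np.linspace(0, 6 * np.pi, 300)
periodic = 2.0 + 0.5 * np.sin(t)            # sinusoidal-like distance from anchor
rng = np.random.default_rng(1)
irregular = 2.0 + 0.1 * rng.normal(size=300)  # no regular pattern
```

A near-sinusoidal distance trace (as in-place rotation would produce) passes this test, while irregular low-level noise does not.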
A person of ordinary skill would recognize that in-place rotation produces sinusoidal-like distance variations from the UWB anchor, with corresponding regular speed and acceleration patterns and identifiable peak values. There would be a reasonable expectation of success because Vaidyanathan establishes a framework for windowed signal analysis that achieves 97% accuracy in activity classification ([0108]), and periodic signal analysis is a well-established technique in signal processing.

Regarding Claims 8 and 18, Park et al. (‘422) in view of Vaidyanathan et al. (‘687) and further in view of Baldwin et al. (‘099) teaches the method according to claim 7. Park et al. (‘422) teaches: wherein at the time point of the peak value, it is determined that the user is at 180-degree angle relative to the UWB anchor sensor. Park teaches UWB-based position tracking with yaw angle estimation. ([0032]: “the UWB localization data is not only used for position tracking, but also aids in the estimation of yaw when fused with IMU.”). When a user rotates in place, the UWB tag on a leg reaches its maximum distance (peak value) from the anchor sensor when that tag is on the opposite side of the body from the anchor—i.e., when the user is facing away from the anchor at approximately 180 degrees relative to the anchor. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to determine that the peak distance value corresponds to a 180-degree orientation relative to the UWB anchor sensor. This is a straightforward geometric relationship: during in-place rotation, the maximum distance between a body-mounted tag and a fixed anchor occurs when the tag is on the far side of the body, i.e., when the user faces away from the anchor at 180 degrees. There would be a reasonable expectation of success because this is a direct geometric consequence of the known positions.

Regarding Claims 9 and 19, Park et al.
(‘422) in view of Vaidyanathan et al. (‘687) and further in view of Baldwin et al. (‘099) teaches the method according to claim 6. Park et al. (‘422) does not explicitly teach, but Vaidyanathan et al. (‘687) and Baldwin et al. (‘099) teach: wherein when the increase and decrease amplitude of the distance signal is less than a preset value and the cosine similarity of the speed signal and/or acceleration signal is greater than the preset criteria, it is determined as the hanging-around pattern.

Note: This claim includes an “and/or” statement. The art need only teach the cosine similarity of the speed signal or the acceleration signal being greater than the preset criteria.

Park processes distance, speed, and acceleration signals from UWB tags but does not explicitly teach determining a hanging-around pattern based on a small distance amplitude combined with cosine similarity of the speed/acceleration signals. Vaidyanathan teaches classifying low-amplitude, non-walking movements. ([0100]: “The stationary group 1424 is defined by determining 1432 that an average gyroscopic magnitude for one of the groups is below a first threshold rate, such as 50°/s.”). When the distance signal amplitude is small (less than a preset value), the user is not traveling a significant distance. Baldwin teaches cosine similarity comparison of motion signals to stored reference patterns ([0033]: “The similarity scoring approach uses, in a particular example, a combination of three scoring measures: cosine similarity as between the verification determinate vector(s) and the enrollment signature.”). A high cosine similarity in the speed or acceleration signal compared to a reference pattern would indicate a consistent, low-level pattern of small movements characteristic of hanging-around.
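A purely illustrative sketch of the claimed combination (distance amplitude below a preset value, plus cosine similarity of the speed signal to a reference pattern above a preset criterion); the helper names, reference pattern, and thresholds are hypothetical and not drawn from the record:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two signal vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def is_hanging_around(distance, speed, ref_speed,
                      amp_thresh: float = 0.2, sim_thresh: float = 0.9) -> bool:
    # Small travel (distance amplitude under the preset value) AND a speed
    # signal closely matching a stored small-movement reference pattern.
    amplitude = float(distance.max() - distance.min())
    return amplitude < amp_thresh and cosine_similarity(speed, ref_speed) > sim_thresh

t = np.linspace(0, 2 * np.pi, 100)
ref = np.abs(np.sin(3 * t)) + 0.1                       # hypothetical reference
hanging = is_hanging_around(1.5 + 0.02 * np.sin(t), 1.05 * ref, ref)
walking = is_hanging_around(0.5 * t, 1.05 * ref, ref)   # large distance change
```

Per the "and/or" note above, the same check could equally be applied to the acceleration signal instead of the speed signal.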
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to identify the hanging-around pattern based on small distance amplitudes combined with cosine similarity of the speed/acceleration signals. One would have been motivated to do so because hanging-around is characterized by remaining in approximately the same location (small distance changes) while making small, somewhat consistent body movements (high similarity to a reference pattern). Applying the cosine similarity technique of Baldwin to the speed/acceleration signals of Park in the classification framework of Vaidyanathan would yield the predictable result of detecting this pattern. There would be a reasonable expectation of success because the component techniques are individually proven.

Regarding Claims 10 and 20, Park et al. (‘422) in view of Vaidyanathan et al. (‘687) and further in view of Baldwin et al. (‘099) teaches the method according to claim 1. Park et al. (‘422) does not explicitly teach, but Vaidyanathan et al. (‘687) teaches: wherein the distance, speed and acceleration signals are calculated using a simple moving average. Vaidyanathan et al. (‘687) teaches using a moving average to process motion signals. ([0087]: “Inertial data can, optionally, be smoothed using a moving average in order to reduce the effect of transient noise on the output of the algorithm. An example moving average is: ys_n = (1/(2N+1)) · Σ_{i=n−N}^{n+N} y_i, where ys_n is the smoothed value for the n-th data point, N is the number of neighbouring data points on either side of ys_n, and 2N+1 is the span, such as 15 data points. Windowing 1430 can also be performed after smoothing has taken place”). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to apply the simple moving average technique taught by Vaidyanathan et al.
(‘687) to the distance, speed, and acceleration signals in the system of Park et al. (‘422). One would have been motivated to do so in order to reduce the effect of transient noise on the signals, as expressly taught by Vaidyanathan et al. (‘687) ([0087]), thereby improving the accuracy of the movement classification. There would be a reasonable expectation of success because the simple moving average is a well-established signal processing technique universally applicable to time-series data.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to REMASH R GUYAH whose telephone number is (571)270-0115. The examiner can normally be reached M-F 7:30-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vladimir Magloire, can be reached at (571) 270-5144. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
REMASH R GUYAH
Examiner
Art Unit 3648C

/REMASH R GUYAH/
Examiner, Art Unit 3648

/RESHA DESAI/
Supervisory Patent Examiner, Art Unit 3648

Prosecution Timeline

Dec 07, 2023
Application Filed
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601828
WEARABLE DEVICE AND CONTROL METHOD THEREOF
2y 5m to grant Granted Apr 14, 2026
Patent 12596174
DISTANCE MEASUREMENT DEVICE, DISTANCE MEASUREMENT METHOD, AND RADAR DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12591038
RADAR CONTROL DEVICE AND METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12591067
METHOD AND APPARATUS FOR COOPERATIVE MULTI-TARGET ASSIGNMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12578460
GUARD BAND ANTENNA IN A BEAM STEERING RADAR FOR RESOLUTION REFINEMENT
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+34.2%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 89 resolved cases by this examiner. Grant probability derived from career allow rate.
