Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed October 16, 2025 has been entered. Claims 1-2, 4-10, and 12-16 remain pending in this application. Claims 1, 4, 6, 8-9, and 12 have been amended. Claims 3 and 11 have been cancelled.
Response to Arguments
Applicant’s arguments, see pages 7-10, filed October 16, 2025, with respect to the rejections of claims 1-2 under 35 U.S.C. 102 have been fully considered and are persuasive. Specifically, Examiner agrees with Applicant’s argument that Abe fails to teach the processing of statistical features by a deep learning model. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Zhang et al. (US 20230142676 A1).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 5 recites, “The radar point cloud-based posture determination system of claim 3, wherein […]”, and claim 13 recites, “The method of claim 11, further comprising […]”. However, claims 3 and 11 have been cancelled. Because claims 5 and 13 depend from cancelled claims, their scope cannot be determined; the claims are therefore indefinite.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-10, and 12-16 are rejected under 35 U.S.C. 103 as being unpatentable over Abe et al. (JP 2019158862 A), hereinafter Abe, in view of Zhang et al. (US 20230142676 A1), hereinafter Zhang.
Regarding claims 1 and 9, Abe teaches a radio detection and ranging (radar) point cloud-based posture determination system and a method of determining a posture of a target, respectively, comprising:
a memory, and a processor (para. 15, “The tracking device 2 comprises […] a memory unit [memory circuit] 207, a judgment unit [judgment circuit] 208 (all of which are processing circuits), and a judgment result output unit [judgment result output circuit or output circuit] 209.”), wherein the processor, when executing program instructions stored in the memory, is configured to perform:
a data collecting operation of acquiring three-dimensional (3D) point cloud data for a target using radar (para. 8, “The tracking method according to the present disclosure is configured to obtain the center of gravity of point cloud data obtained from radar waves reflected by a target, determine the horizontal position of the center of gravity, and discriminate the attitude of the target from the distribution of the point cloud data in at least one of the vertical and horizontal directions, and if the determined position and the discriminated attitude satisfy predetermined conditions, analyze the Doppler distribution of the target and judge the state of the target.”),
a clustering operation of clustering the 3D point cloud data to generate a point cluster of the target (para. 19, “The clustering processing unit 202 extracts reflection point clouds for each detected target from all reflection point clouds acquired from the radar 201, and performs clustering processing by grouping the extracted reflection point clouds into a cluster [group].”; para. 65, “The position determining unit 1005 derives the position of the center of gravity of the clustered points. Next, the position determination unit 1005 determines whether the bather is located in the bathtub or in the washing area based on the horizontal coordinate of the center of gravity position in the three-dimensional orthogonal space.”),
a feature calculating operation of calculating predefined statistical features on the basis of the generated point cluster (para. 24, “The storage unit 207 stores height information [information about the target] associated with a target to be identified [specific target]. In one example, the specific target is an individual identified by the tracking system 1. In another example, the specific target is an age group identified by the tracking system 1. In another example, the specific target is gender identified by the tracking system 1. The height information includes height features of the specific target.”; para. 99, “The tracking device disclosed herein includes a processing circuit that calculates a feature amount related to the vertical direction of the target based on one of point cloud data obtained from a radar wave reflected by a target, and identifies the target based on the feature amount and information related to the target that is associated with the feature amount.”), but fails to teach
a posture determining operation of determining a posture of the target on the basis of the calculated statistical features of the point cluster through a deep learning model trained to classify the posture of the target on the basis of the predefined statistical features of the point cluster,
wherein the deep learning model uses the calculated statistical features of the point cluster as input data to classify the posture of the target without using the point cluster of the target,
wherein, among the statistical features that are provided as input data of the deep learning model of the posture determining operation, statistical features calculated for a current frame include statistical features calculated for at least one previous frame.
However, Zhang teaches
a posture determining operation of determining a posture of the target on the basis of the calculated statistical features of the point cluster through a deep learning model trained to classify the posture of the target on the basis of the predefined statistical features of the point cluster (para. 114, “The embodiments of the disclosure provide the future trajectory and original data of a pedestrian at 10 HZ. Herein, the original data may include original images, point cloud points, ego-car poses and high-definition maps. In the embodiments of the disclosure, a first neural network and a second neural network [the first neural network and the second neural network may be implemented by using a model of a deep neural network algorithm] are used to obtain an output for the time-series location information and posture information of the object. Table 1 shows the precision of the pedestrian's face and body orientations under different distances between a pedestrian and ego car. The preset dataset provided by the embodiments of the disclosure may include pedestrian's face orientations, body orientations, pedestrian's locations, vehicle lamp information, vehicle head orientation information, etc. In this way, the first neural network and the second neural network are trained by using the dataset containing such abundant information, so that the generalization of the trained first neural network and the second neural network is stronger.”; point clustering implicitly occurs when point clouds are the input data and targets/objects/segments are identified; see Abe paras. 19 and 24 for further evidence of statistical features of a point cluster),
wherein the deep learning model uses the calculated statistical features of the point cluster as input data to classify the posture of the target without using the point cluster of the target (paras. 83-85, “The above operations S201 and S202 provide a mode for realizing ‘fusing the environmental information, the time-series location information and the time-series posture information to obtain the fusion feature’. Herein, the time-series location information and posture information are fused with local maps as the environmental information in an order of inputting the time-series location information and posture information into a neural network, which can improve the accuracy of designating areas of the local maps. […] In some embodiments of the disclosure, the second neural network may be a fully connected network for classifying fusion features. For example, the fully connected network is employed to predict the possibility that the fusion feature is each of the intention categories in the intention category library, and thus the confidence of each intention category may be obtained. In some embodiments of the disclosure, taking the object being a pedestrian as an example, the corresponding intention category library may include: turning left, turning right, going straight, standing still or turning around, etc. A fully connected network is employed to predict the confidence that the fusion feature is each of the following intention categories: turning left, turning right, going straight, standing still or turning around, etc., for example, to predict the probability of each intention category.”),
wherein, among the statistical features that are provided as input data of the deep learning model of the posture determining operation, statistical features calculated for a current frame include statistical features calculated for at least one previous frame (para. 49, “In some embodiments of the disclosure, the future trajectory of the object in a future period may be predicted based on the fusion feature and the motion intention. Alternatively, the motion intention of the object may also not be predicted, and only the first neural networks are used to iterate the fusion feature several times to predict the future trajectory of the object in the future time period. For example, the predicted future trajectory of the object may be acquired by decoding the second adjusted time-series location information and posture information. In this way, the trajectory prediction is implemented through multiple kinds of time-series location information and posture information, and thus even in the scenes where there are few observation points [even only one frame of observation data], or when the object suddenly accelerates, decelerates, or suddenly turns, the accuracy of predicting the future trajectory can still be guaranteed.”).
Abe and Zhang are considered analogous to the claimed invention because both are directed to determining the posture of a target from point cloud data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Abe's radar point cloud-based posture determination to classify the posture of the target with a trained deep learning model operating on the calculated features, as taught by Zhang, with the motivation of improving the accuracy and generalization of the posture determination (see Zhang, para. 114).
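For illustration only, and not as a characterization of Abe, Zhang, or Applicant's disclosure, the following is a minimal sketch of the arrangement recited in claims 1 and 9: per-cluster statistical features (per-axis minima, maxima, and extents), rather than the raw point cluster, are supplied to a small neural-network classifier, and features calculated for previous frames are appended to the current frame's features. The sketch assumes NumPy and PyTorch are available; all names, dimensions, and parameter values are hypothetical.

```python
# Illustrative sketch only; not the implementation of Abe, Zhang, or the claims.
# Assumes NumPy and PyTorch are available; names and dimensions are hypothetical.
from collections import deque

import numpy as np
import torch
import torch.nn as nn

POSTURES = ["standing", "sitting", "lying"]   # posture classes (cf. claims 7 and 15)
N_PREV_FRAMES = 2                             # number of previous frames whose features are reused

def cluster_features(points: np.ndarray) -> np.ndarray:
    """Predefined statistical features of one point cluster (an N x 3 array):
    per-axis minima and maxima plus extents (height and widths), cf. claims 8 and 16."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    return np.concatenate([mins, maxs, maxs - mins]).astype(np.float32)   # 9 features

# The deep learning model sees only the statistical features, never the raw point cluster.
FEAT_DIM = 9 * (1 + N_PREV_FRAMES)
classifier = nn.Sequential(
    nn.Linear(FEAT_DIM, 32), nn.ReLU(),
    nn.Linear(32, len(POSTURES)),
)

history = deque(maxlen=N_PREV_FRAMES)   # statistical features of previous frames

def classify_posture(cluster_points: np.ndarray) -> str:
    """Classify the posture for the current frame from statistical features,
    including features calculated for at least one previous frame."""
    current = cluster_features(cluster_points)
    past = list(history)
    while len(past) < N_PREV_FRAMES:     # pad until enough frames have been observed
        past.append(np.zeros_like(current))
    x = torch.from_numpy(np.concatenate([current, *past])).unsqueeze(0)
    with torch.no_grad():
        logits = classifier(x)           # untrained here; trained offline in practice
    history.append(current)
    return POSTURES[int(logits.argmax(dim=1))]
```

In use, classify_posture would be called once per frame with the point cluster produced by the clustering operation, e.g., classify_posture(np.random.rand(200, 3)).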
Regarding claims 2 and 10, Abe in view of Zhang teaches the radar point cloud-based posture determination system of claim 1 and the method of claim 9 respectively,
wherein the data collecting operation repeatedly acquires the 3D point cloud data for each frame with respect to the target at preset frame intervals, and the posture determining operation determines the posture of the target for each frame (Abe; para. 72, “Therefore, in one example, the status detection unit 1007 calculates the magnitude of change between the clustered point cloud contained in several frames acquired before the current frame, i.e., several frames acquired within a predetermined time [a second time, for example, 5 seconds] from the current time, and the clustered point cloud contained in the current frame, and if the magnitude is equal to or greater than a predetermined threshold, it determines that the bather has fallen.”).
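As an illustrative sketch only of the per-frame operation addressed by claims 2 and 10, the loop below repeatedly acquires a point cloud frame at a preset frame interval and determines a posture for each frame. The radar object, its get_point_cloud() method, and the determine_posture callback are hypothetical placeholders (the callback could perform the clustering, feature calculation, and classification sketched above).

```python
# Illustrative per-frame loop; the radar object and its get_point_cloud()
# method are hypothetical placeholders, not an API of Abe or Zhang.
import time

FRAME_INTERVAL_S = 0.1   # preset frame interval (e.g., 10 frames per second)

def run(radar, determine_posture):
    """Repeatedly acquire 3D point cloud data and determine the posture for each frame."""
    while True:
        points = radar.get_point_cloud()      # data collecting operation (hypothetical call)
        posture = determine_posture(points)   # clustering + feature calculation + classification
        print(posture)
        time.sleep(FRAME_INTERVAL_S)
```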
Regarding claims 4 and 12, Abe in view of Zhang teaches the radar point cloud-based posture determination system of claim 1 and the method of claim 9 respectively,
wherein the data collecting operation collects the 3D point cloud data including a plurality of targets (Abe; para. 8, “The tracking method according to the present disclosure is configured to obtain the center of gravity of point cloud data obtained from radar waves reflected by a target, determine the horizontal position of the center of gravity, and discriminate the attitude of the target from the distribution of the point cloud data in at least one of the vertical and horizontal directions, and if the determined position and the discriminated attitude satisfy predetermined conditions, analyze the Doppler distribution of the target and judge the state of the target.”; Fig. 8, two human targets are detected),
the clustering operation generates a plurality of point clusters on the basis of the 3D point cloud data (Abe; para. 19, “The clustering processing unit 202 extracts reflection point clouds for each detected target from all reflection point clouds acquired from the radar 201, and performs clustering processing by grouping the extracted reflection point clouds into a cluster [group].”),
the feature calculating operation calculates statistical features for each of the plurality of point clusters (Abe; para. 24, “The storage unit 207 stores height information [information about the target] associated with a target to be identified [specific target]. In one example, the specific target is an individual identified by the tracking system 1. In another example, the specific target is an age group identified by the tracking system 1. In another example, the specific target is gender identified by the tracking system 1. The height information includes height features of the specific target.”; para. 99, “The tracking device disclosed herein includes a processing circuit that calculates a feature amount related to the vertical direction of the target based on one of point cloud data obtained from a radar wave reflected by a target, and identifies the target based on the feature amount and information related to the target that is associated with the feature amount.”), and
the posture determining operation determines a posture of the target on the basis of the statistical features calculated for each point cluster (Abe; para. 48, “Also, for example, after it is determined that the tracking target 301 is a specific target, it is possible to identify the state of the target by using the value L calculated by the height calculation unit 205 as the height of the tracking target.”).
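As an illustrative sketch only of the multi-target case addressed by claims 4 and 12 (not an implementation taken from Abe or Zhang), the function below groups a frame's 3D point cloud into one cluster per detected target and calculates per-axis statistical features for each cluster. It assumes NumPy and scikit-learn are available; the DBSCAN parameters are hypothetical.

```python
# Illustrative sketch only; assumes NumPy and scikit-learn; parameter values are hypothetical.
import numpy as np
from sklearn.cluster import DBSCAN

def per_target_features(frame_points: np.ndarray) -> dict:
    """Group a frame's 3D point cloud (N x 3) into one cluster per detected target,
    then compute per-axis min/max/extent features for each cluster."""
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(frame_points)
    features = {}
    for label in set(labels):
        if label == -1:                 # DBSCAN noise points are not assigned to a target
            continue
        cluster = frame_points[labels == label]
        mins, maxs = cluster.min(axis=0), cluster.max(axis=0)
        features[label] = np.concatenate([mins, maxs, maxs - mins])
    return features                     # one feature vector per target, fed to the classifier
```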
Regarding claims 6 and 14, Abe in view of Zhang teaches the radar point cloud-based posture determination system of claim 1 and the method of claim 9 respectively,
wherein each feature among the predefined statistical features corresponds to a different posture of the target (Abe; para. 53, “There are also variations between individuals. Therefore, in the third embodiment, the determination unit 208 uses the movement speed of the tracking target 301 as a walking speed, and identifies the tracking target 301 based on at least one of features related to the walking speed of the tracking target 301, such as the walking speed, the average walking speed, and the variance of the walking speed, in addition to the height output by the height calculation unit 205 or the height feature output by the height feature calculation unit 206.”; para. 68, “The state detection unit 1007 analyzes the Doppler frequency distribution in the clustered point cloud, and if it determines that the movement of a part of the object [the bather's head] after the object [the bather] sits down is less than a predetermined threshold, it detects that the bather is in a state of drowning.”; height and Doppler speed values are statistical features and are associated with different postures/states).
Regarding claims 7 and 15, Abe in view of Zhang teaches the radar point cloud-based posture determination system of claim 1 and the method of claim 9 respectively,
wherein the posture determining operation classifies the posture of the target as one of a standing posture, a sitting posture, and a lying posture (Abe; para. 57, “For family members waiting in a location other than the bathroom, a system that notifies them of where the person is and what position they are in [standing or sitting] even in normal, safe conditions will provide a greater sense of security than a system that only reports the situations mentioned above [drowning, falls].”).
Regarding claims 8 and 16, Abe in view of Zhang teaches the radar point cloud-based posture determination system of claim 1 and the method of claim 9 respectively,
wherein the predefined statistical features include at least one of maximum values, minimum values, a height, and a width of the point cluster in each axis direction of coordinates of points included in the cluster in a 3D space (Abe; para. 19, “For example, the clustering processing unit 202 groups together reflection points for which the amount of change in distance relative to the amount of change in angle to the reflection point is equal to or less than a predetermined threshold.”; para. 26, “The determination unit 208 distinguishes or determines whether or not the tracking target is a specific target based on the height feature amount output from the height feature amount calculation unit 206 and the height information stored in the storage unit 207. […] The determination unit 208 determines whether or not the difference between the two feature amounts relating to height values is smaller than a threshold, for example, and if it is smaller, determines that the tracking target is a specific target. In the following description, for simplicity, determining or judging that a tracking target is a specific target will be simply referred to as identifying the tracking target.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC K HODAC whose telephone number is (571) 270-0123. The examiner can normally be reached M-Th 8-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VLADIMIR MAGLOIRE can be reached on (571) 270-5144. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC K HODAC/Examiner, Art Unit 3648
/VLADIMIR MAGLOIRE/Supervisory Patent Examiner, Art Unit 3648