DETAILED ACTION
1. This Office action is in response to U.S. Patent Application No. 18/727,856, having an effective filing date of 1/10/2023. Claims 19-38 are pending.
Claim Rejections - 35 USC § 103
2. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
3. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
4. Claims 19-20, 22-27, 29-30 & 32-37 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al., GeometryMotion-Net: A Strong Two-Stream Baseline for 3D Action Recognition (IEEE, Dec. 2021), in view of Kadam et al., US 2024/0236369 A9.
Per claims 19 & 29, Liu et al. discloses a device, the device comprising: a processor configured to:
obtain a quantized point cloud associated with a frame, wherein the quantized point cloud comprises a current point (fig. 1, point cloud); determine a set of neighboring points associated with the current point of the quantized point cloud (page 4714, left-hand column, middle section C, e.g., For the i-th point (i.e. the current point) in the initial virtual overall point cloud, we search k1 closest points from all NT points in the (3 + C)-dim feature space. Specifically, based on the features of these NT points ...);
determine a first feature associated with the current point, wherein the first feature is determined using a point-based neural network technique (page 4714, left-hand column, middle section C, e.g., we implement the function h1(·) by using a non-linear neural network consisting of a set of MLPs; f̃_j denotes the intermediate feature of the j-th neighboring point);
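For illustration only (this sketch is not part of the cited record), the neighbor search and feature extraction that the quoted Liu et al. passage describes can be approximated in Python as follows; the tensor shapes, the k1 value, and the MLP widths are assumptions, not values taken from the reference:

    import torch
    import torch.nn as nn

    # Assumed layout: feats is (N_T, 3 + C), i.e. xyz concatenated with C-dim
    # features for all N_T points of the virtual overall point cloud.
    def k_nearest_in_feature_space(feats: torch.Tensor, i: int, k1: int = 16):
        """Indices of the k1 points closest to point i in the (3+C)-dim space."""
        dists = torch.cdist(feats[i:i + 1], feats)                # (1, N_T)
        return dists.topk(k1 + 1, largest=False).indices[0, 1:]   # drop self

    # h1(.) realized as a small shared MLP, per the quoted passage; the layer
    # widths are illustrative. f_tilde plays the role of the intermediate
    # per-neighbor feature f~_j.
    class H1(nn.Module):
        def __init__(self, in_dim: int, hidden: int = 64, out_dim: int = 128):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim), nn.ReLU(),
            )

        def forward(self, neighbor_feats: torch.Tensor) -> torch.Tensor:
            f_tilde = self.mlp(neighbor_feats)    # (k1, out_dim), one per neighbor
            return f_tilde.max(dim=0).values      # pooled first feature for point i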
Liu et al. fails to explicitly disclose the remaining claim limitations.
Kadam et al., however, in the same field of endeavor teaches predict an offset associated with the current point based on the first feature (para. 29, e.g., code that may be configured to encode each reference 3D point cloud frame of the set of reference 3D point cloud frames 124 to generate a feature set associated with 3D points of a corresponding reference 3D point cloud frame of the set of reference 3D point cloud frames 124); and determine an updated quantized point cloud associated with the frame based on the current point and the predicted offset (para. 32, e.g., in training of the neural network, one or more parameters of each node of the neural network may be updated based on whether an output of the final layer for a given input (from the training dataset) matches a correct result in accordance with a loss function for the neural network).
Therefore, in view of the disclosures of Kadam et al., it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Liu et al. and Kadam et al. in order to generate current frame data associated with 3D points of the current 3D point cloud frame, where the current frame data comprises a first set of features associated with an occupancy of 3D points in the current 3D point cloud frame.
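Purely as an illustrative sketch of the combined teaching relied on above (the offset head, its dimensions, and the training target are assumptions, not drawn from either reference), the claimed offset prediction and point cloud update can be expressed as:

    import torch
    import torch.nn as nn

    class OffsetHead(nn.Module):
        """Hypothetical regression head: per-point first feature -> 3D offset."""
        def __init__(self, feat_dim: int = 128):
            super().__init__()
            self.fc = nn.Linear(feat_dim, 3)

        def forward(self, point_feats: torch.Tensor) -> torch.Tensor:
            return self.fc(point_feats)           # (N, 3) predicted offsets

    def update_quantized_cloud(quantized_xyz, point_feats, head):
        # Updated cloud = current (quantized) points plus their predicted offsets.
        return quantized_xyz + head(point_feats)

    # Example usage with random stand-in data (shapes assumed):
    q = torch.randint(0, 64, (1000, 3)).float()   # quantized coordinates
    f = torch.randn(1000, 128)                    # first feature per point
    head = OffsetHead()
    updated = update_quantized_cloud(q, f, head)

    # Per the quoted Kadam et al. passage, the network parameters would be
    # updated under a loss function; an L2 loss against the unquantized
    # ground truth (assumed here) is one natural choice.
    loss = nn.functional.mse_loss(updated, torch.rand(1000, 3))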
Per claims 20 & 30, Kadam et al. further teaches the device of claim 19, wherein the processor is further configured to: generate an upsampled point based on the quantized point cloud; and predict an offset associated with the upsampled point based on the first feature (para. 76, e.g., the first neural network predictor 112 may receive the feature set associated with 3D points of each reference 3D point cloud frame of the set of reference 3D point cloud frames (i.e., F(t−1) . . . F(t−N)) as an input).
Per claims 22 & 32, Kadam et al. further teaches the device of claim 19, wherein the point-based neural network technique uses a point-based representation of the set of neighboring points (para. 79, e.g., the sparse convolution operation may modify each feature based on the corresponding feature, a set of neighboring features of the corresponding feature, and the weight value used to weigh the corresponding feature and each of the neighboring features).
Per claims 23 & 33, Liu et al. further teaches the device of claim 22, wherein the point-based representation of the set of neighboring points is associated with three-dimensional (3D) or k-dimensional (KD) locations of the set of neighboring points (page 4715, right-hand column, e.g., we find a set of matching 3D points in the reference frame, and then apply the existing point cloud analysis methods (e.g., PointNet [40]) to extract the motion representation based on these 3D coordinate offset representations together with two geometry ...).
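As a minimal sketch of a point-based representation keyed to 3D/KD locations (using SciPy's KD-tree; the arrays and the k value are illustrative assumptions):

    import numpy as np
    from scipy.spatial import cKDTree

    points = np.random.rand(1000, 3)        # 3D coordinates of a reference cloud
    tree = cKDTree(points)                  # KD-tree over the 3D locations

    # For a current 3D point, find its k nearest neighbors by Euclidean
    # distance and form 3D coordinate-offset representations (current point
    # minus each matching neighbor), echoing the quoted Liu et al. passage.
    current = np.array([0.5, 0.5, 0.5])
    dists, idx = tree.query(current, k=8)
    offsets = current - points[idx]         # (8, 3) coordinate offsets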
Per claims 24 & 34, Liu et al. further teaches the device of claim 19, wherein the processor is further configured to: deploy a point-based neural network, wherein the point-based neural network technique uses the point-based neural network (page 4714, left-hand column, middle section C, e.g., we implement the function h1(·) by using a non-linear neural network consisting of a set of MLPs; f̃_j denotes the intermediate feature of the j-th neighboring point).
Per claims 25 & 35, Kadam et al. further teaches the device of claim 19, wherein the first feature comprises information associated with intricate details of an object (para. 26, e.g., the server 106 may be configured to use images and depth information of the objects to generate each 3D reference point cloud frame of the set of reference 3D point cloud frames 124).
Per claims 26 & 36, Liu et al. further teaches the device of claim 19, wherein the processor is further configured to: determine a second feature associated with an object, wherein the second feature is determined using a voxel-based neural network technique; and combine the first feature and the second feature into a combined feature, wherein the offset associated with the current point is further predicted based on the combined feature (page 4711, second column, 3rd para. & page 4712, second column, 3rd para., e.g., For effective and efficient 3D action recognition, in this work, we propose a simple and strong baseline method called GeometryMotion-Net to extract both geometry and motion features from point cloud sequences without using any voxelization operation, which can be jointly optimized in a fully end-to-end fashion).
Per claims 27 & 37, Liu et al. further teaches the device of claim 26, wherein the voxel-based neural network technique uses a voxelized version of the set of neighboring points, and wherein the voxel-based neural network technique uses a convolutional neural network (page 4711, second column, 3rd para. & page 4712, second column, 3rd para., e.g., For effective and efficient 3D action recognition, in this work, we propose a simple and strong baseline method called GeometryMotion-Net to extract both geometry and motion features from point cloud sequences without using any voxelization operation, which can be jointly optimized in a fully end-to-end fashion).
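For claims 26-27 & 36-37, a rough sketch of a voxel-based branch fused with the point-based feature follows; note that the quoted Liu et al. passage expressly avoids voxelization, so the grid resolution, the 3D-CNN layers, and the feature widths below are assumptions for illustration only:

    import torch
    import torch.nn as nn

    def voxelize(xyz: torch.Tensor, grid: int = 32) -> torch.Tensor:
        """Scatter points in [0,1)^3 into a binary occupancy grid (assumed scheme)."""
        vol = torch.zeros(1, 1, grid, grid, grid)
        idx = (xyz.clamp(0, 1 - 1e-6) * grid).long()
        vol[0, 0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
        return vol

    # Voxel-based branch: a small 3D convolutional network (widths illustrative).
    voxel_cnn = nn.Sequential(
        nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),        # -> (1, 32) second feature
    )

    # Fusion: concatenate the point-based first feature with the voxel-based
    # second feature; the combined feature would then drive offset prediction.
    first_feature = torch.randn(1, 128)               # from the point branch
    second_feature = voxel_cnn(voxelize(torch.rand(1000, 3)))
    combined = torch.cat([first_feature, second_feature], dim=1)   # (1, 160)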
Allowable Subject Matter
5. Claims 21, 28, 31 & 38 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Akhtar et al. US 11,893,691 B1, e.g., a method, computer program, and computer system are provided for processing point cloud data. Quantized point cloud data including a plurality of voxels is received. An occupancy map is generated for the quantized point cloud corresponding to voxels lost during quantization from among the plurality of voxels. A point cloud is reconstructed from the quantized point cloud data based on populating the lost voxels.
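For context only, the occupancy-map idea summarized above can be sketched as a toy lossless round-trip (the grid scales and names are assumptions, not drawn from the reference):

    import numpy as np

    points = np.random.rand(1000, 3)
    fine = np.unique(np.floor(points * 32).astype(np.int64), axis=0)  # occupied voxels
    quantized = np.unique(fine // 2, axis=0)          # coarser quantized cloud

    # The 2x2x2 children merged away by quantization are the "lost" voxels; an
    # occupancy map records which children of each surviving voxel were occupied.
    occupancy_map = {}
    for v in fine:
        occupancy_map.setdefault(tuple(v // 2), []).append(tuple(v % 2))

    # Reconstruction: repopulate the lost voxels from the occupancy map.
    reconstructed = np.array([np.array(p) * 2 + np.array(c)
                              for p, cs in occupancy_map.items() for c in cs])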
Hazeghi et al. US 10,650,588 B2, e.g., a method for generating a three-dimensional model of an object by a scanning system including a client-side device, the client-side device including an acquisition system configured to capture images and an interaction system including a display device and a network interface, the method including capturing a plurality of images of the object by the acquisition system, the images being captured from a plurality of different poses of the acquisition system.
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRFAN HABIB whose telephone number is (571)270-7325. The examiner can normally be reached Mon-Th 9AM-7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Irfan Habib/Examiner, Art Unit 2485