DETAILED ACTIONS
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim that this application is a National Stage of International Application No. PCT/KR2022/012457, filed on August 19, 2022, and of the benefit of foreign priority from Korean Patent Application No. KR10-2021-0109294, filed on August 19, 2021.
Information Disclosure Statement
The information disclosure statements (“IDS”) filed on 12/15/2024, 05/28/2024, and 07/17/2025 were reviewed and the listed references were noted.
Drawings
The drawings (4 pages) have been considered and placed on record in the file.
Status of Claims
Claims 1-13 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because a “computer program product” is a product that does not have a physical or tangible form and is considered software per se.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 6, 7-9, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Sinko et al., "3D Registration of the Point Cloud Data Using ICP Algorithm in Medical Image Analysis" (2018), hereinafter referred to as Sinko, in view of Hong et al., "Probabilistic Normal Distributions Transform Representation for Accurate 3D Point Cloud Registration" (2017), hereinafter referred to as Hong.
[Claim 1]
Sinko discloses a method for matching a point cloud and a CT image (Sinko, Fig. 2), comprising:
obtaining target point cloud data from a surface image of an object photographed by a 3D camera (Sinko, Fig. 2, target model, Fig. 5, point cloud data of input data A is obtained using 3D reconstruction, the 3D camera could be CT or MRI, the object is the skull, Section V, “For the ICP registration we need to obtain two point clouds (PCs) from input CT/MRI data.”);
matching (Sinko, Fig. 2, Registration of target model and test model, Section V, “For the ICP registration we need to obtain two point clouds (PCs) from input CT/MRI data.”) the obtained target point cloud data (Sinko, Fig. 2, target model) with reference point cloud data (Sinko, Fig. 2, test model) by using an iterative registration algorithm (Sinko, Section IV.A, “Firstly, the ICP algorithm is based on looking for pairs of closest points between two sets. Secondly, an estimate of optimal rigid transformation that aligns two sets of data is created. Finally, a solid transformation is applied to points of scenic data. The procedure is repeated until convergence is achieved”), wherein the reference point cloud data refers to data obtained from a previous point cloud of the object or a CT image of the object (Sinko, Fig. 2, test model, Fig. 5, point cloud data of input data B is obtained using 3D reconstruction, the 3D camera could be CT or MRI, the object is the skull, Section V, “For the ICP registration we need to obtain two point clouds (PCs) from input CT/MRI data.”); and
providing the transformation information between the target point cloud data and the reference point cloud data, on the basis of the matching (Sinko, Fig. 8, apply transformation, Section V, “The proposed registration of 3D models based on ICP method is described in Fig. 8. Once we have performed rigid registration, we can take three optional steps. The first is to check the point cloud alignment with the pctransform function, as the moving cloud point transformation is detected. The second step is to join two point clouds into one object via the pcmerge function. This step is mostly used to reconstruct 3D scenes or to scan the scene. And last one is to apply the transformation matrix to moving point cloud and again show the differences between fixed and aligned PC (see Fig. 9).”).
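For illustration only, the ICP procedure quoted above from Sinko’s Section IV.A (pair each point with its closest counterpart, estimate the optimal rigid transformation, apply it, and repeat until convergence) can be sketched as a minimal 2D example; the function names are hypothetical and this is not Sinko’s implementation:

```python
import math

def closest_pairs(src, dst):
    # Pair each source point with its nearest destination point (brute force).
    pairs = []
    for p in src:
        q = min(dst, key=lambda d: (d[0] - p[0]) ** 2 + (d[1] - p[1]) ** 2)
        pairs.append((p, q))
    return pairs

def estimate_rigid_2d(pairs):
    # Closed-form 2D Procrustes: the rotation angle and translation that
    # best align the paired points in the least-squares sense.
    n = len(pairs)
    cx_s = sum(p[0] for p, _ in pairs) / n
    cy_s = sum(p[1] for p, _ in pairs) / n
    cx_d = sum(q[0] for _, q in pairs) / n
    cy_d = sum(q[1] for _, q in pairs) / n
    sxx = sxy = 0.0
    for (px, py), (qx, qy) in pairs:
        ax, ay = px - cx_s, py - cy_s
        bx, by = qx - cx_d, qy - cy_d
        sxx += ax * bx + ay * by   # accumulated dot products
        sxy += ax * by - ay * bx   # accumulated cross products
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty

def icp_2d(src, dst, iters=50, tol=1e-9):
    # Repeat pairing, estimation, and transformation until convergence,
    # as in the ICP description quoted from Sinko.
    cur = list(src)
    for _ in range(iters):
        theta, tx, ty = estimate_rigid_2d(closest_pairs(cur, dst))
        c, s = math.cos(theta), math.sin(theta)
        nxt = [(c * x - s * y + tx, s * x + c * y + ty) for x, y in cur]
        moved = max(abs(a - x) + abs(b - y)
                    for (a, b), (x, y) in zip(nxt, cur))
        cur = nxt
        if moved < tol:
            break
    return cur
```

Because ICP relies on nearest-neighbor pairing, this sketch (like the full algorithm) converges to the correct alignment only when the initial misalignment is modest.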
Sinko does not explicitly disclose matching the obtained target point cloud data with reference point cloud data by using normal distribution transform.
However, Hong teaches matching the obtained target point cloud data with reference point cloud data by using normal distribution transform (Hong, Section I, “The pose variation can be estimated by registering a pair of point clouds collected from a range finder at a particular time interval. As a registration algorithm for this, the variants of iterative closest point (ICP) and normal distributions transform (NDT) are commonly used.”, “The recent variant called distribution-to-distribution NDT (d2d-NDT) substantially improves the processing speed by registering new NDT to reference NDT”. Section II, “NDT can be used to represent the scans which are registered to estimate the rigid transformation parameter Θ between a pair of scans. Since the rigid transformation T(Θ) is SE(3), and it is composed of the rotation matrix R ∈ SO(3) and translation vector t = [ tx ty tz ] T ∈ R 3 , Θ can be represented as [ tx ty tz θx θy θz ] T , where θx, θy, θz are the rotation angles about x, y, z axes, respectively. Θˆ can be initialized to the initial guess Θˆ 0. If no initial guess, it is initialized to zero vector 0.”).
Sinko and Hong are both considered analogous to the claimed invention because they are in the same field of point cloud registration. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Sinko to incorporate the teachings of Hong of matching the obtained target point cloud data with reference point cloud data by using normal distribution transform. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the accuracy of point cloud registration by using the probabilities of point samples (Hong, Abstract).
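The rigid-transformation parameterization quoted from Hong’s Section II, Θ = [tx ty tz θx θy θz]ᵀ, can be illustrated with a short sketch that builds a rotation matrix from the three angles and applies it together with the translation. The axis-rotation order used here (Rz·Ry·Rx) is an assumption for illustration, since the quoted passage does not fix one:

```python
import math

def rigid_transform(theta6, points):
    # theta6 = [tx, ty, tz, rx, ry, rz]: translation plus rotation angles
    # about the x, y, z axes, following Hong's 6-vector parameterization.
    tx, ty, tz, rx, ry, rz = theta6
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    # R = Rz * Ry * Rx (one common convention, assumed here).
    R = [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,               cy * cx],
    ]
    out = []
    for x, y, z in points:
        out.append((
            R[0][0] * x + R[0][1] * y + R[0][2] * z + tx,
            R[1][0] * x + R[1][1] * y + R[1][2] * z + ty,
            R[2][0] * x + R[2][1] * y + R[2][2] * z + tz,
        ))
    return out
```

Consistent with Hong’s note that Θ̂ is initialized to the zero vector absent an initial guess, the zero vector here yields the identity transformation.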
[Claim 2]
The combination of Sinko in view of Hong discloses the method of claim 1 (Sinko, Fig. 2), wherein the matching (Sinko, Fig. 2, Registration of target model and test model) includes:
selecting point cloud data corresponding to a region of interest (Rol) from each of the reference point cloud data and the target point cloud data (Sinko, Fig. 8, the point cloud data of the skulls, the skull is considered the RoI, Section V, “The second step is to join two point clouds into one object via the pcmerge function. This step is mostly used to reconstruct 3D scenes or to scan the scene.”);
obtaining a probability density function of the selected reference point cloud data (Hong, Section I, “we model the sensor uncertainty and assign a probability density function (pdf) to each point sample”); and
calculating information about a matching degree of the selected target point cloud data and the obtained probability density function using a transformation estimate (Hong, Section I, “To overcome the problem, we model the sensor uncertainty and assign a probability density function (pdf) to each point sample. The mean and covariance of the point set in a cell are modified by using the pdfs, and the modification allows all of the occupied cells generate NDT. As the results, it improves not only the representation but also the registration performance of NDT. We also derive that the p2d-NDT and the d2d-NDT are generalized through the modification”).
The proposed combination as well as the motivation for combining the Sinko and Hong references presented in the rejection of Claim 1, apply to Claim 2 and are incorporated herein by reference. Thus, the method in Claim 2 is met by Sinko and Hong.
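The NDT steps mapped to Hong above (fit a probability density function to the reference cloud, then compute a matching degree for the transformed target cloud) can be illustrated with a minimal 2D sketch. The grid cell size, the covariance regularization constant, and the function names are assumptions for illustration; Hong’s actual p2d/d2d-NDT formulation is more elaborate:

```python
import math
from collections import defaultdict

def build_ndt(ref_points, cell=1.0):
    # Bin the reference cloud into grid cells and fit a 2D Gaussian
    # (mean, covariance) to the points of each occupied cell.
    cells = defaultdict(list)
    for x, y in ref_points:
        cells[(math.floor(x / cell), math.floor(y / cell))].append((x, y))
    ndt = {}
    for key, pts in cells.items():
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        cxx = sum((p[0] - mx) ** 2 for p in pts) / n + 1e-3  # regularized
        cyy = sum((p[1] - my) ** 2 for p in pts) / n + 1e-3
        cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n
        ndt[key] = (mx, my, cxx, cxy, cyy)
    return ndt

def ndt_score(ndt, target_points, cell=1.0):
    # "Matching degree": the sum of Gaussian densities of the (already
    # transformed) target points under their cells' distributions.
    score = 0.0
    for x, y in target_points:
        g = ndt.get((math.floor(x / cell), math.floor(y / cell)))
        if g is None:
            continue
        mx, my, cxx, cxy, cyy = g
        det = cxx * cyy - cxy * cxy
        dx, dy = x - mx, y - my
        # Squared Mahalanobis distance via the 2x2 inverse covariance.
        m = (cyy * dx * dx - 2 * cxy * dx * dy + cxx * dy * dy) / det
        score += math.exp(-0.5 * m)
    return score
```

An optimizer would search over transformation estimates for the one maximizing this score; a well-aligned target cloud scores higher than a misaligned one.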
[Claim 3]
The combination of Sinko in view of Hong discloses the method of claim 2 (Sinko, Fig. 2), wherein the selecting of point cloud data corresponding to the Rol includes:
removing data having an outlier from each of the reference point cloud data and the target point cloud data (Sinko, Section V, “When all these steps are done we have got two separate PCs. One is Fixed PC and another is moving PC. In the next steps, we need to prepare PCs for registration. This process is visualized in PCs registration workflow in Fig. 8. First step is to remove outliers and noise in both PCs with function pcdenoise”, Fig. 8); and
selecting point cloud data corresponding to the Rol, from the reference point cloud data and the target point cloud data from which the data having an outlier is removed (Sinko, Fig. 8, the point cloud of the skull corresponds to the RoI, Section V, “The second step is to join two point clouds into one object via the pcmerge function. This step is mostly used to reconstruct 3D scenes or to scan the scene.”).
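The outlier-removal step that Sinko performs with MATLAB’s pcdenoise function can be illustrated with a simple statistical sketch (not pcdenoise’s actual algorithm): points whose mean distance to their k nearest neighbors is far above the cloud-wide average are dropped. The values of k and the cutoff ratio are arbitrary choices for illustration:

```python
import math

def denoise(points, k=4, ratio=2.0):
    # Remove statistical outliers: a point is dropped when its mean
    # k-nearest-neighbor distance exceeds the global mean by more than
    # `ratio` standard deviations.
    def knn_mean(p):
        d = sorted(math.dist(p, q) for q in points if q is not p)
        return sum(d[:k]) / k

    means = [knn_mean(p) for p in points]
    mu = sum(means) / len(means)
    sd = math.sqrt(sum((m - mu) ** 2 for m in means) / len(means))
    return [p for p, m in zip(points, means) if m <= mu + ratio * sd]
```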
[Claim 6]
The combination of Sinko in view of Hong discloses the method of claim 1 (Sinko, Fig. 2), further comprising: extracting a contour from a 3D image of the object by performing segmentation or edge extraction on the CT image of the object (Sinko, Fig. 5, Sinko teaches using image segmentation on the CT image to extract the contour of the skull); and obtaining reference point cloud data of the object from the extracted contour (Sinko, Fig. 5, the image segmentation of the CT image is converted into point cloud).
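The step mapped above (segmentation or edge extraction on the CT image, followed by conversion of the extracted contour to point cloud data) can be illustrated with a toy sketch on a binary segmentation mask. This boundary-pixel test is a generic edge-extraction illustration, not Sinko’s pipeline:

```python
def contour_points(mask):
    # Extract the boundary pixels of a binary mask as a point list: a pixel
    # belongs to the contour if it is set and has at least one unset (or
    # out-of-bounds) 4-neighbor.
    h, w = len(mask), len(mask[0])
    pts = []
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(not (0 <= rr < h and 0 <= cc < w) or not mask[rr][cc]
                   for rr, cc in nbrs):
                pts.append((r, c))
    return pts
```

The resulting (row, column) list, scaled by voxel spacing, would serve as the reference point cloud in the claimed method.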
Claims 7-9 and 12 are rejected for similar reasons as those described in claims 1-3 and 6. The combination of Sinko in view of Hong discloses the additional elements of Claims 7-9 and 12, including: an apparatus (Sinko, Abstract, “computer”) for matching a point cloud and a CT image (Sinko, Fig. 2), comprising: an input unit (Sinko, Fig. 2, the input units are the test and target models); an output unit (Sinko, Fig. 8, the aligned point cloud); a memory (Sinko, Abstract, “computer”, which comprises a memory and a processor); and at least one processor (Sinko, Abstract, “computer”, which comprises a memory and a processor), wherein the at least one processor executes an instruction (Sinko, Fig. 8) stored in the memory (Sinko, Abstract, “computer”, which comprises a memory and a processor). The proposed combination as well as the motivation for combining the Sinko and Hong references presented in the rejection of Claim 1 apply to Claims 7-9 and 12 and are incorporated herein by reference. Thus, the apparatus of Claims 7-9 and 12 is met by Sinko and Hong.
Claim 13 is rejected for similar reasons as those described in claim 1. The combination of Sinko in view of Hong discloses the additional elements of Claim 13, including: a computer program product (Sinko, Fig. 2 and Fig. 8, Abstract, “computer”, which comprises a memory and a processor) including a recording medium in which a program to perform a method for matching a point cloud and a CT image is stored (Sinko, Fig. 2). The proposed combination as well as the motivation for combining the Sinko and Hong references presented in the rejection of Claim 1 apply to Claim 13 and are incorporated herein by reference. Thus, the computer program product of Claim 13 is met by Sinko and Hong.
Claims 4-5 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Sinko in view of Hong in further view of Bellekens et al., “A Survey of Rigid 3D Pointcloud Registration Algorithms” (2014), hereinafter referred to as Bellekens.
[Claim 4]
The combination of Sinko in view of Hong discloses the method of claim 2 (Sinko, Fig. 2).
The combination of Sinko in view of Hong does not explicitly disclose wherein in the providing of transformation information, 6 degree of freedom (6DoF) transformation information of the selected target point cloud data based on the selected reference point cloud data is provided, on the basis of calculated information about the matching degree.
However, Bellekens teaches wherein in the providing of transformation information (Bellekens, Section IV.C, “The input of the ICP algorithm consists of a source point cloud and a target point cloud. Point correspondences between these point clouds are defined based on a nearest neighbor approach or a more elaborate scheme using geometrical features or color information. SVD, as explained in the previous section, is used to obtain an initial estimate of the affine transformation matrix that aligns both point clouds. After registration, this whole process is repeated by removing outliers and redefining the point correspondences.”), 6 degree of freedom (6DoF) (Bellekens, Section I, “These registration algorithms can be classified coarsely into rigid and non-rigid approaches. Rigid approaches assume a rigid environment such that the transformation can be modeled using only 6 Degrees Of Freedom (DOF).”) transformation information of the selected target point cloud data based on the selected reference point cloud data is provided, on the basis of calculated information about the matching degree (Bellekens, Section IV, “Both rigid and non-rigid registration algorithms can be further categorized into pairwise registration algorithms and multiview registration methods. Pairwise registration algorithms calculate a rigid transformation between two subsequent point clouds while the multi-view registration process takes multiple point clouds into account to correct for the accumulated drift that is introduced by pairwise registration methods”, “we discuss five widely used rigid registration algorithms. Each of these methods tries to estimate the optimal rigid transformation that maps a source point cloud on a target point cloud. Both PCA alignment and singular value decomposition are pairwise registration methods based on the covariance matrices and the cross correlation matrix of the pointclouds, while the ICP algorithm and its variants are based on iteratively minimizing a cost function that is based on an estimate of point correspondences between the pointclouds.”).
Sinko, Hong, and Bellekens are all considered analogous to the claimed invention because they are in the same field of point cloud registration. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method taught by Sinko and Hong to incorporate the teachings of Bellekens wherein, in the providing of transformation information, 6 degree of freedom (6DoF) transformation information of the selected target point cloud data based on the selected reference point cloud data is provided, on the basis of calculated information about the matching degree. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. The motivation for the proposed modification would have been to improve the accuracy of point cloud registration.
[Claim 5]
The combination of Sinko in view of Hong in further view of Bellekens discloses the method of claim 4 (Sinko, Fig. 2), wherein the providing of the transformation information (Sinko, Fig. 8, apply transformation, Section V, “The proposed registration of 3D models based on ICP method is described in Fig. 8. Once we have performed rigid registration, we can take three optional steps. The first is to check the point cloud alignment with the pctransform function, as the moving cloud point transformation is detected. The second step is to join two point clouds into one object via the pcmerge function. This step is mostly used to reconstruct 3D scenes or to scan the scene. And last one is to apply the transformation matrix to moving point cloud and again show the differences between fixed and aligned PC (see Fig. 9).”) includes:
providing point cloud data obtained by transforming the selected target point cloud data according to 6DoF transformation information (Bellekens, Section I, “These registration algorithms can be classified coarsely into rigid and non-rigid approaches. Rigid approaches assume a rigid environment such that the transformation can be modeled using only 6 Degrees Of Freedom (DOF).”, Section IV.C, “The input of the ICP algorithm consists of a source point cloud and a target point cloud. Point correspondences between these point clouds are defined based on a nearest neighbor approach or a more elaborate scheme using geometrical features or color information. SVD, as explained in the previous section, is used to obtain an initial estimate of the affine transformation matrix that aligns both point clouds. After registration, this whole process is repeated by removing outliers and redefining the point correspondences.”).
The proposed combination as well as the motivation for combining the Sinko, Hong, and Bellekens references presented in the rejection of Claim 4 apply to Claim 5 and are incorporated herein by reference. Thus, the method of Claim 5 is met by Sinko, Hong, and Bellekens.
Claims 10-11 are rejected for similar reasons as those described in claims 4-5. The combination of Sinko in view of Hong in further view of Bellekens discloses the additional elements of Claims 10-11, including: an apparatus (Sinko, Abstract, “computer”) for matching a point cloud and a CT image (Sinko, Fig. 2), comprising: an input unit (Sinko, Fig. 2, the input units are the test and target models); an output unit (Sinko, Fig. 8, the aligned point cloud); a memory (Sinko, Abstract, “computer”, which comprises a memory and a processor); and at least one processor (Sinko, Abstract, “computer”, which comprises a memory and a processor), wherein the at least one processor executes an instruction (Sinko, Fig. 8) stored in the memory (Sinko, Abstract, “computer”, which comprises a memory and a processor). The proposed combination as well as the motivation for combining the Sinko, Hong, and Bellekens references presented in the rejection of Claim 4 apply to Claims 10-11 and are incorporated herein by reference. Thus, the apparatus of Claims 10-11 is met by Sinko, Hong, and Bellekens.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Schollemann et al. (previously cited in an IDS filed by applicant), “An Anatomical Thermal 3D Model in Preclinical Research: Combining CT and Thermal Images,” discloses registering a 3D point cloud with a CT image.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DENISE G ALFONSO whose telephone number is (571)272-1360. The examiner can normally be reached Monday - Friday 7:30 - 5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DENISE G ALFONSO/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662