DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The Amendment filed 23 December 2025 (hereinafter “the Amendment”) has been entered and considered. Claims 1-9 have been amended. Claims 1-9, all the claims pending in the application, are rejected. All modifications to the rejections set forth in the present action were necessitated by Applicant's claim amendments; accordingly, this action is made final.
Response to Amendment
2. In view of the amendments to claims 1-9, the rejections below have been modified to address the new claim language.
112(b) Rejections
The rejections of claims 1-7 under 35 U.S.C. § 112(b) are withdrawn in view of the Amendment.
Remarks
On page 9 of the Amendment, the Applicant states: “calculate (S102) a correction coefficient (FIGS. 8 and 9, St/Si) used for matching the three-dimensional size at each of the two or more positions to a known size (St), the correction coefficient being a ratio (St/Si) of the known size (St) to the three-dimensional size; correct (S103) each of the two or more positions (FIG. 10: L1 corrected to L2) by multiplying (e.g., equation (1)) each of the two or more positions with the correction coefficient;”. The Examiner agrees, and understands the correction coefficient to be a scaling ratio utilizing a known size, where the known size is used to correct the size of the 3D representation.
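This understanding can be illustrated with a minimal sketch. All values and position labels below are hypothetical (they are not taken from the application's figures); the sketch only shows the agreed-upon mechanism: compute the coefficient St/Si at each camera position, then correct each position by multiplying it by that coefficient.

```python
# Hypothetical illustration of the claimed correction (values are invented).
# St: known size; Si: three-dimensional size estimated at position i.
St = 2.0                                   # known size (e.g., a known diameter)
sizes = {"L1": 2.5, "L2": 1.6}             # estimated 3D size at each camera position
positions = {"L1": (1.25, 0.0, 2.5),       # uncorrected camera positions
             "L2": (0.4, 0.8, 1.6)}

corrected = {}
for name, Si in sizes.items():
    k = St / Si                            # correction coefficient: ratio St/Si
    x, y, z = positions[name]
    corrected[name] = (k * x, k * y, k * z)  # multiply the position by the coefficient

print(corrected["L1"])                     # each coordinate scaled by 2.0 / 2.5 = 0.8
```

The coefficient is recomputed per position, so positions where the reconstruction over- or under-estimated the size are each rescaled toward the known size.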
Claim Rejections - 35 USC § 103
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over US 2021/0374990 A1 to Naoyuki Miyashita et al. (hereinafter “Miyashita”) in view of “Real World Units: How to Scale and Enhance 3D Data in CloudCompare” by Tom Goskar (hereinafter “Goskar”).
Regarding claim 1, Miyashita discloses a three-dimensional reconstruction device (Miyashita, Abstract: “performs three-dimensional reconstruction processing of an object”), comprising a processor configured to execute a read program to at least, or logic circuits configured to at least:
acquire position-and-orientation information (Miyashita, Abstract: “An image processing system, which estimates a position and a pose of a camera”) and two-dimensional coordinate information (Miyashita, P[0046]: “The feature point matching section 512 extracts image local features from each of the frames acquired by the image acquisition section 511, as two-dimensional feature points (hereinafter, referred to as 2D feature points)”),
wherein the position-and-orientation information indicates two or more different positions of a monocular camera and an orientation of the camera at each of the two or more positions (Miyashita, P[0045]: “The image acquisition section 511 acquires, in time series, photographed images of the pipe, such as endoscopic images of the pipe that are sequentially picked up in the endoscope system 1.”, and Miyashita, P[0067]: “the initialization section 513 selects the inputted two frames in which the 2D feature points are matched with each other, to perform initialization (FIG. 5, step S4). Specifically, the initialization section 513 sets the selected two frames as an initial image pair, to estimate the camera positions/poses of the initial image pair”),
wherein the two-dimensional coordinate information indicates two- dimensional coordinates of one or more points in each of two or more images of a subject acquired at the two or more positions by the camera (Miyashita, P[0067]: “the initialization section 513 selects the inputted two frames in which the 2D feature points are matched with each other, to perform initialization (FIG. 5, step S4). Specifically, the initialization section 513 sets the selected two frames as an initial image pair, to estimate the camera positions/poses of the initial image pair”), and
wherein the position-and-orientation information and the two-dimensional coordinate information are generated through three-dimensional reconstruction processing that uses the two or more images (Miyashita further generates the position-and-orientation information and the two-dimensional coordinate information through three-dimensional reconstruction in Fig. 5, S5, “3D RECONSTRUCTION”, where the information from the 3D reconstruction, which contains position-and-orientation information and 2D coordinate information, is generated and used for “conic shape detection”.);
acquire a three-dimensional size of the subject at each of the two or more positions (Miyashita, P[0049]: “The conic shape detection section 515 as an estimation section detects a conic shape using the group of 3D points including the new 3D points”, where the conic shape detection detects a difference in size of two circular ends.); and
restore a three-dimensional shape of the subject by using the two or more corrected positions, the orientation at each of the two or more positions, and the two- dimensional coordinate information (Miyashita, P[0055]: “the bundle adjustment section 516 corrects the reconstructed 3D points and the camera positions/poses so as to minimize Equation (4) below.”, where the 3D points include the 2D information.).
Miyashita does not explicitly disclose using a scaling coefficient for matching sizing of a 3D projection to a known value, that is, Miyashita does not explicitly disclose “calculate a correction coefficient used for matching the three-dimensional size at each of the two or more positions to a known size, the correction coefficient being a ratio of the known size to the three-dimensional size;
correct each of the two or more positions by multiplying each of the two or more positions with the correction coefficient;”
However, Goskar discloses “calculate a correction coefficient used for matching the three-dimensional size at each of the two or more positions to a known size, the correction coefficient being a ratio of the known size to the three-dimensional size” on pages 4-5: “The tool will give a distance in units and could be any number large or small. These are relative units and we need to scale the model to reflect real world units. In order to do this we need a scale factor.”;
and discloses “correct each of the two or more positions by multiplying each of the two or more positions with the correction coefficient” on pages 4-5: “The slab is 6.323777 units in length. The real distance is really 1.8 metres. So we need to scale 6.323777 to 1.8. This is done by simple division of the real world distance by the random unit distance to get a scale factor.
Distance divided by Units = Scale Factor
1.8 / 6.323777 = 0.284640018141057
Scale factor = 0.284640018141057
The maximum precision in CloudCompare is 8 decimal places so we will round the Scale Factor down to 0.28464002. Now that we have the Scale Factor we can move to actually scaling the model.”, where the scale factor/correction coefficient is multiplied by each of the two or more positions, which represent an edge on the model, thereby correcting the positions.
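For reference, the arithmetic quoted from Goskar can be reproduced in a short sketch. The numeric values are those given in the quotation; rounding to 8 decimal places mirrors the CloudCompare precision limit Goskar describes.

```python
# Reproduce the scale-factor arithmetic quoted from Goskar (pages 4-5).
real_distance = 1.8          # known real-world length of the slab, in metres
model_units = 6.323777       # measured length of the slab, in relative model units

# Scale factor = real-world distance / relative-unit distance,
# rounded to CloudCompare's maximum precision of 8 decimal places.
scale_factor = round(real_distance / model_units, 8)
print(scale_factor)          # -> 0.28464002, matching the quoted value

# Multiplying model coordinates by the scale factor converts them to metres;
# the measured edge itself scales back to (approximately) the real length.
print(model_units * scale_factor)
```

The same multiplication applied to every vertex of the model rescales all positions to real-world units, which is the correction mapped to the claim language above.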
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Miyashita to utilize a scaling factor, as taught by Goskar, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have provided the benefit of rendering 3D objects to actual, real-world scaling.
Regarding claim 2, Miyashita discloses wherein the processor is configured to execute the read program to, or the logic circuits are configured to, calculate the three-dimensional size of the subject at each of the two or more positions by using the position-and-orientation information (P[0049]: “The conic shape detection section 515 as an estimation section detects a conic shape using the group of 3D points including the new 3D points.”), where the group of 3D points provides the size of the subject and the positions are the positions at each end of the conic shape.
Regarding claim 3, wherein the processor is configured to execute the read program to, or the logic circuits are configured to:
acquire shape information indicating a three-dimensional shape of the subject (Miyashita, P[0049]: “The conic shape detection section 515 as an estimation section detects a conic shape using the group of 3D points including the new 3D points.”); and
calculate, as the three-dimensional size, a depth of the three-dimensional shape indicated by the shape information of the subject captured in a field of view of the camera at each of the two or more positions (Miyashita, Equation (1), P[0050]: “If a 3D point in the homogeneous coordinate system X = [x, y, z, 1]^T satisfies Equation (1) below, the 3D point is supposed to represent the surface of the cone.”, where z is the value of depth, therefore the depth of the shape is known, and where Fig. 4 shows that the depth is the distance between Vc(ẑ) and X(ẑ)).
Regarding claim 4, wherein the processor is configured to execute the read program to, or the logic circuits are configured to, calculate the three-dimensional size based on a distance between two positions included in the two or more positions: the determination of a size of an object requires the measured or given distance between a plurality of positions. Miyashita discloses this feature in P[0049]: “The conic shape detection section 515 as an estimation section detects a conic shape using the group of 3D points including the new 3D points”, where the conic shape detection detects a difference in size of two circular ends.
Regarding claim 5, wherein the processor is configured to execute the read program to, or the logic circuits are configured to:
acquire shape information indicating a three-dimensional shape of the subject (Miyashita, P[0049]: “The conic shape detection section 515 as an estimation section detects a conic shape using the group of 3D points including the new 3D points”); and
calculate the three-dimensional size by using a predetermined three-dimensional shape that approximates the three-dimensional shape indicated by the shape information, where Miyashita uses the predetermined shape of a cylinder, as shown in Fig. 4, that approximates the 3D shape in P[0049]: “the present embodiment focuses on the fact that the object to be inspected is a pipe, that is, a cylinder having a constant inner diameter, to correct the positions of the 3D points by utilizing that the 3D points are located on the inner wall of the cylinder.”
Regarding claim 6, wherein the processor is configured to execute the read program to, or the logic circuits are configured to acquire, from an input device (Miyashita, P[0030]: “an input I/F (interface)”), the three-dimensional size input into the input device (Miyashita, P[0079]: “Although it is preferable that a value given in advance as a set value by an inspector or the like is used as the cylinder radius value r to be used in the cylinder constrained conditions in the bundle adjustment, a value estimated by the conic shape detection section 515 may be used. An example of the value is the one estimated by the conic shape detection section 515 based on the distance between the reconstructed 3D-point coordinates and the axis of the cone (estimated center axis) at the early stage of the above-described 3D reconstruction processing.”).
Regarding claim 7, wherein the processor is configured to execute the read program to, or the logic circuits are configured to calculate the correction coefficient by calculating a ratio of a target value with the three-dimensional size at each of the two or more positions is disclosed by Goskar on Pages 4-5, “The slab is 6.323777 units in length. The real distance is really 1.8 metres. So we need to scale 6.323777 to 1.8. This is done by simple division of the real world distance by the random unit distance to get a scale factor.
Distance divided by Units = Scale Factor
1.8 / 6.323777 = 0.284640018141057
Scale factor = 0.284640018141057
The maximum precision in CloudCompare is 8 decimal places so we will round the Scale Factor down to 0.28464002. Now that we have the Scale Factor we can move to actually scaling the model.”, where the scale factor/correction coefficient is multiplied by each of the two or more positions, which represent an edge on the model, thereby correcting the positions.
Claims 8 and 9 recite features nearly identical to those recited in claim 1. Claims 8 and 9 are rejected for reasons analogous to those discussed above in conjunction with claim 1.
Conclusion
5. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TY M BEATTY whose telephone number is (703) 756-5370. The examiner can normally be reached Mon-Fri: 8AM-4PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TY MITCHELL BEATTY/Examiner, Art Unit 2663
/GREGORY A MORSE/ Supervisory Patent Examiner, Art Unit 2698