Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/07/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification
The disclosure is objected to because of the following informalities:
The specification introduces the acronym “CAD” in par. 3 without spelling out its full term. Appropriate correction is required.
Claim Objections
Claim 1 is objected to because of the following informalities:
Claim 1 recites “a identification shape unit” in line 2. The examiner believes it should recite “an identification shape unit” instead. Appropriate correction is required.
Claim 1 introduces the acronym “CAD” in line 9 without spelling out its full term. Appropriate correction is required.
Claim 9 recites “A computer-readable storage medium that stores an inspection assistance computer-readable recording medium storing a program” in lines 1-2. The examiner suggests amending it to recite “A computer-readable storage medium that stores an inspection assistance program” instead. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“identification shape unit” in claim 1;
“defect detection unit” in claim 1;
“coordinate transformation parameter estimation unit” in claim 1;
“three-dimensional CAD model position change unit” in claim 1;
“two-dimensional simulated image extraction unit” in claim 1;
“depiction unit” in claim 1;
“report creation unit” in claim 6; and
“coordinate transformation parameter correction unit” in claim 7.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the three-dimensional CAD model" in lines 8-9. There is insufficient antecedent basis for this limitation in the claim. For prior art purposes, the limitation has been interpreted as “a three-dimensional CAD model.”
Claims 2-7 are rejected for the same reason due to their dependency on claim 1.
Claim 8 recites the limitation "the three-dimensional CAD model" in line 8. There is insufficient antecedent basis for this limitation in the claim. For prior art purposes, the limitation has been interpreted as “a three-dimensional CAD model.”
Claim 9 recites the limitation "the three-dimensional CAD model" in line 10. There is insufficient antecedent basis for this limitation in the claim. For prior art purposes, the limitation has been interpreted as “a three-dimensional CAD model.”
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-6 and 8-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over JP Patent Application Publication No. 2021-021669 to Shota et al. (hereinafter Shota) in view of US Patent Application Publication No. 2024/0193852 to Sakamoto.
For claims 1 and 8, Shota as applied teaches an inspection assistance system comprising:
an identification shape unit that recognizes a shape of an inspection target object included in a two-dimensional photographed image based on the two-dimensional photographed image obtained by capturing the inspection target object with an imaging device (see, e.g., par. 47 and FIG. 1, which teach identifying a shape of the inspection object in a 2D real image acquired by an image acquisition unit);
a defect detection unit to detect a defect of the inspection target object included in the two-dimensional photographed image (see, e.g., par. 56 and FIG. 1, which teach detecting a defect of the object in the 2D real image);
a coordinate transformation parameter estimation unit to estimate a coordinate transformation parameter to transform a first coordinate system corresponding to the three-dimensional CAD model into a second coordinate system corresponding to a viewpoint of the imaging device that has captured the two-dimensional photographed image based on the shape recognized by the identification shape unit and on a three-dimensional CAD model of the inspection target object (see, e.g., par. 57 and FIG. 1, which teach extracting a 2D simulated image corresponding to the 2D real image from the 3D CAD model after the part corresponding to the reference parts of the inspection object in the real image is recognized/identified);
a three-dimensional CAD model position change unit to modify a position and a direction of viewpoint information of the three-dimensional CAD model of the inspection target object by using the coordinate transformation parameter (see, e.g., pars. 57 and 62-65 and FIGS. 1 and 4-6, which teach extracting a 2D simulated image corresponding to the 2D real image from the 3D CAD model after the part corresponding to the reference parts of the inspection object in the real image is recognized/identified);
a two-dimensional simulated image extraction unit to extract a two-dimensional simulated image corresponding to the two-dimensional photographed image from the three-dimensional CAD model after the viewpoint information is modified (see, e.g., pars. 57 and 62-65 and FIGS. 1 and 4-6, which teach extracting a 2D simulated image corresponding to the 2D real image from the 3D CAD model after the part corresponding to the reference parts of the inspection object in the real image is recognized/identified); and
a depiction unit to depict a defect image illustrating the defect detected by the defect detection unit on the three-dimensional CAD model by fitting the two-dimensional photographed image including the defect image to the two-dimensional simulated image (see, e.g., pars. 58-59 and 71 and FIGS. 1 and 4, which teach depicting the defect image on the 3D CAD model by adjusting the defect image contained in the 2D real image to fit the 2D simulated image).
Shota teaches extracting, from the 3D CAD model, a simulated image that corresponds to the real image and contains the reference parts of the real image (see, e.g., pars. 57 and 62-65 and FIGS. 1 and 4-6). The examiner believes that the cited portion of Shota may suggest estimating “a coordinate transformation parameter to transform a first coordinate system corresponding to the three-dimensional CAD model into a second coordinate system corresponding to a viewpoint of the imaging device that has captured the two-dimensional photographed image based on the shape recognized by the identification shape unit and on a three-dimensional CAD model of the inspection target object” and modifying “a position and a direction of viewpoint information of the three-dimensional CAD model of the inspection target object by using the coordinate transformation parameter.”
However, in the interest of compact prosecution, the examiner relies on Sakamoto in the analogous art, which explicitly teaches the quoted limitations. Sakamoto teaches:
a coordinate transformation parameter estimation unit to estimate a coordinate transformation parameter to transform a first coordinate system corresponding to the three-dimensional CAD model into a second coordinate system corresponding to a viewpoint of the imaging device that has captured the two-dimensional photographed image based on the shape recognized by the identification shape unit and on a three-dimensional CAD model of the inspection target object (see, e.g., pars. 156-165 and FIGS. 2 and 5-7 of Sakamoto, which teach estimating the relative position and posture of the camera between the selected frame and the reference frame to transform a set of coordinates corresponding to the 3D model into another set of coordinates corresponding to the viewpoint of the camera that has captured the reference frame based on the 3D model and the 3D shape data therein);
a three-dimensional CAD model position change unit to modify a position and a direction of viewpoint information of the three-dimensional CAD model of the inspection target object by using the coordinate transformation parameter (see, e.g., pars. 166-167 and FIG. 2 of Sakamoto, which teach modifying a position and posture of viewpoint information of the 3D model by associating the coordinates of the target image with the 3D model based on the position and posture of the camera that acquired the reference image and the condition, i.e., the internal parameter and the distortion correction parameter, for generating the 3D model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the viewpoint of the 3D CAD model as taught by Sakamoto because doing so would yield predictable results of allowing an accurate estimation of the position of the defect when the viewpoints are modified (see, e.g., par. 166 of Sakamoto and MPEP 2143(I)(D)).
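For illustration only (not part of the prosecution record), the coordinate transformation described above can be sketched as a rigid transform with assumed, hypothetical values: a rotation R and translation t (the external parameters) map a point from the CAD model's coordinate system (first coordinate system) into the camera's coordinate system (second coordinate system). None of the numeric values below are drawn from Shota or Sakamoto.

```python
import numpy as np

def transform_to_camera(p_model, R, t):
    """Map a 3D point from the model frame to the camera frame: p' = R p + t."""
    return R @ p_model + t

# Assumed example pose: camera rotated 90 degrees about the Z axis,
# offset 5 units along the optical axis.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.0, 0.0, 5.0])

p_model = np.array([1.0, 0.0, 0.0])   # a point in the CAD model frame
p_camera = transform_to_camera(p_model, R, t)
# p_camera ≈ [0, 1, 5]
```

Estimating (R, t) from correspondences between recognized reference parts and the CAD model is what the cited passages of Sakamoto are relied on for; the snippet only shows what applying the estimated parameters means.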
For claim 2, Shota as applied teaches that the coordinate transformation parameter estimation unit specifies in advance a plurality of first reference portions of the inspection target object in the two-dimensional photographed image based on the shape recognized by the identification shape unit (see, e.g., pars. 47-51 and FIGS. 1-2, which teach detecting characteristic parts of the inspection object as the reference parts).
Shota does not explicitly teach estimating the coordinate transformation parameter such that a plurality of second reference portions registered in advance in the three-dimensional CAD model coincide with the plurality of first reference portions. Sakamoto in the analogous art teaches estimating the relative positions and postures of the camera with respect to the reference frame such that the points in the 3D model coincide with the points of the reference frame (see, e.g., pars. 164-165 and FIGS. 6-7 of Sakamoto).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shota to use coinciding reference parts as taught by Sakamoto because doing so would yield predictable results of allowing an accurate estimation of the position of the defect when the viewpoints are modified (see, e.g., par. 166 of Sakamoto and MPEP 2143(I)(D)).
For claim 3, while Shota does not explicitly teach, Sakamoto in the analogous art teaches that the coordinate transformation parameter includes an external parameter for defining a position and a posture of the imaging device in the first coordinate system (see, e.g., pars. 156-165 and FIGS. 2 and 5-7 of Sakamoto, which teach estimating the relative position and posture of the camera between the selected frame and the reference frame).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the viewpoint of the 3D CAD model as taught by Sakamoto because doing so would yield predictable results of allowing an accurate estimation of the position of the defect when the viewpoints are modified (see, e.g., par. 166 of Sakamoto and MPEP 2143(I)(D)).
For claim 4, while Shota does not explicitly teach, Sakamoto in the analogous art teaches that the coordinate transformation parameter further includes an internal parameter relating to the imaging device (see, e.g., pars. 128 and 166-167 and FIG. 2 of Sakamoto, which teach modifying a position and posture of viewpoint information of the 3D model based on the condition, i.e., the internal parameter of the camera).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the viewpoint of the 3D CAD model as taught by Sakamoto because doing so would yield predictable results of allowing an accurate estimation of the position of the defect when the viewpoints are modified (see, e.g., par. 166 of Sakamoto and MPEP 2143(I)(D)).
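For illustration only (not part of the prosecution record), an internal (intrinsic) parameter of the kind claim 4 refers to can be sketched as a pinhole-camera matrix K projecting a camera-frame 3D point to pixel coordinates. The focal lengths and principal point below are assumed, hypothetical values, not drawn from Sakamoto.

```python
import numpy as np

# Assumed intrinsic parameters (focal lengths and principal point, in pixels).
fx, fy = 800.0, 800.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(p_camera, K):
    """Project a 3D point in the camera frame to 2D pixel coordinates."""
    uvw = K @ p_camera          # homogeneous image coordinates
    return uvw[:2] / uvw[2]     # perspective divide

uv = project(np.array([0.0, 1.0, 5.0]), K)
# uv = [320., 400.]
```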
For claim 5, Shota in view of Sakamoto teaches that the depiction unit adjusts a position and a dimension of the depicted defect image by performing plane transformation of the two-dimensional photographed image including the defect image based on a result of comparing the two-dimensional photographed image with the two-dimensional simulated image (see, e.g., pars. 58-59 and FIG. 1 of Shota, which teach adjusting the position and dimension of the depicted defect image by performing an affine transformation on the defect image included in the 2D real image based on a result of comparing the two-dimensional real image and the 2D simulated image).
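For illustration only (not part of the prosecution record), the plane (affine) transformation relied on for claim 5 can be sketched as applying a 2x3 affine matrix to 2D points, e.g., the corners of a detected defect region, to align the photographed image with the simulated image. The matrix values below are assumed, hypothetical values, not drawn from Shota.

```python
import numpy as np

# Assumed affine matrix: uniform 0.5x scale plus a (10, 20) translation.
A = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0]])

def warp_points(points, A):
    """Apply a 2x3 affine transform to an (N, 2) array of 2D points."""
    homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 3)
    return homog @ A.T                                      # (N, 2)

# Hypothetical corners of a defect bounding region in the photographed image.
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0]])
warped = warp_points(corners, A)
# warped = [[10, 20], [60, 20], [60, 45]]
```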
For claim 6, Shota in view of Sakamoto teaches:
a report creation unit that derives a three-dimensional position and dimension of the defect in the inspection target object based on the defect image projected onto the three-dimensional CAD model from dimensional data of the three-dimensional CAD model, and that creates a report including a derivation result of the position and the dimension (see, e.g., pars. 59-61 and FIG. 1 of Shota, which teach deriving the three-dimensional position and dimension of the defect in the inspection object based on the defect image depicted on the 3D CAD model from the dimension data of the 3D CAD model, and creating a report including the derivation result of the position and the dimension).
For claim 9, Shota as applied teaches a computer-readable storage medium that stores an inspection assistance computer-readable recording medium storing a program that causes a computer (see, e.g., pars. 33-34, 39, 44, and 92 and FIG. 1) to execute:
a step of recognizing a shape of an inspection target object included in a two-dimensional photographed image based on the two-dimensional photographed image obtained by capturing the inspection target object with an imaging device (see, e.g., par. 47 and FIG. 1, which teach identifying a shape of the inspection object in a 2D real image acquired by an image acquisition unit);
a step of detecting a defect of the inspection target object included in the two-dimensional photographed image (see, e.g., par. 56 and FIG. 1, which teach detecting a defect of the object in the 2D real image);
a step of estimating a coordinate transformation parameter to transform a first coordinate system corresponding to the three-dimensional CAD model into a second coordinate system corresponding to a viewpoint of the imaging device that has captured the two-dimensional photographed image, based on the shape and a three-dimensional CAD model of the inspection target object (see, e.g., pars. 57 and 62-65 and FIGS. 1 and 4-6, which teach extracting a 2D simulated image corresponding to the 2D real image from the 3D CAD model after the part corresponding to the reference parts of the inspection object in the real image is recognized/identified);
a step of modifying a position and a direction of viewpoint information of the three-dimensional CAD model of the inspection target object by using the coordinate transformation parameter (see, e.g., pars. 57 and 62-65 and FIGS. 1 and 4-6, which teach extracting a 2D simulated image corresponding to the 2D real image from the 3D CAD model after the part corresponding to the reference parts of the inspection object in the real image is recognized/identified);
a step of extracting a two-dimensional simulated image corresponding to the two-dimensional photographed image from the three-dimensional CAD model after the viewpoint information is modified (see, e.g., pars. 57 and 62-65 and FIGS. 1 and 4-6, which teach extracting a 2D simulated image corresponding to the 2D real image from the 3D CAD model after the part corresponding to the reference parts of the inspection object in the real image is recognized/identified); and
a step of depicting a defect image on the three-dimensional CAD model by fitting the two-dimensional photographed image including the defect image illustrating the defect to the two-dimensional simulated image (see, e.g., pars. 58-59 and 71 and FIGS. 1 and 4, which teach depicting the defect image on the 3D CAD model by adjusting the defect image contained in the 2D real image to fit the 2D simulated image).
Shota teaches extracting, from the 3D CAD model, a simulated image that corresponds to the real image and contains the reference parts of the real image (see, e.g., pars. 57 and 62-65 and FIGS. 1 and 4-6). The examiner believes that the cited portion of Shota may suggest estimating “a coordinate transformation parameter to transform a first coordinate system corresponding to the three-dimensional CAD model into a second coordinate system corresponding to a viewpoint of the imaging device that has captured the two-dimensional photographed image based on the shape recognized by the identification shape unit and on a three-dimensional CAD model of the inspection target object” and modifying “a position and a direction of viewpoint information of the three-dimensional CAD model of the inspection target object by using the coordinate transformation parameter.”
However, in the interest of compact prosecution, the examiner relies on Sakamoto in the analogous art, which explicitly teaches the quoted limitations. Sakamoto teaches:
a coordinate transformation parameter estimation unit to estimate a coordinate transformation parameter to transform a first coordinate system corresponding to the three-dimensional CAD model into a second coordinate system corresponding to a viewpoint of the imaging device that has captured the two-dimensional photographed image based on the shape recognized by the identification shape unit and on a three-dimensional CAD model of the inspection target object (see, e.g., pars. 156-165 and FIGS. 2 and 5-7 of Sakamoto, which teach estimating the relative position and posture of the camera between the selected frame and the reference frame to transform a set of coordinates corresponding to the 3D model into another set of coordinates corresponding to the viewpoint of the camera that has captured the reference frame based on the 3D model and the 3D shape data therein);
a three-dimensional CAD model position change unit to modify a position and a direction of viewpoint information of the three-dimensional CAD model of the inspection target object by using the coordinate transformation parameter (see, e.g., pars. 166-167 and FIG. 2 of Sakamoto, which teach modifying a position and posture of viewpoint information of the 3D model by associating the coordinates of the target image with the 3D model based on the position and posture of the camera that acquired the reference image and the condition, i.e., the internal parameter and the distortion correction parameter, for generating the 3D model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the viewpoint of the 3D CAD model as taught by Sakamoto because doing so would yield predictable results of allowing an accurate estimation of the position of the defect when the viewpoints are modified (see, e.g., par. 166 of Sakamoto and MPEP 2143(I)(D)).
Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shota in view of Sakamoto and further in view of US Patent Application Publication No. 2022/0383456 to Gulati et al. (hereinafter Gulati).
For claim 7, while Shota in view of Sakamoto does not explicitly teach, Gulati in the analogous art teaches correcting the coordinate transformation parameter through noise removal using machine learning (see, e.g., pars. 62-64 of Gulati, which teach performing a denoise function using a deep learning model before performing image transformation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shota in view of Sakamoto to denoise before transforming as taught by Gulati because doing so would yield predictable results of providing a better-quality transformed image (see, e.g., par. 63 of Gulati and MPEP 2143(I)(D)).
Additional Citations
The following table lists several references that are relevant to the subject matter claimed and disclosed in this Application. The references are not relied on by the Examiner, but are provided to assist the Applicant in responding to this Office action.
Citation: George et al. (US Patent App. Pub. 2011/0267428)
Relevance: Describes a system and method for mapping a two-dimensional image onto a three-dimensional model. In one embodiment, a system includes a turbine comprising multiple components in fluid communication with a working fluid. The system also includes an imaging system in optical communication with at least one component. The imaging system is configured to receive a two-dimensional image of the at least one component during operation of the turbine, and to map the two-dimensional image onto a three-dimensional model of the at least one component to establish a composite model.
Citation: Komatsu (US Patent App. Pub. 2015/0029180)
Relevance: Describes techniques for designating a position in a virtual space. In one embodiment, an information processing device includes: a memory; and a processor coupled to the memory and configured to: calculate, based on a figure of a reference object recognized from a first input image, position information with respect to the reference object, the position information indicating an image-capturing position of the first input image, and generate setting information in which display data is associated with the position information with respect to the reference object as a display position of the display data, the display data being displayed based on the setting information when the reference object is recognized from a second input image different from the first input image.
Citation: Tomonori et al. (JP Patent App. Pub. 2015-170050)
Relevance: Describes an object position and orientation measurement apparatus, a position and orientation measurement method, and a computer program, and more particularly a technique for measuring the position and orientation of an object whose three-dimensional shape is known. In one embodiment, a position attitude measurement device includes first acquisition means for acquiring a two-dimensional image of an object of a measurement object, second acquisition means for acquiring three-dimensional shape data representing the shape of the object, storage means for storing a shape model of the object, first calculation means for calculating the position and attitude of a principal plane of the object as a first position and attitude of the object on the basis of the three-dimensional shape data of the object acquired by the second acquisition means and the shape model of the object stored in the storage means, and second calculation means for calculating a second position and attitude on a three-dimensional space of the object by collating the two-dimensional image acquired by the first acquisition means with the shape model on the basis of the first position and attitude calculated by the first calculation means.
Table 1
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Table 1 and Form PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WOO RHIM whose telephone number is (571)272-6560. The examiner can normally be reached Mon - Fri, 9:30 AM - 6:00 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WOO C RHIM/Examiner, Art Unit 2676