DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of Applicant's claim of priority to Foreign Application No. JP2021-039417, filed March 11, 2021, and PCT Application No. PCT/JP2021/047102, filed December 20, 2021.
Information Disclosure Statement
The information disclosure statement (“IDS”) filed on December 12, 2025 was reviewed and the listed references were noted.
Status of Claims
Claims 1-11 and 13-16 are pending. Claim 12 has been cancelled.
Response to Arguments
Applicant's arguments filed December 15, 2024 have been fully considered but are moot in view of the new grounds of rejection, necessitated by the amendments, presented in the sections below. Applicant argues that the previously applied references do not teach the newly added limitations. However, in an analogous field of endeavor, the newly presented Chui reference teaches an attention degree indicating that objects overlap in the three-dimensional model and acquiring a feature value (i.e., suppressed/emphasized object) that is corrected based on the overlap regions of the three-dimensional model (see Chui, Para. [0051]). Therefore, the rejection of the claims under 35 U.S.C. § 103 is maintained, and consequently, THIS ACTION IS FINAL.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are “a first generation unit configured to generate…”, “a setting unit configured to set…”, “a feature value acquisition unit configured to acquire…”, “a second generation unit configured to generate…”, “an estimation unit configured to estimate…”, “a determination unit configured to determine…” and “an updating unit configured to update…” in claims 1-11.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 8, 10, 13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshinori Konishi (US 2019/0095749 A1, published March 28, 2019) in view of Suzuki et al. (US 9,914,222 B2) further in view of Chui et al. (US 2021/0118199 A1).
Regarding claim 1, Konishi teaches a template generation device for generating a template that indicates a feature value of a predetermined object (Konishi, Para. [0054], the template creation unit creates a template for each viewpoint based on a quantized normal direction feature amount) and is used by a collation device for collating the template with a measurement image representing a result of measuring a measurement range including the predetermined object (Konishi, Para. [0061], the template matching unit searches for the position of the object in an input image (i.e., measurement image) based on a template registered in the template DB and a quantized normal direction feature amount acquired by the normal quantization unit, and acquires one or more collation results), the template generation device comprising:
a memory configured to store computer-executable instructions (Konishi, Para. [0056], processing of each constituent element is realized by the CPU 110 reading and executing a program stored in the hard disk 114 or the memory card 14); and
a processor configured to execute the computer-executable instructions stored in the memory (Konishi, Para. [0056], processing of each constituent element is realized by the CPU 110 reading and executing a program stored in the hard disk 114 or the memory card 14) to implement:
a second generation unit configured to generate the template indicating the feature value corresponding to the projection image (Konishi, Para. [0054], the template creation unit creates a template for each viewpoint based on a quantized normal direction feature amount acquired by the normal vector quantization unit. Note that a template can include any number of feature amounts besides a quantized normal direction feature amount).
Although Konishi teaches creating a distance image of an object viewed from predetermined viewpoints that have been set for the object, using three-dimensional data acquired by the three-dimensional data acquisition unit (Konishi, Para. [0066]), and calculating normal vectors of feature points of the object viewed from the predetermined viewpoint that has been set for the object, based on the distance images of the respective viewpoints (Konishi, Para. [0067]), Konishi does not explicitly teach “a first generation unit configured to generate a projection image representing the predetermined object as a two-dimensional image, based on a three-dimensional model of the predetermined object”. However, in an analogous field of endeavor, Suzuki teaches using a moving amount, on an image surface during an exposure time, of a feature of a projected local edge, the moving amount being detected from an image obtained by projecting the geometric model onto the two-dimensional image (Suzuki, Col. 8, lines 35-52).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Konishi with the teachings of Suzuki by including generating a projection image by projecting the geometric model onto the two-dimensional image. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for more accurate correspondence between model features of a geometric model and measurement data features on a two-dimensional image, as recognized by Suzuki.
Although Konishi in view of Suzuki teaches generating a projection image by projecting the geometric model onto the two-dimensional image (Suzuki, Col. 8, lines 35-52), they do not explicitly teach “a setting unit configured to set an attention degree of each region of the three-dimensional model based on a region having an uneven shape in the three-dimensional model or a region in which the predetermined object overlaps another object” and “a feature value acquisition unit configured to acquire, based on the three-dimensional model or the projection image, a feature value that has been corrected in accordance with an attention degree of each region of the three-dimensional model”. However, in an analogous field of endeavor, Chui teaches an image details synthesis module may be configured to determine what 3D objects or what areas within a 3D object should be emphasized in the 2D synthesized image. For example, if there is an overlap between two objects (i.e., attention degree based on a region in which the predetermined object overlaps another object), the image details synthesis module may emphasize portions of both objects, and de-emphasize other portions of both objects such that both objects are clearly viewable on the 2D synthesized image (i.e., acquire a feature value corrected in accordance with an attention degree) (Chui, Para. [0051]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Konishi in view of Suzuki with the teachings of Chui by including an attention degree indicating that objects overlap in the three-dimensional model and acquiring a feature value (i.e., suppressed/emphasized object) that is corrected based on the overlap regions of the three-dimensional model. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for maintaining and enhancing image characteristics in a synthesized image, as recognized by Chui. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Regarding claim 8, Konishi in view of Suzuki further in view of Chui teaches the template generation device according to claim 1, and further teaches wherein the measurement image is a range image having pixels each representing a distance to a subject (Konishi, Para. [0046], the distance image creation unit creates, using three-dimensional data acquired by the three-dimensional data acquisition unit, distance images of the object viewed from predetermined viewpoints that have been set for the object).
Regarding claim 10, Konishi in view of Suzuki further in view of Chui teaches a collation system comprising the template generation device according to claim 1 and further comprising a collation device configured to collate the template generated by the template generation device with the measurement image (Konishi, Para. [0061], the template matching unit searches for the position of the object in an input image based on a template registered in the template DB and a quantized normal direction feature amount acquired by the normal vector quantization unit, and acquires one or more collation results. Accordingly, the template matching unit performs search processing for the number of templates registered in the template DB. In an embodiment, regarding all the templates registered in the template DB, the coordinates of the object recognized in the input image and a collation score indicating the degree of matching of image features between the input image and a template for each of the coordinates are acquired as a collation result).
Claim 13 recites a method with steps corresponding to the elements of the device recited in Claim 1. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements of its corresponding device claim. Additionally, the rationale and motivation to combine the Konishi, Suzuki and Chui references, presented in the rejection of Claim 1, apply to this claim.
Claim 15 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 13. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps of its corresponding method claim. Additionally, the rationale and motivation to combine the Konishi, Suzuki and Chui references, presented in the rejection of Claim 1, apply to this claim. Finally, the combination of the Konishi, Suzuki and Chui references discloses a computer-readable storage medium (Konishi, Para. [0040], computer-readable recording medium).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshinori Konishi (US 2019/0095749 A1, published March 28, 2019) in view of Suzuki et al. (US 9,914,222 B2) further in view of Chui et al. (US 2021/0118199 A1), as applied to claims 1, 8, 10, 13 and 15 above, and further in view of Lee et al. (US 2022/0284583 A1, with priority to PCT application No. GB2020/052014, filed August 21, 2020, U.S. Patent used herein for mapping purposes).
Regarding claim 2, Konishi in view of Suzuki further in view of Chui teaches the template generation device according to claim 1, as described above.
Although Konishi in view of Suzuki further in view of Chui teaches selecting a feature amount based on the highest accuracy of the feature amount (Suzuki, Col. 11, lines 22-46), they do not explicitly teach “wherein the processor is configured to execute the computer-executable instructions stored in the memory to further implement: a setting unit configured to set a first region having an attention degree lower than an attention degree of another region in the three-dimensional model, wherein the feature value acquisition unit acquires a feature value obtained by suppressing a feature of the first region”. However, in an analogous field of endeavor, Lee teaches attention gating to train a machine learning image segmentation model to suppress irrelevant regions in an input image and to better highlight regions of interest. The attention gates reduce the need for hard attention/external organ localization (region-of-interest) models in image segmentation frameworks (Lee, Para. [0119]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Konishi in view of Suzuki further in view of Chui with the teachings of Lee by including attention gating for suppressing irrelevant regions (i.e., having an attention degree lower than an attention degree of another region) to better highlight regions of interest. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for attention gates that reduce the computational needs of the framework, as recognized by Lee. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshinori Konishi (US 2019/0095749 A1, published March 28, 2019) in view of Suzuki et al. (US 9,914,222 B2) further in view of Chui et al. (US 2021/0118199 A1) and Lee et al. (US 2022/0284583 A1, with priority to PCT application No. GB2020/052014, filed August 21, 2020, U.S. Patent used herein for mapping purposes), as applied to claim 2 above, and further in view of Azhar et al. (US 2020/0051277 A1).
Regarding claim 3, Konishi in view of Suzuki further in view of Chui and Lee teaches the template generation device according to claim 2, as described above.
Although Konishi in view of Suzuki further in view of Chui and Lee teaches using attention gating to better highlight regions of interest (Lee, Para. [0119]), they do not explicitly teach “wherein the setting unit sets a region designated by a user as the first region”. However, in an analogous field of endeavor, Azhar teaches a user input device to enable a user to define a region of interest on the three-dimensional model of the object (Azhar, Para. [0037]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Konishi in view of Suzuki further in view of Chui and Lee with the teachings of Azhar by having the setting unit set a user-defined region of interest as the first region. One having ordinary skill in the art would have been motivated to combine these references because doing so would enable a user to define the region of interest in the model, as recognized by Azhar. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshinori Konishi (US 2019/0095749 A1, published March 28, 2019) in view of Suzuki et al. (US 9,914,222 B2) further in view of Chui et al. (US 2021/0118199 A1) and Lee et al. (US 2022/0284583 A1, with priority to PCT application No. GB2020/052014, filed August 21, 2020, U.S. Patent used herein for mapping purposes), as applied to claim 2 above, and further in view of Fedir Poliakov (US 9,830,708 B1).
Regarding claim 4, Konishi in view of Suzuki further in view of Chui and Lee teaches the template generation device according to claim 2, as described above.
Although Konishi in view of Suzuki further in view of Chui and Lee teaches using attention gating to better highlight regions of interest (Lee, Para. [0119]), they do not explicitly teach “wherein if a proportion of a region having an identical feature in the predetermined object to the predetermined object is greater than or equal to a first value, the setting unit sets the region as the first region”. However, in an analogous field of endeavor, Poliakov teaches the segmentation module identifies a set of first pixels within the binarized image. The set of first pixels has a first value and an area greater than a predetermined percentage of the area of interest (Poliakov, Col. 10, lines 1-20).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Konishi in view of Suzuki further in view of Chui and Lee with the teachings of Poliakov by including setting a first region (i.e., identifying a first set of pixels) that has an area greater than a predetermined percentage of the area of interest (i.e., a proportion of a region having an identical feature in the predetermined object to the predetermined object is greater than or equal to a first value). One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for determining a region in an image that takes up greater than a certain percentage of the object, as recognized by Poliakov. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshinori Konishi (US 2019/0095749 A1, published March 28, 2019) in view of Suzuki et al. (US 9,914,222 B2) further in view of Chui et al. (US 2021/0118199 A1) and Lee et al. (US 2022/0284583 A1, with priority to PCT application No. GB2020/052014, filed August 21, 2020, U.S. Patent used herein for mapping purposes), as applied to claim 2 above, and further in view of Kazuchika Iwami (US 2020/0175317 A1).
Regarding claim 5, Konishi in view of Suzuki further in view of Chui and Lee teaches the template generation device according to claim 2, as described above.
Although Konishi in view of Suzuki further in view of Chui and Lee teaches using attention gating to better highlight regions of interest (Lee, Para. [0119]), they do not explicitly teach “wherein the setting unit sets a second region having an attention degree higher than an attention degree of the first region in the three-dimensional model, and the feature value acquisition unit acquires a feature value obtained by enhancing a feature of the second region”. However, in an analogous field of endeavor, Iwami teaches setting a region of interest (i.e., region having an attention degree higher than an attention degree of a first region) on a tablet. For example, the determining unit sets the regions of interest in the images of the tablet medicines (Iwami, Para. [0110]). Iwami further teaches applying a process of enhancing the printed character portion of the tablet to the tablet areas detected (Iwami, Para. [0082]). The printed character portion is appropriately extracted as a portion having relatively high luminance (i.e., feature value) (Iwami, Para. [0096]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Konishi in view of Suzuki further in view of Chui and Lee with the teachings of Iwami by including setting a region of interest (i.e., a region having a higher attention degree) and enhancing the region to extract a feature value. One having ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to combine these references because doing so would allow for improving collation accuracy, as recognized by Iwami. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshinori Konishi (US 2019/0095749 A1, published March 28, 2019) in view of Suzuki et al. (US 9,914,222 B2) further in view of Chui et al. (US 2021/0118199 A1) and Lee et al. (US 2022/0284583 A1, with priority to PCT application No. GB2020/052014, filed August 21, 2020, U.S. Patent used herein for mapping purposes) and Kazuchika Iwami (US 2020/0175317 A1), as applied to claim 5 above, and further in view of Mayster et al. (US 2022/0205809 A1, with priority to PCT Application No. PCT/US2019/028734, US PGPub used herein for mapping purposes).
Regarding claim 6, Konishi in view of Suzuki further in view of Chui, Lee and Iwami teaches the template generation device according to claim 5, as described above.
Although Konishi in view of Suzuki further in view of Chui, Lee and Iwami teaches setting a region of interest (Iwami, Para. [0110]), they do not explicitly teach “wherein the setting unit sets a region having an uneven shape as the second region”. However, in an analogous field of endeavor, Mayster teaches determining one or more irregular surfaces based on image and sensor data (Mayster, Para. [0148]) and teaches the irregular surface is determined when one or more portions of the one or more surfaces have a shape that satisfies one or more irregular shape criteria associated with irregularity in a length of sides of the one or more surfaces and/or irregularity in angles of the one or more surfaces (Mayster, Para. [0152]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Konishi in view of Suzuki further in view of Chui, Lee and Iwami with the teachings of Mayster by including determining an irregular surface when one or more irregular shape criteria are met (i.e., setting second region when region has uneven shape). One having ordinary skill in the art before the effective filing date would have been motivated to combine these references because doing so would allow for performing image processing on detecting irregular surfaces, as recognized by Mayster. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshinori Konishi (US 2019/0095749 A1, published March 28, 2019) in view of Suzuki et al. (US 9,914,222 B2) further in view of Chui et al. (US 2021/0118199 A1) and Lee et al. (US 2022/0284583 A1, with priority to PCT application No. GB2020/052014, filed August 21, 2020, U.S. Patent used herein for mapping purposes) and Kazuchika Iwami (US 2020/0175317 A1), as applied to claim 5 above, and further in view of Fedir Poliakov (US 9,830,708 B1).
Regarding claim 7, Konishi in view of Suzuki further in view of Chui, Lee and Iwami teaches the template generation device according to claim 5, as described above.
Although Konishi in view of Suzuki further in view of Chui, Lee and Iwami teaches setting a region of interest (Iwami, Para. [0110]), they do not explicitly teach “wherein if a proportion of a region having an identical feature in the predetermined object to the predetermined object is smaller than or equal to a second value, the setting unit sets the region as the second region”. However, in an analogous field of endeavor, Poliakov teaches a second set of pixels represents a noise region with a second value, different from the first value. The segmentation module determines the noise region based on the area of the noise region being smaller than a predetermined percentage of the area of interest. For example, the segmentation module may determine the area represented by a potential noise region and compare that area to a total area of the binarized image (e.g., the binarized version of the area of interest). In some embodiments, where the area of the potential noise region is below a predetermined threshold, the area is determined to represent a noise region (Poliakov, Col. 10, lines 1-20).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Konishi in view of Suzuki further in view of Chui, Lee and Iwami with the teachings of Poliakov by including setting a noise region (i.e., second region) when the area of the region in the total area of the image (i.e., predetermined object) is smaller than a threshold value. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for determining a region in an image that takes up less than a certain percentage of the object, as recognized by Poliakov. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshinori Konishi (US 2019/0095749 A1, published March 28, 2019) in view of Suzuki et al. (US 9,914,222 B2) further in view of Chui et al. (US 2021/0118199 A1), as applied to claims 1, 8, 10, 13 and 15 above, and further in view of Robert et al. (US 2014/0321710 A1).
Regarding claim 9, Konishi in view of Suzuki further in view of Chui teaches the template generation device according to claim 1, as described above.
Although Konishi in view of Suzuki further in view of Chui teaches determining a projection image (Suzuki, Col. 8, lines 35-52), they do not explicitly teach “wherein the first generation unit generates a plurality of the projection images each representing the predetermined object in a different orientation as a two-dimensional image, and the second generation unit generates the template for each of the plurality of projection images”. However, in an analogous field of endeavor, Robert teaches comparing images acquired with template images that depict the object as a two-dimensional synthesized projection image of the object at different positions and orientations (Robert, Para. [0007]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Konishi in view of Suzuki further in view of Chui with the teachings of Robert by including generating templates from two-dimensional synthesized projection images of the object at different positions and orientations. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for generating templates for collation using projection images of objects at different orientations, as recognized by Robert. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Claims 11, 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshinori Konishi (US 2019/0095749 A1, published March 28, 2019) in view of Suzuki et al. (US 9,914,222 B2) further in view of Okonogi et al. (US 2022/0327793 A1).
Regarding claim 11, Konishi in view of Suzuki teaches the collation system according to claim 10, wherein the collation device includes:
a memory configured to store computer-executable instructions (Konishi, Para. [0056], processing of each constituent element is realized by the CPU 110 reading and executing a program stored in the hard disk 114 or the memory card 14); and
a processor configured to execute the computer-executable instructions stored in the memory (Konishi, Para. [0056], processing of each constituent element is realized by the CPU 110 reading and executing a program stored in the hard disk 114 or the memory card 14) to implement:
an estimation unit configured to estimate a position-orientation of the predetermined object based on the measurement image (Suzuki, Col. 7, lines 49-63, the prediction value calculation unit obtains approximate values of the position and orientation of all of the model edge features of the geometric model).
The proposed combination, as well as the motivation for combining the Konishi and Suzuki references, presented in the rejection of Claim 1, apply to Claim 11 and are incorporated herein by reference.
Although Konishi in view of Suzuki teaches determining a position and orientation of the object (Suzuki, Col. 7, lines 49-63), they do not explicitly teach “a determination unit configured to determine, based on the position-orientation of the predetermined object estimated by the estimation means, an overlapping region in which the predetermined object overlaps another object in the measurement image” and “an updating unit configured to update the template so as to suppress a feature of a region in the template, the region corresponding to the overlapping region”. However, in an analogous field of endeavor, Okonogi teaches a determination unit that identifies the position and range of an overlap area on the reference projection plane. Through this determination process, the determination unit determines whether the object B is included in the overlap area D in the projected image (Okonogi, Para. [0085]). Okonogi further teaches that the image of the object B included in the overlap area is suppressed (Okonogi, Para. [0152]).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Konishi in view of Suzuki with the teachings of Okonogi by including determining an overlapping region in which the object overlaps another object and suppressing a feature of the overlapping region. One having ordinary skill in the art would have been motivated to combine these references because doing so would allow for performing an adjustment process on an overlap area, as recognized by Okonogi. Thus, the claimed invention would have been obvious to one having ordinary skill in the art before the effective filing date.
Regarding claim 14, Konishi teaches a collation method comprising:
acquiring a measurement image representing a result of measuring a measurement range including a predetermined object (Konishi, Para. [0061], the template matching unit searches for the position of the object in an input image (i.e., measurement image) based on a template registered in the template DB and a quantized normal direction feature amount acquired by the normal quantization unit, and acquires one or more collation results), and a template indicating a feature value of the predetermined object (Konishi, Para. [0054], the template creation unit creates a template for each viewpoint based on a quantized normal direction feature amount);
collating the template updated in the updating step with the measurement image (Konishi, Para. [0061], the template matching unit searches for the position of the object in an input image (i.e., measurement image) based on a template registered in the template DB and a quantized normal direction feature amount acquired by the normal quantization unit, and acquires one or more collation results).
Although Konishi teaches creating a distance image of an object viewed from predetermined viewpoints that have been set for the object, using three-dimensional data acquired by the three-dimensional data acquisition unit (Konishi, Para. [0066]) and calculating normal vectors of feature points of the object viewed from the predetermined viewpoint that has been set for the object, based on the distance images of the respective viewpoints (Konishi, Para. [0067]), Konishi does not explicitly teach “estimating a position-orientation of the predetermined object based on the measurement image”. However, in an analogous field of endeavor, Suzuki teaches the prediction value calculation unit obtains approximate values of the position and orientation of all of the model edge features of the geometric model (Suzuki, Col. 7, lines 49-63).
The proposed combination, as well as the motivation for combining the Konishi and Suzuki references, presented in the rejection of Claim 1, apply to Claim 14 and are incorporated herein by reference.
Although Konishi in view of Suzuki teaches determining a position and orientation of the object (Suzuki, Col. 7, lines 49-63), they do not explicitly teach “determining, based on the position-orientation of the predetermined object estimated in the estimation step, an overlapping region in which the predetermined object overlaps another object in the measurement image” and “updating the template so as to suppress a feature of a region in the template, the region corresponding to the overlapping region”. However, in an analogous field of endeavor, Okonogi teaches a determination unit that identifies the position and range of an overlap area on the reference projection plane. Through this determination process, the determination unit determines whether the object B is included in the overlap area D in the projected image (Okonogi, Para. [0085]). Okonogi further teaches that the image of the object B included in the overlap area is suppressed (Okonogi, Para. [0152]).
The proposed combination, as well as the motivation for combining the Konishi, Suzuki and Okonogi references, presented in the rejection of Claim 11, apply to Claim 14 and are incorporated herein by reference. Thus, the method recited in Claim 14 is met by Konishi in view of Suzuki further in view of Okonogi.
Claim 16 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 14. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Konishi, Suzuki and Okonogi references, presented in the rejection of Claim 11, apply to this claim. Finally, the combination of the Konishi, Suzuki and Okonogi references discloses a computer-readable storage medium (Konishi, Para. [0040], computer-readable recording medium).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Emma Rose Goebel whose telephone number is (703)756-5582. The examiner can normally be reached Monday - Friday 7:30-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Emma Rose Goebel/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662