DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description:
The “display” 2410 (discussed in paragraph [0161]) is not shown on Fig. 20.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
In paragraph [0073], reference number “218” should read “216”. Reference number “218” refers to a fixing element, while the specification refers to the connector “216”.
In paragraph [0074], delete the extra indent before “3.”.
Remove the empty paragraph [0093], and renumber following paragraphs accordingly.
Appropriate correction is required.
Claim Objections
Claim 19 is objected to because of the following informalities:
The limitation of “the reference object” in line 8 lacks antecedent basis.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
An “inspection data interface” in Claim 19
A “reference data interface” in Claim 19
A “part recognizer” in Claim 19
“Registration logic” in Claim 19
An “error checker” in Claim 19
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The “inspection data interface” and “reference data interface” are being interpreted as data buses. The support for this interpretation can be found in paragraph [0153] of the instant application.
The “registration logic” is being interpreted as software which contains an algorithm for registering images. The support for this interpretation can be found in paragraph [0165] of the instant application.
The “part recognizer” and “error checker” are being interpreted as neural network modules. The support for this interpretation can be found in paragraphs [0028], [0098], and [0099] of the instant application.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion), methods of organizing human activity, and/or mathematical concepts and calculations.
The independent claims 1 and 19 recite a method and a system, respectively, for comparing images in order to find an error. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations showing that the exception is specifically applied to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, and no additional features in the claims would preclude them from being performed as such, except for the generic computer elements recited at a high level of generality (i.e., processor, memory).
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that the independent claims 1 and 19 are directed to an abstract idea as shown below:
STEP 1: Do the claims fall within one of the statutory categories?
YES. Claims 1 and 19 are directed to a method and machine vision system for inspecting an inspection object, respectively.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?
YES, the claims are directed toward a mental process (i.e., an abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
The method of Claim 1 and the machine vision system of Claim 19 comprise mental processes that can be practicably performed in the human mind (or by generic computers or components configured to perform the method), and therefore recite an abstract idea.
Regarding Claim 1: the method recites the steps of:
comparing an inspection image of an inspection object to a reference image of a reference object (comparing two images can be performed in the human mind as an observation);
recognizing at least one inspection part in the inspection image and at least one reference part in the reference image (recognizing the same part in both images can be performed in the human mind as an observation);
registering the inspection image onto the reference image using the at least one inspection part and the at least one reference part (registering two images (or overlapping/comparing images) can be performed in the human mind as an observation);
and checking for at least one error using the inspection image, the reference image, and the set of registration data (checking for an error by comparing images can be performed in the human mind as an observation).
Regarding Claim 19: the system recites the functions of:
configured to compare the inspection image of the inspection object to the reference image of the reference object (comparing two images can be performed in the human mind as an observation);
recognize at least one inspection part in the inspection image and at least one reference part in the reference image (recognizing the same part in both images can be performed in the human mind as an observation);
register the inspection image onto the reference image using the at least one inspection part and the at least one reference part (registering two images can be performed in the human mind as an observation);
check for at least one error using the inspection image, the reference image, and the set of registration data (checking for an error by comparing images can be performed in the human mind as an observation).
These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper” to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, “methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the ‘basic tools of scientific and technological work’ that are open to all.” 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) (“‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’” (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
As such, a person could mentally observe two images, determine corresponding parts within the images, register the images, and then determine an error. The mere nominal recitation that the various steps are being executed by a processor does not take the limitations out of the mental process and/or mathematical concepts groupings. Thus, the claims recite a mental process.
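For exposition only, the compare / recognize / register / check sequence recited in the independent claims can be sketched as a toy procedure over tiny 0/1 grids standing in for images. This is hypothetical illustrative code, not the claimed invention or any disclosed embodiment; the function names and the "nonzero pixel" notion of a part are assumptions made purely for illustration.

```python
# Toy sketch (hypothetical, not the claimed invention or a disclosed
# embodiment): the four recited steps as a procedure over tiny 0/1 "images".

def recognize_parts(image):
    """'Recognize' parts as the coordinates of nonzero pixels."""
    return {(r, c) for r, row in enumerate(image) for c, v in enumerate(row) if v}

def register(insp_parts, ref_parts):
    """Registration data: the offset aligning one inspection part to a reference part."""
    (ir, ic), (rr, rc) = min(insp_parts), min(ref_parts)
    return (rr - ir, rc - ic)

def check_errors(insp_parts, ref_parts, offset):
    """An 'error' is any reference part with no matching inspection part after alignment."""
    dr, dc = offset
    shifted = {(r + dr, c + dc) for r, c in insp_parts}
    return ref_parts - shifted

reference = [[1, 0], [0, 1]]   # two expected parts
inspection = [[1, 0], [0, 0]]  # one part missing
ref_parts = recognize_parts(reference)
insp_parts = recognize_parts(inspection)
errors = check_errors(insp_parts, ref_parts, register(insp_parts, ref_parts))
```

Each of the three functions is a bookkeeping step a person could carry out by observation with pen and paper, which is the point of the mental-process characterization above.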
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Claims 1 and 19 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application.
Claim 1 recites the further limitations of:
providing a set of registration data (insignificant extra-solution activity of generating data).
Claim 19 recites the further limitations of:
an inspection data interface (a generic computer component);
a reference data interface (a generic computer component);
at least one processor (a generic computer component);
a part recognizer (a neural network, i.e., a generic computer component);
registration logic configured to … provide a set of registration data (a generic computer component, i.e., a program or algorithm, and insignificant extra-solution activity of generating data);
an error checker (a neural network, i.e., a generic computer component).
These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the preceding steps) and amount to mere post-solution activity, which is a form of insignificant extra-solution activity. Further, the additional elements are claimed generically and operate in their ordinary capacity such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
NO, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Claims 1 and 19 do not recite any additional elements that are not well-understood, routine, or conventional. The use of generic computer elements (such as neural networks and processors) is a routine, well-understood, and conventional practice performed by computers.
Thus, since Claims 1 and 19 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, it is clear that Claims 1-15 and 19-20 are not eligible subject matter under 35 U.S.C. 101.
Regarding Claim 2: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “a neural network” is a generic computer component.
Regarding Claim 3: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “performing a homography estimation based on at least one reference base point derived from the at least one reference part and at least one inspection base point derived from the at least one inspection part” falls into the mathematical concepts grouping of abstract ideas.
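To illustrate why registration from base points falls into the mathematical-concepts grouping, the following hypothetical sketch solves a simplified registration transform from corresponding base points. A full homography requires at least four correspondences and an eight-parameter solve; this sketch instead fits a similarity transform (uniform scale plus translation) from two point pairs. The function names and numbers are assumptions for illustration, not drawn from the application.

```python
# Illustrative sketch (hypothetical): a simplified registration transform
# estimated from two corresponding "base points". This is a similarity
# transform, not a full homography, but the same style of calculation.

def estimate_similarity(ref_pts, insp_pts):
    """Solve p_ref = s * p_insp + t for scale s and translation t."""
    (rx1, ry1), (rx2, ry2) = ref_pts
    (ix1, iy1), (ix2, iy2) = insp_pts
    # Scale is the ratio of distances between the two base points.
    ref_d = ((rx2 - rx1) ** 2 + (ry2 - ry1) ** 2) ** 0.5
    insp_d = ((ix2 - ix1) ** 2 + (iy2 - iy1) ** 2) ** 0.5
    s = ref_d / insp_d
    # Translation maps the first inspection point onto the first reference point.
    return s, (rx1 - s * ix1, ry1 - s * iy1)

def map_point(point, s, t):
    """Map an inspection-image point into reference-image coordinates."""
    x, y = point
    return (s * x + t[0], s * y + t[1])
```

The arithmetic here (distance ratios, a division, two multiply-adds) is exactly the kind of calculation that could be carried out with pen and paper.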
Regarding Claims 4 and 20: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “wherein the at least one error comprises at least one of an incorrect part error, a part orientation error, an alignment error, a fixing element error, or a measurement error” simply expands on the types of errors that can be identified. The type of error can be identified through the mental process of observation.
Regarding Claim 5: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “the inspection object is or corresponds to a composite construction object, the composite construction object comprises a plurality of inspection construction parts, the reference object is or corresponds to a reference composite construction object, and the reference composite construction object comprises a plurality of reference construction parts” simply expands on the types of objects that are compared.
Regarding Claim 6: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “wherein the reference image comprises at least one of BIM data, CAD data, or a set of construction parts data” simply expands on the type of image of the reference image.
Regarding Claim 7: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “defining a focus region based on the at least one reference part, and comparing the at least one inspection part being inside the focus region to the at least one reference part” falls into the mental processes grouping of abstract ideas. A person can reasonably identify a reference part, define a focus region by drawing a box around the reference part with the assistance of pen and paper, and then compare the object within the box to another object through observation.
Regarding Claim 8: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “checking whether the at least one inspection part inside the focus region has at least one of an incorrect part error or a part orientation error” falls into the mental processes grouping of abstract ideas. A person can reasonably identify a reference part within a focus region and determine if the part is incorrect or oriented incorrectly through simple observation.
Regarding Claim 9: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “searching for presence of at least one characteristic property, wherein the at least one characteristic property includes a characteristic corresponding to a class of construction objects which a reference composite construction object belongs to” falls into the mental processes grouping of abstract ideas. A person can reasonably observe an image and identify certain construction parts by looking for a characteristic property.
Regarding Claim 10: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “wherein the at least one characteristic property is or at least comprises a horizontal or at least essentially horizontal construction part” simply expands on the type of characteristic properties. A person can reasonably identify whether a part is horizontal through simple observation.
Regarding Claim 11: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “computing a scaling factor for an element having known dimensions, and searching for the element having the known dimensions in the inspection image based on the scaling factor” falls into the mathematical concepts grouping of abstract ideas. A person can reasonably compute a scaling factor using the assistance of pen and paper.
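The scaling-factor computation recited in Claim 11 amounts to simple unit arithmetic, as the following hypothetical sketch shows. The numbers and function names are illustrative assumptions, not values from the application: given an element's known physical dimension and its measured size in pixels, the factor converts between units and predicts the pixel size to search for.

```python
# Illustrative sketch (hypothetical numbers, not from the application):
# the arithmetic behind computing a scaling factor for an element with
# known dimensions and using it to bound a search in the inspection image.

def scaling_factor(known_mm, measured_px):
    """Physical units per pixel."""
    return known_mm / measured_px

def expected_pixel_size(known_mm, factor):
    """Expected size, in pixels, of an element with a known dimension."""
    return known_mm / factor

factor = scaling_factor(known_mm=50.0, measured_px=200.0)   # 0.25 mm per pixel
search_px = expected_pixel_size(50.0, factor)               # 200.0 pixels
```

Two divisions suffice, underscoring that the limitation is a mathematical calculation a person could perform with pen and paper.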
Regarding Claim 12: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “wherein the element having the known dimensions is or at least comprises at least one of a fiducial, a mark, or a tag” falls into the mental processes grouping of abstract ideas. A person can reasonably observe an image and determine if an element contains a tag.
Regarding Claim 13: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “analyzing the focus region using a neural network” recites a generic computer component.
Regarding Claim 14: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “wherein checking for the at least one error further comprises searching for at least one fixing element within the focus region” falls into the mental processes grouping of abstract ideas. A person can reasonably observe an image and determine if a fixing element is present within a certain region of an image.
Regarding Claim 15: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation(s) “wherein checking for the at least one error comprises counting fixing elements within the focus region” falls into the mental processes grouping of abstract ideas. A person can reasonably observe an image and count the number of fixing elements present within a certain region of an image.
Regarding Claims 16-18: the additional limitation(s) “presenting an overlaid image containing at least an area of the inspection image and at least an area of the reference image” are NOT directed toward an abstract idea, since they recite additional elements that integrate the judicial exception into a practical application and add significantly more than the judicial exception. Therefore, Claims 16-18 are not directed to an abstract idea and are not rejected under 35 U.S.C. 101.
Double Patenting
A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (Emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over Claims 1-20 of U.S. Patent No. 11,978,197.
Although the claims at issue are not identical, they are not patentably distinct from each other because Claims 1-20 of U.S. Patent No. 11,978,197 anticipate the claims of the instant application. Claim 1 of the U.S. patent recites “wherein the inspection part and the reference part correspond to each other”. However, Claim 1 of the instant application recites “wherein the at least one inspection part and the at least one reference part correspond to each other”. Thus, the instant application is merely broadening the scope of the initial claim so that multiple parts of the inspection image and the reference image correspond to each other.
Claims 2, 3, 7, and 19 are similarly altered in the same way. The U.S. Patent recites “the inspection part” and “the reference part”, while the instant application recites “the at least one inspection part” and “the at least one reference part”.
Claim 9 of the U.S. Patent recites “searching for presence of at least one characteristic property”. However, the instant application recites “searching for presence of at least one characteristic property, wherein the at least one characteristic property”. Thus, the application is merely broadening the scope of the initial claim so that multiple characteristic properties can be searched. Claim 10 has similarly been broadened to state “at least one characteristic property” instead of “the characteristic property”.
All other limitations from Claims 1-20 of the instant application correspond to Claims 1-20 of the U.S. Patent. Thus, the instant application is not patentably distinct from U.S. Patent No. 11,978,197.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-6, 9-10, and 16-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhao et al. (Zhao, Gang, et al., “A mask R-CNN based method for inspecting cable brackets in aircraft”, 2020), hereinafter Zhao.
As to Claim 1, Zhao teaches an inspection method, comprising (see pg. 2, lines 36-38, “In this work, a semi-automatic assembly states inspection method for aircraft cable brackets is proposed”):
comparing an inspection image of an inspection object to a reference image of a reference object (see page 2, lines 49-51, “inspection for brackets is executed by automatically comparing the shapes of brackets in the target image with the shapes of corresponding brackets in the standard image”),
recognizing at least one inspection part in the inspection image and at least one reference part in the reference image (see pg. 2, lines 43-45, “bracket recognizer based on Mask R-CNN is trained and applied to segment brackets from image to be inspected”),
wherein the at least one inspection part and the at least one reference part correspond to each other (see Fig. 5, target image to be inspected and standard global image, where the cable brackets in both images correspond to each other);
registering the inspection image onto the reference image using the at least one inspection part and the at least one reference part (see pg. 2, lines 45-47, “thirdly, image registration between the target image and the standard global image is conducted to get the corresponding standard partial image”),
and providing a set of registration data (see page 10, Fig. 11, image registration result (b), where the registered images are the registration data);
and checking for at least one error using the inspection image, the reference image, and the set of registration data (see page 7, lines 320-324, “After getting the inspection result, visualization module is adopted for the convenience of inspector. For every bracket, it will be labeled with various colors, such as green color for correct bracket, red color for missed bracket, and yellow color for incorrect bracket”, and see Fig. 8, where missing and incorrect brackets are shown).
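For context, the green/red/yellow per-bracket labeling Zhao describes implies a per-part comparison between the registered inspection image and the reference image. A minimal illustrative sketch of such a check on binary part masks (not Zhao's actual algorithm; the function name and the IoU threshold are assumptions):

```python
import numpy as np

def classify_part(inspect_mask, reference_mask, thresh=0.5):
    """Label one part by overlap (IoU) between its registered inspection
    mask and its reference mask: 'correct', 'missing', or 'incorrect'."""
    if inspect_mask.sum() == 0:
        return "missing"  # nothing detected where a part belongs
    inter = np.logical_and(inspect_mask, reference_mask).sum()
    union = np.logical_or(inspect_mask, reference_mask).sum()
    iou = inter / union
    return "correct" if iou >= thresh else "incorrect"
```

Each label could then drive the colored overlay described in the quoted passage.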
As to Claim 2, Zhao teaches the inspection method according to claim 1, wherein at least one of the at least one inspection part or the at least one reference part is recognized using a neural network (see pg. 2, lines 43-45, “bracket recognizer based on Mask R-CNN is trained and applied to segment brackets from image to be inspected”, where CNN stands for convolutional neural network).
As to Claim 3, Zhao teaches the inspection method according to claim 1, wherein the registering comprises: performing a homography estimation based on at least one reference base point derived from the at least one reference part and at least one inspection base point derived from the at least one inspection part (see pg. 2, lines 213-217, “Secondly, according to the multi-scale template matching result, the projection position of the target image’s four corner points in the standard global image can be obtained. Then, four pairs of corner points are used to calculate homography matrix H1 between target image and standard global image as follows”).
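The four-point homography computation Zhao quotes (“four pairs of corner points are used to calculate homography matrix H1”) is the standard Direct Linear Transform. A minimal sketch for illustration only (not Zhao's code; the function name is assumed):

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst via the
    Direct Linear Transform. src, dst: (N, 2) arrays with N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (flattened) is the null vector of A: the last right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1
```

With exactly four non-degenerate correspondences the null space is one-dimensional and H is determined up to scale.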
As to Claim 4, Zhao teaches that the at least one error comprises an incorrect part error (see page 7, lines 320-324, “After getting the inspection result, visualization module is adopted for the convenience of inspector. For every bracket, it will be labeled with various colors, such as green color for correct bracket, red color for missed bracket, and yellow color for incorrect bracket”, and see Fig. 8, where missing and incorrect brackets are shown). Zhao does not teach a part orientation error, an alignment error, a fixing element error, or a measurement error; however, because the claim requires only at least one of the recited errors, the incorrect part error suffices.
As to Claim 5, Zhao teaches that the inspection object is or corresponds to a composite construction object (see Fig. 12, aircraft wall),
the composite construction object comprises a plurality of inspection construction parts (see Fig. 12, aircraft wall with multiple cable brackets, where each bracket is a construction part),
the reference object is or corresponds to a reference composite construction object (see Fig. 7, standard global image, model aircraft wall),
and the reference composite construction object comprises a plurality of reference construction parts (see Fig. 7, standard global image of aircraft wall with several cable brackets).
As to Claim 6, Zhao teaches the inspection method according to claim 1, wherein the reference image comprises at least one of BIM data, CAD data, or a set of construction parts data (see pg. 4, Fig. 3(a), rendered standard global RGB image, and see pg. 2, lines 112-114, “In virtue of simulation technology based on Open Scene Graph (OSG), a platform is developed to automatically generate synthetic realistic images with pixel-level annotations based on 3D digital model”, where the 3D digital model is the construction parts data).
As to Claim 9, Zhao teaches the inspection method according to Claim 1, wherein checking for the at least one error comprises: searching for presence of at least one characteristic property, wherein the at least one characteristic property includes a characteristic corresponding to a class of construction objects which a reference composite construction object belongs to (see lines 153-158, “In order to obtain the accurate instance segmentation results, a brackets recognizer based on Mask R-CNN is trained. The framework of brackets recognizer based on Mask R-CNN is illustrated in Fig. 4. It consists mainly of two parts: backbone for feature extraction and head for object detection (location and classification) and mask prediction”, and see pg. 2, lines 48-50, “then, assembly states inspection for brackets is executed by automatically comparing the shapes of brackets in the target image with the shapes of corresponding brackets”, where the characteristic property is the shape corresponding to a bracket, and the bracket is a class of construction objects).
As to Claim 10, Zhao teaches that the at least one characteristic property is or at least comprises a horizontal or at least essentially horizontal construction part (see Fig. 2(a), where the shapes of some cable brackets are essentially horizontal).
As to Claim 16, Zhao teaches presenting an overlaid image containing at least an area of the inspection image and at least an area of the reference image (see Fig. 5, brackets inspection results, and see Fig. 8, examples of bracket matching result).
As to Claim 17, Zhao teaches the inspection method according to claim 16, wherein the overlaid image comprises at least one error-marking label (see page 7, lines 320-324, “After getting the inspection result, visualization module is adopted for the convenience of inspector. For every bracket, it will be labeled with various colors, such as green color for correct bracket, red color for missed bracket, and yellow color for incorrect bracket”, and see Fig. 8, red marks for incorrect bracket and missing bracket).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 7-8, 13-15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (Zhao, Gang, et al., “A mask R-CNN based method for inspecting cable brackets in aircraft”, 2020) in view of Uchida (US Pub. No. 2019/0333204), hereinafter Uchida.
As to Claim 7, Zhao fails to teach that checking for the at least one error comprises: defining a focus region based on the at least one reference part, and comparing the at least one inspection part being inside the focus region to the at least one reference part. However, Uchida teaches an image processor for inspection (see abstract) where an “inspection target area” can be determined from an image (see paragraph [0027], “The inspection target selection unit 112 includes an inspection target candidate selection unit that extracts inspection target candidates from the captured images 109”), that the target area may be extracted based on a reference object (see paragraph [0057], “In S603, the inspection target area may be extracted based on the position relative to the reference object previously defined”), and that the inspection part inside this target area can be compared to a “correct answer image” (see paragraph [0035], “The correct answer image 209 corresponds to the inspection contents and, for example, is an image in a state where the screw has been correctly fastened. In the image inspection, it can be determined whether the assembly has been correctly performed, through comparison of the image of the inspection target area 208 and the correct answer image 209”, and see Fig. 2C, where the target area is identified and compared to the correct answer image 209). Uchida is combinable with Zhao as both are from the analogous field of image analysis for inspection. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inspection target area taught by Uchida with the inspection method taught by Zhao. The motivation for doing so would be to reduce the time needed to inspect an assembled product.
Uchida teaches in paragraph [0071], “it is possible to appropriately determine the inspection timing and the inspection target area of the assembled product based on the assembling work in the factory. This eliminates interrupting the work only for inspection of the assembled product”. Thus, it would have been obvious to combine the target area taught by Uchida with the method taught by Zhao in order to obtain the invention as claimed in Claim 7.
As to Claim 8, Zhao teaches checking whether the at least one inspection part has at least one of an incorrect part error or a part orientation error (see page 7, lines 320-324, “For every bracket, it will be labeled with various colors, such as green color for correct bracket, red color for missed bracket, and yellow color for incorrect bracket”). Zhao fails to teach that the incorrect part is found within a focus region. However, Uchida teaches that an incorrectly fastened part can be identified within a target region (see paragraph [0036], “An image of an inspection target area 210 is an image in the state where the screw has been correctly fastened, and an image of an inspection target area 211 is an image in a state where one of two screws has not been correctly fastened. When the image of the inspection target area 211 is inspected, it is determined that the screw has not been correctly fastened, and a warning is notified to the worker”, where the screw is an inspection part). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inspection target area taught by Uchida with the inspection method taught by Zhao. The motivation for doing so would be to reduce the time needed to inspect a target area of an assembled product, as taught by Uchida in paragraph [0071]. Thus, it would have been obvious to combine the target area taught by Uchida with the method taught by Zhao in order to obtain the invention as claimed in Claim 8.
As to Claim 13, Zhao in view of Uchida teaches analyzing the focus region using a neural network (see paragraph [0092] of Uchida, “The processing by the work determination unit, the image inspection unit, etc. among the above-described processing units may be performed by a learnt model that has performed machine learning, in place of the processing units… The learnt model can be configured by, for example, a neural network model”).
As to Claim 14, Zhao fails to teach that checking for the at least one error further comprises searching for at least one fixing element within the focus region. However, Uchida teaches that a screw (which is a fixing element) can be identified within a focus region (see paragraph [0036], “An image of an inspection target area 210 is an image in the state where the screw has been correctly fastened, and an image of an inspection target area 211 is an image in a state where one of two screws has not been correctly fastened. When the image of the inspection target area 211 is inspected, it is determined that the screw has not been correctly fastened”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inspection target area taught by Uchida with the inspection method taught by Zhao. The motivation for doing so would be to reduce the time needed to inspect a target area of an assembled product, as taught by Uchida in paragraph [0071]. Thus, it would have been obvious to combine the target area taught by Uchida with the method taught by Zhao in order to obtain the invention as claimed in Claim 14.
As to Claim 15, Zhao teaches that inspection construction parts can be counted (see pg. 8, “Nt is the number of brackets in the target image”), but fails to teach that counting is done for fastening elements within a focus region. However, Uchida teaches that errors regarding fastening elements within a focus region can be identified (see paragraph [0036], “An image of an inspection target area 210 is an image in the state where the screw has been correctly fastened, and an image of an inspection target area 211 is an image in a state where one of two screws has not been correctly fastened. When the image of the inspection target area 211 is inspected, it is determined that the screw has not been correctly fastened”, where the inspection target area is the focus region, and the fastening element is a screw). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inspection target area taught by Uchida with the counting method taught by Zhao. The motivation for doing so would be to reduce the time needed to inspect a target area of an assembled product, as taught by Uchida in paragraph [0071]. Thus, it would have been obvious to combine the target area taught by Uchida with the method taught by Zhao in order to obtain the invention as claimed in Claim 15.
As to Claim 19, Zhao teaches a machine vision system (see page 2, lines 54-59, “The proposed method, including dataset generation platform, Mask R-CNN model training and bracket inspection pipeline are described in Section 4. Then, a prototype system based on client–server framework is depicted in Section 5”, where the R-CNN is a machine vision tool),
configured to inspect an inspection object comprising a plurality of inspection parts (see Fig.12, image of aircraft wall with several cable brackets), the machine vision system comprising:
acquiring an inspection image of the inspection object and acquiring a reference image and comparing the inspection image of the inspection object to the reference image of the reference object (see page 2, lines 49-51, “inspection for brackets is executed by automatically comparing the shapes of brackets in the target image with the shapes of corresponding brackets in the standard image”);
a part recognizer configured to recognize at least one inspection part in the inspection image and at least one reference part in the reference image (see page 2, lines 43-45, “a bracket recognizer based on Mask R-CNN is trained and applied to segment brackets from image to be inspected (target image)”, where the Mask R-CNN is the part recognizer),
registration logic configured to register the inspection image onto the reference image using the at least one inspection part and the at least one reference part and to provide a set of registration data (see page 4, lines 233-236, “Then, Iterative Closest Point (ICP) algorithm is used for fine registration to get the perspective transformation H2 between target image and intermediate standard partial image”, where the algorithm is the registration logic),
and checking for at least one error using the inspection image, the reference image, and the set of registration data (see page 7, lines 320-324, “After getting the inspection result, visualization module is adopted for the convenience of inspector. For every bracket, it will be labeled with various colors, such as green color for correct bracket, red color for missed bracket, and yellow color for incorrect bracket”).
Zhao fails to explicitly teach inspection data and reference data interfaces, and an error checker; to check for errors, Zhao uses an algorithm instead of a neural network (see pg. 5, lines 259-271).
However, Uchida teaches an input unit (see Fig. 1A, image management unit 108), which can obtain the inspection image and the reference image (see paragraph [0027], “An image management unit 108 has a function of temporally buffering images 109 captured by the imaging devices”),
which is connected to a processor (see Fig 1A, CPU 101), which is configured to inspect images (see paragraph [0028], “An image inspection unit 115 performs the image inspection on the images selected as the inspection target selection result 114”, and see paragraph [0029], “The processing corresponding to each operation in the flowcharts described in the exemplary embodiments herein may be achieved by software with use of a CPU”)
and a neural network that can be used to inspect images (see paragraph [0092], “The processing by the work determination unit, the image inspection unit, etc. among the above-described processing units may be performed by a learnt model that has performed machine learning… The learnt model can be configured by, for example, a neural network model”).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the method taught by Zhao with the hardware and network taught by Uchida. The motivation for doing so would be to reduce the time needed to inspect an assembled product. Uchida teaches in paragraph [0071], “it is possible to appropriately determine the inspection timing and the inspection target area of the assembled product based on the assembling work in the factory. This eliminates interrupting the work only for inspection of the assembled product”. Thus, it would have been obvious to combine the teachings of Uchida with the method taught by Zhao in order to obtain the invention as claimed in Claim 19.
As to Claim 20, Zhao in view of Uchida teaches that the at least one error comprises an incorrect part error (see Zhao, page 7, lines 320-324, “After getting the inspection result, visualization module is adopted for the convenience of inspector. For every bracket, it will be labeled with various colors, such as green color for correct bracket, red color for missed bracket, and yellow color for incorrect bracket”). Because the claim requires only at least one of the recited errors, the references need not teach a part orientation error, an alignment error, a fixing element error, or a measurement error.
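Zhao's registration logic refines an initial transform with the Iterative Closest Point (ICP) algorithm. A minimal 2D point-set ICP sketch, for illustration only (helper names are assumptions, and Zhao registers images rather than bare point sets):

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q
    (Kabsch algorithm via SVD of the cross-covariance)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=20):
    """Repeatedly match each src point to its nearest dst point and
    re-solve the rigid transform; returns the aligned copy of src."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

ICP of this kind converges when the initial misalignment is small, which is why Zhao applies it only after the coarse homography-based registration.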
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (Zhao, Gang, et al., “A mask R-CNN based method for inspecting cable brackets in aircraft”, 2020) in view of Daisuke et al. (JP 2002236100), hereinafter Daisuke.
As to Claim 11, Zhao teaches that a scaling factor may be computed (“Then, template matching between integrated target mask and integrated standard global mask with different scale is used to determine the scale and position of target image”). However, Zhao fails to explicitly teach that the scaling factor is computed with respect to an element having known dimensions, and also fails to explicitly teach searching for the element having the known dimensions in the inspection image based on the scaling factor. However, Daisuke teaches that elements with known dimensions can be used to determine magnification (see paragraph [0007], “When the inspection object is imaged, rectangles, circles or straight lines whose dimensions are already known or marks in a shape (a '+' shape, an L-shape, a 'cross +' shape) as a combination of straight lines area arranged inside the same screen, so as to be fetched simultaneously. By using the marks arranged at equal intervals inside the image, the correction processing operation for magnification, position, inclination or the like is performed”, where the marks correspond to elements having known dimensions), and searching for the element having known dimensions (see paragraph [0032], “On the monitor 72, the position of the digital camera system 200 is adjusted so that A1 and B1 and A2 and B2 substantially match, and an image of the inspection target is picked up and an image is captured”, where marks B1 and B2 are the marks corresponding to the element). Daisuke is combinable with Zhao because both are from the analogous field of image analysis for inspection. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the magnification corresponding to an element taught by Daisuke with the scaling method taught by Zhao. The motivation for doing so would be to correct scaling issues caused by movement of the imaging device.
Daisuke teaches in paragraph [0004], “When an image pickup device such as a TV camera is moved to an inspection location to input an image, there is a problem in that the magnification and the tilt of the image differ for each image to be picked up.” Thus, it would have been obvious to one of ordinary skill to combine the teachings of Daisuke with the teachings of Zhao in order to obtain the invention as claimed in Claim 11.
As to Claim 12, Zhao fails to explicitly teach that the element having the known dimensions is or at least comprises at least one of a fiducial, a mark, or a tag. However, Daisuke teaches that a mark may be given to an element of known dimension (see paragraph [0007], “When the inspection object is imaged, rectangles, circles or straight lines whose dimensions are already known or marks in a shape (a '+' shape, an L-shape, a 'cross +' shape) as a combination of straight lines area arranged inside the same screen, so as to be fetched simultaneously”). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the fiducial marks taught by Daisuke with the teachings of Zhao. The motivation for doing so would be to correct scaling issues caused by movement of the imaging device, as taught by Daisuke in paragraph [0004].
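Daisuke's known-dimension correction amounts to deriving a pixels-per-unit scale from a fiducial of known physical length and applying it to image measurements. A minimal sketch for illustration only (function names and units are assumed, not from Daisuke):

```python
import numpy as np

def scale_from_fiducial(p0, p1, known_length_mm):
    """Pixels-per-millimetre computed from the imaged endpoints of a
    fiducial whose physical length is already known."""
    pixels = np.hypot(p1[0] - p0[0], p1[1] - p0[1])
    return pixels / known_length_mm

def measure_mm(p0, p1, px_per_mm):
    """Convert a pixel distance in the same image into millimetres."""
    pixels = np.hypot(p1[0] - p0[0], p1[1] - p0[1])
    return pixels / px_per_mm
```

Recomputing the scale per image is what compensates for the magnification changes Daisuke attributes to camera movement.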
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al. (Zhao, Gang, et al., “A mask R-CNN based method for inspecting cable brackets in aircraft”, 2020) in view of Daisuke et al. (JP 2002236100), and further in view of Abidi et al. (Abidi, Besma R., et al., “Operator Assisted Threat Assessment for Carry-On Luggage Inspection”, 2001), hereinafter Abidi.
As to Claim 18, Zhao in view of Daisuke fails to teach modifying a visibility of at least one of the areas of the inspection image or the reference image based on information received from a sliding button. However, Abidi teaches a system for inspecting carry-on luggage which allows the user to modify the visibility of the inspection image of the carry-on (see Fig. 6, images of luggage scene with sliding bar, and see the caption for Fig. 6, “Sliding intensity bar allows for manual manipulation of the luggage scene by the screener… This is a case of image enhancement by intensity selection”, where the user enhances the visibility by modifying the intensity of the image). Abidi is combinable with Zhao and Daisuke as all three are from the analogous field of image analysis for inspecting objects. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the sliding intensity bar taught by Abidi with the teachings of Zhao and Daisuke. The motivation for doing so would be to allow the user to more easily interpret images. Abidi teaches on page 5, paragraph 2, “The focus is on displaying raw and gradually processed data by several methods deemed to be helpful towards data interpretation and decision making. At various levels of processing, user interfaces, displays, and visualization models were designed in an effort to ease the luggage, X-ray image interpretation task.” Thus, it would have been obvious to one of ordinary skill to combine the sliding bar taught by Abidi with the teachings of Zhao and Daisuke in order to obtain the invention as claimed in Claim 18.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOUMYA THOMAS whose telephone number is (571)272-8639. The examiner can normally be reached M-F 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.T./ Examiner, Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664