DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the reference character 201 in FIG. 2A which is not mentioned in the description.
The drawings are objected to as failing to comply with 37 CFR 1.84(q)(5) because the lead line for reference character 531 in FIG. 5 does not extend to the feature indicated, i.e., “non-image requirement comparer(s)”, and the text of that feature is off-center and should be positioned in the center of its respective box.
Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
“and/or or” should be changed to “and/or” in line 5 on page 34.
“and/or or” should be changed to “and/or” in lines 1-2 on page 35.
“ad/or” should be changed to “and/or” in line 31 on page 54.
Appropriate correction is required.
Claim Interpretation
According to the Federal Circuit’s decision in SuperGuide v. DirecTV, claim language of the type “at least one of … and …” creates a presumption that applicant intended the plain and ordinary meaning of the claim language to be a conjunctive list, unless the specification supports an interpretation of the claim language that rebuts the presumption.1
Claim 66 recites limitations that raise the presumption of a conjunctive list per SuperGuide:
[Claim 66] The method of claim 65, wherein the imaging parameters define configurations of pre-configured visual inspection resources that include at least one of: one or more fixed-position cameras, and one or more cameras preconfigured to move along a predefined path.
FIG. 3A shows a system comprising both fixed and moveable cameras. FIG. 3B shows a multi-axis robot with a moveable camera. A combination of fixed and moveable cameras is described in lines 1-8 of page 50 of the specification. Lines 4-8 of page 54 of the specification describe fixed and moveable cameras as alternatives. Accordingly, the specification indicates that Applicant intended claim 66 to include a disjunctive list and that the presumption of the plain meaning of the list being conjunctive is rebutted.
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitation uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation is:
“an instruction executing unit ... to: access: one or more imaging requirements ... and imaging parameters ... determine correspondences ... estimate, using the correspondences, fulfillment of the imaging requirements ... and generate a visual inspection plan ... using the pre-configured visual inspection resources according to their respective pre-configurations” in claim 71.
Even though the preamble of claim 71 recites “a memory which instructs the instruction executing unit”, the claim element that performs the functions in the body of the claim is the “instruction executing unit” and not the “memory”, as the memory merely provides instructions to the “instruction executing unit”, which then executes those instructions to perform the functions of the body of the claim.
Because this claim limitation is being interpreted under 35 U.S.C. 112(f), it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 51-71 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim 51 recites, in part, “the visual inspection plan” (emphasis added). However, there are two preceding instances of “a visual inspection plan”, which makes the antecedent basis of “the visual inspection plan” unclear. For purposes of applying prior art, each instance of “visual inspection plan” in claim 51 is assumed to be the same thing. Claim 69 is rejected for the same reasons as claim 51. Dependent claims 52-68 and 70 are rejected for inheriting and not curing the deficiencies of claim 51.
Claim 51 recites, in part, “the use of visual inspection images generated using the pre-configured visual inspection resources according to their respective pre-configurations” (emphasis added). There is no explicitly recited “use” of “visual inspection images”. Therefore, “the use” lacks antecedent basis. For purposes of applying prior art, “the use” is interpreted as “using”. Claims 69 and 71 are rejected for the same reasons as claim 51. Dependent claims 52-68 and 70 are rejected for inheriting and not curing the deficiencies of claim 51.
Claim 51 recites, in part, “the pre-configured visual inspection resources” (emphasis added), which lacks antecedent basis because the preamble and body of the claim each recite “pre-configured visual inspection resources”. For purposes of applying prior art, each instance of “pre-configured visual inspection resources” is assumed to be the same thing(s). Dependent claims 52-68 and 70 are rejected for inheriting and not curing the deficiencies of claim 51.
Claim 57 recites, in part, “the estimating comprises determining if the camera viewing angles of the configurations are within one or more ranges defined by the camera viewing angles of the imaging requirements” (emphasis added). It is unclear if the “one or more ranges” refers to ranges of viewing angles, ranges of image features at specific viewing angles, or something else. It is also unclear if “defined” means the ranges are explicitly set forth in the “imaging requirements” or that the ranges are derived from the “imaging requirements” through some other process. Given that the values of the “camera viewing angles” and “imaging requirements” are unspecified in the claim, it is unclear what “one or more ranges” would encompass. Different views of the same target object establish a range of camera viewing angles. For example, if an inspection plan provides that the entire viewable/visible surface of the target object should be covered by different camera views to maximize coverage, then any one particular view of the object that is specified by imaging parameters or an inspection plan would fall in the range of the set of views that yield full coverage. For purposes of applying prior art, “the estimating comprises determining if the camera viewing angles of the configurations are within one or more ranges defined by the camera viewing angles of the imaging requirements” is interpreted to mean that the inspection plan and imaging requirements specify camera positions and/or orientations for an expected appearance of the target object from each respective camera’s view.
Claim 58 recites, in part, “categorizing the visual inspection requirements by the computer, based on the estimating by the computer; and providing the categorizations” (emphasis added). The claim term “the categorizations” lacks antecedent basis because the preceding term “categorizing” does not imply any particular number or set of categorizations. It is also unclear what “categorizing” means because the criterion or metric of the categorization is unspecified. For purposes of applying prior art, “categorizing the visual inspection requirements” is interpreted to mean that the inspection requirements are organized, labeled, or otherwise indicated as corresponding to at least one pre-defined categorization. Dependent claims 59-63 are rejected for inheriting and not curing the deficiencies of claim 58. Claims 59-61 in particular do not cure the deficiencies of claim 58 because each claim only sets forth how a subset of the visual inspection requirements is categorized without specifying how the remaining requirements would be organized.
Claims 59-65 and 68 each respectively recite, in part, one of “its imaging requirement set”, “its image input requirement set”, “said image input requirement set”, “the one or more imaging parameter sets”, “the image input requirement sets”, “the one or more imaging parameter sets” or similar variant (emphasis added). The claims do not explicitly recite any preceding instances of a “set” or “sets”. Thus, the antecedent basis and meaning of these limitations are unclear. For purposes of applying prior art, these limitations are assumed to refer to the “one or more imaging requirements” or “imaging parameters” of claim 51. Dependent claims 62 and 63 are rejected for inheriting and not curing the deficiencies of claim 61. Dependent claim 66 is rejected for inheriting and not curing the deficiencies of claim 65.
Claim 62 recites, in part, “providing a specification of imaging parameters for additional visual inspection images of the inspected item that would complete fulfilment of said image input requirement set” and claim 65 recites, in part, “providing a specification of imaging parameters for inspection images that would complete fulfilment of at least one of the image input requirement sets.” (emphasis added). The “specification” is defined in the claim in terms of an achieved result, i.e., “complete fulfillment”, without providing any details as to how the “imaging parameters” are specified, how “complete fulfillment” is achieved, and what defines it. It is unclear if claim 65 requires additional or new “imaging parameters” beyond those that exist prior to the “providing a specification” step. For example, testing an inspection plan may yield incomplete fulfillment in the event a non-target object is occluding the view of a camera aimed at the target object, thereby failing an inspection requirement. If the occlusion is noticed by a user in a GUI that displays collected image sensor data and subsequently moved out of the camera’s view, then the same imaging parameters could complete fulfillment, whereas in the prior instance they did not. In another example, a modified inspection plan may append additional imaging parameters to promote fulfillment. In that case, the common parameters of the pre- and post-modified inspection plans are needed to complete fulfillment as much as the additional parameters are needed. In another example, if the initially-provided parameters fulfill the requirements, then claim 65 is satisfied because they are already provided/specified. Thus, for purposes of applying prior art, claim 65 is interpreted to mean that imaging parameters are provided that have the capacity to complete fulfillment.
Also for purposes of applying prior art, claim 62 is interpreted to mean that imaging parameters of additional visual inspection images are provided that have the capacity to complete fulfillment. Dependent claims 63 and 66 are rejected for inheriting and not curing the deficiencies of claims 62 and 65 respectively.
Claim 69 recites, in part, “selecting images potentially useful for fulfilling at least one of the visual inspection requirements, using the correspondences” and “estimated to be useful in fulfilling the visual inspection requirements” (emphasis added). The phrases “potentially useful” and “useful” are vague and make the boundaries of the claim’s scope unclear. The claim does not specify any criteria or metric by which to quantify usefulness or how the “correspondences” dictate usefulness or lack thereof. The term “potentially” implies the selected images might not be useful, and further implies that an unclaimed step or series of steps is needed to determine whether the potential usefulness is in fact realized, but no such steps are recited. Claim 69 also recites, in part, “estimating utility” and “the estimated utility” (emphasis added) and similarly does not define or indicate by what criteria the “utility” is measured, or what or who is making that determination. For purposes of applying prior art, “selecting images potentially useful” is interpreted to refer to images selected by the computer for further processing or analysis, “useful” is interpreted to mean “can be used” (and therefore not required to be used), and “utility” is interpreted as referring to an improvement in defect detection if the selected image is used.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 51, 53, 55-66, 69 and 71 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Pat. Appl. Pub. No. 20190213724 to Avrahami et al. (hereinafter “Avrahami”).
Regarding claim 51, Avrahami teaches a method of integrating image outputs of pre-configured visual inspection resources into a visual inspection plan for an inspected item, the method comprising:
accessing (par. 63, “In the referenced ‘online’ process, the system can carry out, apply, or otherwise perform the referenced inspection plan by acquiring the required sensory data of a part to be inspected (e.g., at 118 in FIGS. 1 and 410 in FIG. 4) (e.g., by capturing or otherwise acquiring images and/or other sensor data (e.g., at 120 in FIGS. 1 and 412 in FIG. 4), such as in a particular sequence, as dictated by the generated inspection plan)”), by a computer (par. 97, “Processor 210 serves to execute instructions for software that can be loaded into memory 220”):
one or more imaging requirements (Imaging requirements, e.g., expected camera position and orientation (pose), are derived/defined by the inspection plan generated in the offline phase. See par. 51) derived from visual inspection requirements for the inspected item (The inspection plan is derived from the testing requirements. See FIG. 2. The testing requirements set forth visual inspection requirements for specific parts or components of the target object being inspected. See pars. 49-50.), and
imaging parameters (Parameters used to generate inspection images using the inspection plan are imaging parameters. See par. 32, “those parameters/techniques that can be determined to provide high quality/optimal results for a specific inspection context/scenario (e.g., the inspection of a screw, a connector, a reflective surface, a certain material type, etc.) can be selected and incorporated into an inspection plan for the referenced part to be inspected, while those inspection parameters/techniques determined to provide sub-optimal results (e.g., as reflected in the referenced ‘knowledge base’) may not be incorporated into the inspection plan.”; See par. 30, “various possible collections of parameters (camera-angle, illumination strength, exposure time, focus, working distance, required resolution, etc.)”) specifying configurations of respective pre-configured visual inspection resources (The system includes multiple existing/pre-configured cameras in an online inspection mode that implement inspection plans and their respective imaging parameters, where multiple cameras can be included on a single optical head or arranged separately. See pars. 24-25. Different imaging modalities may be used. See par. 52. A combination of fixed/static and moveable cameras may be used. See par. 25. Additionally, pre-configured visual inspection resources, e.g., cameras, that are part of an existing inspection plan or a plan that is being implemented online may be modified. See par. 57.) used to generate visual inspection images of the inspected item (pars. 54-55, “The referenced configurations/positionings (e.g., of the optical head) can be ordered into a sequence so as to enable the inspection system to pass through them and complete the scanning of the product in a quick/efficient manner. In certain implementations, aspects, features, limitations, etc. of the underlying hardware platform (e.g., of the inspection system) can be accounted for. ... 
The system can then sample (e.g., capture or otherwise acquire or receive images of and/or other sensor inputs pertaining to) the part at the referenced positions (e.g., at 114 in FIGS. 1 and 216 in FIG. 2), thereby generating reference data (e.g., at 116 in FIG. 1, 218 in FIG. 2, and 408 in FIG. 4).”);
determining correspondences between the imaging requirements (e.g., camera pose dictated by the inspection plan. See par. 51) and the imaging parameters (An inspection plan can be modified to improve defect classification performance and/or better conform to requirements provided by a manufacturer of the target object. See pars. 32 and 60. A user may modify the inspection plan using a GUI to review the image data and detection results. See par. 83. Modifications include capturing additional images, capturing images from additional angles and changing the amount of focus. See par. 63. The user’s feedback initiates machine learning to subsequently adapt an inspection plan. See FIG. 4, steps 420, 422, 424, 426 and 428. An acceptance at step 422 informs the machine learning model(s) of a correspondence between the requirements of the inspection plan and the actual data generated by implementing the plan in a real inspection system. A rejection at step 422 informs the ML model(s) of a lack of correspondence.), each correspondence being established by:
a visual inspection image (See FIG. 4, step 412; see also FIG. 8) produced according to the imaging parameters shows (depicts or includes) an inspection target which is a portion of the inspected item (par. 61, “also depicted in FIG. 8, various areas, regions, etc. 820 within the reference part 810 can be identified. Such areas or regions 820 can correspond to points or aspects of interest within the reference part with respect to which additional and/or different types of analysis are to be applied.”), and
the inspection target is shown as specified by at least one of the imaging requirements (Accepting a defect result (correspondence) includes cases where the user confirms that a manufacturer’s testing requirements are satisfied, e.g., a top surface of a product is free of scratches. See par. 49);
estimating, using the correspondences, fulfilment of the imaging requirements by visual inspection images generated using the imaging parameters (The imaging requirements are fulfilled when the user has high confidence in the data, e.g., after making any needed adjustments to the inspection plan or performing learning of the ML model(s). Determining that the ML model(s) need further training is an estimation that complete fulfillment has not been achieved or confirmed. The stored or aggregated data of identified defects can be applied, using machine learning, to predict future defects and/or make suggestions of different product shapes to mitigate those defects. See par. 85. In order to make suggestions in response to an observed defect, the inspection requirements of an initial or subsequently-modified inspection plan are necessarily fulfilled, otherwise there would be no identified defects upon which to base the suggestions for an improved shape.); and
generating a visual inspection plan fulfilling the visual inspection requirements (In the event there is uncertainty in fulfilling the requirements of an inspection plan, e.g., accurately inspecting a particular surface for a known defect or lack thereof, additional images may be specified in a modified inspection plan. See par. 63. Supplementing and modifying an inspection plan to improve its performance or compliance with testing requirements is a generation of a visual inspection plan fulfilling the visual inspection requirements.), the visual inspection plan including the use of visual inspection images generated using the pre-configured visual inspection resources according to their respective pre-configurations (The decision to add more images to the inspection plan necessarily requires using the prior inspection images.).
Regarding claim 53, Avrahami teaches the method of claim 51, wherein the determining correspondences comprises analyzing visual inspection images generated according to the specifying imaging parameters (See FIG. 1, steps 124, 128, 130, 132, 134 and 136).
Regarding claim 55, Avrahami teaches the method of claim 51, wherein:
each of the visual inspection requirements specifies:
at least one component to be inspected (par. 49, “parts and sub-parts”) and
a visual inspection procedure testing the at least one component (par. 51, “The referenced acquisition/inspection plans can define the configurations, settings and/or positions with respect to which the referenced requirements can be tested based on the model. Examples of such settings include, but are not limited to, position, orientation and activation of the optical head, as well as the sequence in which they are to be utilized.”); and
the imaging requirements specify images of the at least one component used in the visual inspection procedure (The inspection plan is generated in response to (specifies) receiving the image data of the golden parts and CAD model, which then causes more images to be acquired in the online phase based on the requirements of the inspection plan. See FIG. 1.).
Regarding claim 56, Avrahami teaches the method of claim 51, wherein:
the one or more imaging requirements each respectively specify at least one camera viewing angle of the inspected item (The inspection plans specify different camera angles relative to the target object. See par. 52, “inspection plans can be generated which ensure that images are captured at the proper angles (e.g., to ensure that the inside of the connector is visible in the proper way)”);
the imaging parameters of each configuration respectively specify at least one camera viewing angle of the inspected item (par. 51, “The referenced acquisition/inspection plans can define the configurations, settings and/or positions with respect to which the referenced requirements can be tested based on the model. Examples of such settings include, but are not limited to, position, orientation and activation of the optical head, as well as the sequence in which they are to be utilized.”); and
the estimating comprises comparing camera viewing angles of the configurations (image sensor data of each configured camera) to camera viewing angles of the imaging requirements (An actuated camera is moved to predetermined positions and poses according to the inspection plan. Fixed/static cameras would remain in place. During the online phase, reference data including a CAD model of the target object and sample images acquired using the model during the offline phase, is analyzed in relation to image data acquired during the online phase. See par. 45. The images at each camera viewing angle are then analyzed according to the testing requirements, thereby comparing each camera configuration to its corresponding required viewing angle. See par. 65, “analysis of the acquired images and other sensory data in view of/with respect to the defined testing requirements”.).
Regarding claim 57, Avrahami teaches the method of claim 56, wherein the estimating comprises determining if the camera viewing angles of the configurations are within one or more ranges defined by the camera viewing angles of the imaging requirements (The inspection plan’s requirements may specify that full coverage of the target object is obtained. See pars. 52 and 64. The imaging requirements also specify particular camera poses. The orientation component of pose is a camera viewing angle included in the set of all angles that fully cover the target object. If the inspection plan is operating nominally for multiple cameras, as indicated by a detected defect or other feature specified by the inspection plan, then the camera viewing angles of those cameras would be a subset of camera viewing angles that fall within the larger set of angles that achieve full coverage of the target object.).
Regarding claim 58, Avrahami teaches the method of claim 51, comprising categorizing the visual inspection requirements by the computer, based on the estimating by the computer; and providing the categorizations (The computer receives inspection requirements and determines the fastest sequence and path to pass through each required camera position, thereby sorting or categorizing the referenced inspection plan. See pars. 51 and 54.).
Regarding claim 59, Avrahami teaches the method of claim 58, wherein the categorizing comprises identifying one or more of the visual inspection requirements as having none of the imaging requirements of its imaging requirement set fulfilled (Testing requirements may include checking certain areas of the target object for the presence of scratches or whether scratches exceed a specified length threshold. See pars. 20, 27 and 49. Both cases are a binary decision: either a scratch is present or not present, or a scratch satisfies the threshold or does not satisfy the threshold. Amongst the one or more imaging requirements, each requirement can be viewed as its own set. Therefore, if one requirement is not met, e.g., a particular object area of a golden part must be detected as being free of scratches but scratches are detected anyway, then the set is not fulfilled. Other sets are subject to the same reasoning. See e.g., par. 73, “a part (e.g., various sensor inputs captured/received with respect to such a part) (and/or an area or region thereof) can be processed, analyzed, etc. in order to determine whether or not (and/or the degree to which) the part contains/reflects certain absolute criteria (e.g., whether the color of the part is red, whether scratches are present on the part, etc.).”).
Regarding claim 60, Avrahami teaches the method of claim 58, wherein the categorizing comprises identifying one or more of the visual inspection requirements as having all of the image input requirements of its image input requirement set fulfilled (Based on similar reasoning provided for claim 59, a requirement set is fulfilled, for example, when a scratch is not present, and not fulfilled when a scratch is present. See par. 73. Thus all of the “one or more” sets are respectively fulfilled or not fulfilled.).
Regarding claim 61, Avrahami teaches the method of claim 58, wherein the categorizing comprises identifying a visual inspection requirement as having image input requirements of its image input requirement set partially fulfilled (Based on similar reasoning provided for claims 59 and 60, a requirement set is fulfilled, for example, when a scratch is not present, and not fulfilled when a certain part is not the expected color of red. See par. 73. Thus, having one requirement fulfilled and another requirement not fulfilled, the collective set of requirements is partially fulfilled.).
Regarding claim 62, Avrahami teaches the method of claim 61, comprising providing a specification of imaging parameters for additional visual inspection images of the inspected item (par. 63, “taking additional images”) that would complete fulfilment of said image input requirement set (Reducing uncertainty to conform to a manufacturer’s requirements would fulfill those requirements. See par. 63).
Regarding claim 63, Avrahami teaches the method of claim 62, comprising:
accessing one or more first images from visual inspection images generated using the one or more imaging parameter sets, and which are images that partially fulfill the image input requirements of the image input requirement set (During the learning phase, a user interacts with a GUI to access and review collected camera sensor data (images) and defect classifications made by the computer. See FIG. 4 and par. 83.);
accessing one or more second images from the additional visual inspection images (After receiving user feedback in step 422, the ML model(s) are updated and the process continues to collect images in the online phase. See FIG. 4 and par. 81.); and
automatically analyzing the first and second images to fulfill the visual inspection requirement (The analysis at step 416 occurs automatically by the computer. See FIG. 4 and par. 81).
Regarding claim 64, Avrahami teaches the method of claim 51, wherein the generating comprises generating an inspection plan specifying collection by a robotic imaging system of at least one additional visual inspection image, to complete fulfilment of at least one of the image input requirement sets requirements (In the event there is uncertainty in fulfilling the requirements of an inspection plan, additional images may be specified in a modified inspection plan. See par. 63. Supplementing and modifying an inspection plan to improve its performance or compliance with testing requirements is a generation of a visual inspection plan fulfilling the requirements.).
Regarding claim 65, Avrahami teaches the method of claim 51, comprising providing a specification of imaging parameters for inspection images that would complete fulfilment of at least one of the image input requirement sets (If a specified imaging parameter, e.g., a level of focus, included as part of a provided testing specification, causes the user to reject the system’s defect determination from sensor data collected according to the specification, then during inspection, “the system can be further adapted/configured to the requirements, standards, and/or preferences of a particular manufacturer.” See pars. 30 and 83; see also FIG. 4 and par. 48. The plan may then be modified to include additional imaging parameters that cause additional and more focused images to be taken. See par. 63. Thus, the particular manufacturer’s requirement is satisfied/completed/fulfilled.).
Regarding claim 66, Avrahami teaches the method of claim 65, wherein the imaging parameters define configurations of pre-configured visual inspection resources that include at least one of: one or more fixed-position cameras (par. 25, “one or more static cameras ... e.g., in fixed positions, surrounding the part to be inspected”), and one or more cameras preconfigured to move along a predefined path (par. 25, “one or more static cameras/illumination devices (not shown) may be implemented in addition to/instead of the maneuverable optical head (for example, multiple cameras and/or illumination devices can be arranged, e.g., in fixed positions, surrounding the part to be inspected).”).
Claim 69 substantially corresponds to claim 51, reciting a method that differs in one aspect by reciting “images” instead of “imaging parameters” in the “accessing” and “determining correspondences” steps. However, this does not affect the reasoning of the corresponding limitations in claim 51, as the images are generated based on the imaging parameters. In another aspect, the claims differ in that claim 69 further recites:
an image shows (depicts or includes) an inspection target which is a portion of the inspected item (Avrahami, FIG. 8), and the inspection target is shown in the image as specified by at least one of the imaging requirements (The camera(s) is/are configured according to the inspection plan and sampled accordingly. See Avrahami at par. 24);
selecting images potentially useful for fulfilling at least one of the visual inspection requirements, using the correspondences (The online phase, which acquires/selects images for inspection, is automated. See Avrahami at par. 42);
using the selected images, calculating results of automated visual inspection procedures configured to fulfill the visual inspection requirements (Step 422 is the user providing an indication of validity, i.e., accept or reject, to the computer, which then updates the ML model(s) to continue automatically analyzing the collected sensor data. See Avrahami at FIG. 4 and par. 81. The defect determinations presented to the user are generated from the selected/acquired images.);
estimating utility of the selected images in fulfilling the visual inspection requirements, using the calculated results (The ML learning uses selected images to improve the detection results. See Avrahami at par. 81, “the learning process can also be used to adapt or alter the inspection plan (e.g., at 136 of FIG. 1 and 428 of FIG. 4). For example, areas which are determined or otherwise identified to be more prone to errors can be inspected at a higher quality.”; The user may, for example, indicate certain selected images as false positives or false negatives by rejecting the automated result. By providing the binary indication of accepting or rejecting a selected image for an error prone area, the ML model(s) learns from the feedback to “further improve the detection/determination process.” Avrahami at par. 81. Thus, the user’s feedback provides an indication or estimate of the selected image’s ability to improve the ML defect detection model(s), i.e., utility.); and
generating, guided by the estimated utility of the selected images (Avrahami - Using the updated ML model(s)), a visual inspection plan fulfilling the visual inspection requirements (Avrahami, par. 81, “adapt or alter the inspection plan”), the visual inspection plan including the use of visual inspection images generated according to imaging parameters used to generate at least one of the selected images (Online phase. See Avrahami at FIG. 4), and estimated to be useful in fulfilling the visual inspection requirements (Images and automatically identified defects are presented to the user in the learning phase. See Avrahami at FIG. 4. By making an automatic inspection decision about an acquired inspection image before receiving user input, the system estimates that there is at least some degree of usefulness to the user should they choose to examine it further when providing feedback. The user’s feedback enhances the system’s decision accuracy when it is used to update the ML model(s). See Avrahami at par. 83, “a user providing such feedback may have access to the sensory data upon which the detection result was made and/or the actual tested part. Such indications of acceptance/rejection can be collected and utilized in a machine learning process. In doing so, the system can be further adapted/configured to the requirements, standards, and/or preferences of a particular manufacturer. A successful learning process is enabled as a result of the fact that the planning process produces standardized measurements for each testing requirement which can then be processed in a uniform manner.”).
Claim 71 substantially corresponds to claim 51 by reciting a system comprising an instruction executing unit (Avrahami, par. 91, “processor”) and a memory (Avrahami, par. 91, “main memory”) which instructs the instruction executing unit to perform the method of claim 51.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 52 and 54 are rejected under 35 U.S.C. 103 as being unpatentable over Avrahami in view of U.S. Pat. Appl. Pub. No. 20190281213 to Kato (hereinafter “Kato”).
Regarding claim 52, Avrahami teaches the method of claim 51, wherein the determining correspondences comprises determination that:
imaging parameters for a visual inspection image specify a camera pose relative to the inspected item (Avrahami, par. 51, “The referenced acquisition/inspection plans can define the configurations, settings and/or positions with respect to which the referenced requirements can be tested based on the model. Examples of such settings include, but are not limited to, position, orientation and activation of the optical head, as well as the sequence in which they are to be utilized.”. A sequence of head positions and orientations is a sequence of required camera poses.), but does not teach that which is explicitly taught by Kato.
Kato teaches the camera pose matches a camera pose defined by at least one of the imaging requirements (A sequence of candidate camera poses relative to the target object is determined, where a candidate pose is restricted to a range of inclination angles relative to a surface normal of the target object. See FIG. 13 and pars. 108 and 124. Accepting a candidate means the candidate pose is a match within that range.).
Avrahami discloses automated inspection and inspection plan generation methods and systems including a plurality of fixed and/or moveable cameras. A GUI is provided for a user to modify and evaluate the quality and performance of inspection plans generated to satisfy imaging requirements, including camera pose, by providing feedback on the output of an inspection plan that incorporates particular imaging parameters. Thus, Avrahami shows that it was known in the art before the effective filing date of the claimed invention to evaluate inspection plans for multi-camera inspection systems based on expected camera poses and sensor data produced by an inspection plan requiring the camera poses, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, generating an inspection plan that fulfills visual inspection requirements. Kato discloses an automated inspection that evaluates camera position candidates, including the imaging parameter of camera pose, which is evaluated based on the angle of inclination between a surface normal from the target object and the optical axis of a camera. An optimal path of the camera is decided that satisfies imaging requirements, e.g., minimal operating time (par. 67) and defect size (par. 99). Thus, Kato shows that it was known in the art before the effective filing date of the claimed invention to compare an expected camera pose with collected camera sensor data constrained by imaging requirements for an inspection procedure of a target object, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, generating an inspection plan that fulfills visual inspection requirements.
A person of ordinary skill in the art would have been motivated to modify the correspondence determination disclosed by Avrahami to incorporate the camera pose matching disclosed by Kato to thereby confirm that an inspection plan fulfills its requirements, or modify an inspection plan to more closely or completely fulfill the visual inspection requirements, by quantifying the quality of the generated inspection plan according to the difference between an expected camera pose and test data for the expected pose. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of more precisely determining that a specified camera pose for inspection is valid.
Regarding claim 54, Avrahami teaches the method of claim 53, wherein the analyzing comprises mapping features of the visual inspection images to corresponding features of a model of the inspected item (Avrahami, par. 42, “CAD model”), but does not teach that which is explicitly taught by Kato.
Kato teaches a 3-D model of an inspected item (Kato, par. 61, “The target position decision part 63 reads 3-dimensional design data (for example, computer-aided design (CAD) data) indicating the designed surface of the workpiece W stored in the storage part 62 and causes the display part 61 to display a schematic diagram of the designed appearance of the workpiece W. The target position decision part 63 decides an inspection target region on the workpiece W in accordance with an input by a user.”).
Avrahami and Kato are analogous to the claimed invention for the reasons provided above. Kato further discloses a 3-D model of a target object that is used to localize particular positions of the object relative to a camera. Thus, Kato further shows that it was known in the art before the effective filing date of the claimed invention to generate and evaluate inspection plans using 3-D CAD models as a reference to localize a target object within a camera’s field of view, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, generating an inspection plan that fulfills visual inspection requirements.
A person of ordinary skill in the art would have been motivated to modify the GUI and CAD model disclosed by Avrahami to instead use a 3-D CAD model as disclosed by Kato to thereby enable the user to evaluate inspection plan performance and set or modify planned camera positions. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of accurately localizing regions of the target object for inspection.
Claims 67 and 68 are rejected under 35 U.S.C. 103 as being unpatentable over Avrahami in view of WO Pub. No. 2019177539 to Cheng et al. (hereinafter “Cheng”).
Regarding claim 67, Avrahami teaches the method of claim 51, wherein the estimating comprises:
accessing at least one image from visual inspection images generated using the one or more imaging parameter sets (Avrahami - Step 416 is the computer automatically analyzing the collected sensor data. See FIG. 4 and par. 81);
automatically analyzing the at least one image according to a visual inspection requirement (Avrahami - Step 416 is the computer automatically analyzing the collected sensor data. See FIG. 4 and par. 81);
evaluating validity of a result of the automatic analyzing (Avrahami - Step 422 is the user providing an indication of validity, i.e., accept or reject, to the computer, which then updates the ML model(s) to continue automatically analyzing the collected sensor data according to the inspection plan. See FIG. 4 and par. 81), but does not teach that which is explicitly taught by Cheng.
Cheng teaches producing an estimate of the fulfilment of the imaging requirement depending on the validity of the result of the automatic analyzing (Cheng, Abstract, “processing the captured first image to detect whether the object has a defect; calculating a confidence score for the detection on whether the object has the defect; generating the calculated confidence score as a feedback to a user; and determining based on the calculated confidence score whether to generate improved imaging parameters with respect to translation and/or rotation movement of the image capturing device; generating the improved imaging parameters if required after considering the calculated confidence score; and receiving an input to instruct the image capturing device to capture, according to the improved imaging parameters, a second image of the object that results in calculation of an improved confidence score.”. Validity is determined by comparing the confidence score to a threshold. See ll. 58-60 on pg. 3).
Avrahami is analogous to the claimed invention for the reasons provided above. Cheng discloses a method and apparatus for visual inspection that provides a confidence score as feedback to a user in order to optimize the pose of a camera used in an inspection procedure. Thus, Cheng shows that it was known in the art before the effective filing date of the claimed invention to estimate the validity of defect detection results for particular imaging parameters at particular camera poses, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, generating an inspection plan that fulfills visual inspection requirements.
A person of ordinary skill in the art would have been motivated to modify the correspondence determination and GUI disclosed by Avrahami to incorporate the pose-based confidence value disclosed by Cheng with the feedback functionality of the GUI to thereby enable a user to validate or adjust the pose of an inspection camera to generate improved imaging parameters with respect to translation and/or rotation of moveable cameras or the images obtained by static/fixed cameras. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of giving the user of the GUI a simple metric to guide the optimization of an inspection plan with respect to a required set of different and specific camera angles of a target object being inspected.
Regarding claim 68, Avrahami in view of Cheng teaches the method of claim 67, wherein an invalid said result corresponds to an estimate of non-fulfilment of the visual inspection requirement's respective image input requirement set (A confidence score below the threshold indicates the result is untrustworthy and the current imaging parameters cannot fully satisfy an inspection plan. See Cheng at ll. 58-60, pg. 3).
The rationale for obviousness is the same as provided for claim 67.
Claim 70 is rejected under 35 U.S.C. 103 as being unpatentable over Avrahami in view of U.S. Pat. Appl. Pub. No. 20220005183 to Hyatt et al. (hereinafter “Hyatt”).
Regarding claim 70, Avrahami teaches the method of claim 51, wherein said pre-configured visual inspection resources are positioned to image components of the inspected item (The system includes multiple existing/pre-configured cameras that implement inspection plans and their respective imaging parameters, where multiple cameras can be included on a single optical head or arranged separately. See Avrahami at pars. 24-25 and 29), but does not teach that which is explicitly taught by Hyatt.
Hyatt teaches different stages of assembly (Hyatt, par. 101, “In step 610C, for CFG4 individual outputs will be provided for each stage of the item inspected as imaged by each inspection assembly 110 and a correlated result for each item that has progressed through multiple stages of assembly may be provided summarizing the results from all of the inspection assemblies 110 that image the item as it was manufactured.”).
Avrahami is analogous to the claimed invention for the reasons provided above. Hyatt discloses an automated visual inspection system that includes multiple cameras at different stages of production and a GUI that enables configuration of the system. Thus, Hyatt shows that it was known in the art before the effective filing date of the claimed invention to generate inspection plans for multi-stage inspection systems with multiple cameras, which is analogous to the claimed invention in that it is pertinent to the problem being solved by the claimed invention, generating an inspection plan that fulfills visual inspection requirements.
A person of ordinary skill in the art would have been motivated to modify the inspection system and GUI disclosed by Avrahami to incorporate additional stages of production and the multi-stage configuration functionality disclosed by Hyatt to thereby enable a user to generate and/or modify inspection plans that span different stages of a product’s assembly. Based on the foregoing, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have made such modification according to known methods to yield the predictable results to have the benefit of detecting defects that only arise at certain stages of production that would otherwise be missed.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN P POTTS whose telephone number is (571)272-6351. The examiner can normally be reached M-F, 9am-5pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at 571-272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN P POTTS/Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672
1 See Superguide Corp. v. Direct TV Enterprises, Inc., 358 F.3d 870, 69 USPQ2d 1865 (Fed. Cir. 2004).