Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amended claims and associated applicant arguments/remarks filed on 11/26/2025 have been received and considered.
Claims 1 and 24 have been amended.
Claims 1-4, 11-13, 15-19, 21-30 are pending.
Response to Arguments
Applicant's arguments filed 11/26/2025 regarding the rejection under 35 U.S.C. § 101 have been fully considered, but they are not persuasive.
Applicant argues, specifically, on page 10 of the Remarks, regarding the § 101 rejection of claim 1 (which also applies to claim 24) and dependent claims 2-4, 11-13, 15-19, 21-23, and 25-30:
“As explained at paragraph [0008] of the published application "[e]nabling corrections or other feedback from the user during the set up and/or inspection process (as opposed to waiting until the end of the analysis of all the images before enabling the user to introduce corrections and then waiting again for analysis of the corrected information) greatly shortens the set up and inspection processes". Taken as a whole, applicants' claimed method would thus be considered by one of ordinary skill in the art to be limited to a useful practical application, i.e., performing corrections of image data based on a comparison of reference images to each other. Applicants thus contend amended independent claims 1 and 24 include subject matter that amounts to significantly more than a mere abstract idea.”
Respectfully, the Examiner disagrees.
Receiving image data, analyzing the images, displaying data, receiving feedback, updating image data, and determining completion of an inspection do not provide enough support for an improvement in the functioning of a computer or of a technology. See MPEP 2106.04(d) (Integration of a Judicial Exception Into a Practical Application).
Applicant’s arguments, see Remarks filed 11/26/2025, with respect to the rejection of claims 1-4, 11-13, 15-19, and 21-30 under 35 U.S.C. § 103 over Hyatt et al. (US 20220044379 A1), referred to as Hyatt_2, have been fully considered and are persuasive. However, upon further consideration, a new ground(s) of rejection is made in view of Tanaka et al. (US 20180211374 A1).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 11-13, 15-19, and 21-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover a mental process (a concept performed in the human mind, including an observation, evaluation, judgment, or opinion), certain methods of organizing human activity, and mathematical concepts and calculations. The claims recite a process configured to receive images, compare them to each other, and receive feedback. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations that would tie them to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, and no additional features in the claims would preclude them from being performed as such, except for the generic computer elements recited at a high level of generality (i.e., a processor and memory).
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
• STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
• STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
o STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
o STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
o STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that claims 1 and 24 are directed to an abstract idea as shown below:
STEP 1: Do the claims fall within one of the statutory categories? YES. Claim 1 is directed to a method, i.e., a process, and claim 24 is directed to a system, i.e., a machine.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? YES, the claims are directed toward a mental process (i.e. abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
• Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
• Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
• Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
The method of claim 1 (and the system of claim 24) comprises a mental process that can be practicably performed in the human mind (or by generic computers or components configured to perform the method) and is, therefore, an abstract idea.
Regarding claims 1 and 24: the process recites the steps (functions) of:
A visual inspection setup process of a visual inspection system comprising a processor, a user interface device and at least camera for obtaining images of items on an inspection line, comprising:
(generic computers or components configured to perform a step)
receiving from the at least one camera, during setup of the visual inspection system, image data related to a plurality of reference images formed from image samples of defect free items, each of the reference images formed from the image samples of the defect free items including a same-type item on the inspection line; (mental process including observation, evaluation, and data gathering, and can be done mentally in the human mind)
analyzing, by the processor, the reference images formed from the image samples of the defect free items by comparing the reference images to each other; (mental process including observation and evaluation, comparing images can be done mentally in the human mind)
displaying, on the user interface device, an indication of image data related to a reference image formed from an image sample of a defect free item for confirmation by a user based on the comparison of the reference images to each other; (instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea).
receiving feedback from the user regarding the indication; (mental process including observation and evaluation, and can be done mentally in the human mind)
updating the image data related to the plurality of reference images formed from the image samples of the defect free items based on the feedback from the user; and (mental process including observation and evaluation, and modifying/ updating an image can be done mentally in the human mind or with a pen and a paper)
determining, by the processor, whether the visual inspection setup process is complete. (mental process including observation and evaluation, and can be done mentally in the human mind)
These limitations, as drafted, recite a simple process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas - the 'basic tools of scientific and technological work' that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same). As such, a person could mentally compare reference images to each other and evaluate them for suspected defects, either mentally or using pen and paper. The mere nominal recitation that the various steps are executed by or in a device (e.g., a processing unit) does not take the limitations out of the mental process grouping. Thus, the claims recite a mental process.
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Claim(s) 1 and 24 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application.
These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution activity, which is a form of insignificant extra-solution activity. Further, the additional elements are recited generically and operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on it. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure remains in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Claims 1 and 24 do not recite any additional elements that are not well-understood, routine, or conventional. The use of a computer to perform the claimed "receiving," "comparing," and "displaying" steps, as recited in claims 1 and 24, is a routine, well-understood, and conventional process performed by computers.
Thus, since claims 1 and 24 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, claims 1 and 24 are not directed to eligible subject matter under 35 U.S.C. 101.
Regarding claims 2-4, 11-13, 15-19, 21-23, and 25-30, the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 11, 12, 24, 25, 29, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Hyatt et al. (US 20210012475 A1) referred to as Hyatt hereinafter and further in view of Tanaka et al. (US 20180211374 A1) referred to as Tanaka hereinafter.
Regarding claim 1, Hyatt teaches A visual inspection (“automated visual inspection systems” Hyatt, para. [0003]) setup process (“set up stage” Hyatt, para. [0013]) of a visual inspection system comprising a processor, (“a processor 102” Hyatt, para. [0052]) a user interface device (“user interface 6” Hyatt, para. [0043]) and at least camera (“camera 3” Hyatt, para. [0042]) for obtaining images of items on an inspection line, comprising: (“The process includes a set up mode in which images of same-type defect free items, but not images of same-type defected items, are obtained” Hyatt, abstract)
receiving from the at least one camera, during setup of the visual inspection system, image data related to a plurality of reference images formed from image samples of defect free items, (“In the set up stage, samples of a manufactured item with no defects (defect free items) are imaged on an inspection line” Hyatt, para. [0009]), (“the system exemplified in FIG. 1C includes a processor 102 to receive image data of an inspection line from one or more image sensor, such as camera 3, to analyze the image data and to output a signal to a user interface 6.” Hyatt, para. [0052]), and (“processor 102 receives a plurality of set up images, namely, images of defect free same-type sample items (e.g., items 2 and 2′)” Hyatt, para. [0058])
each of the reference images formed from the image samples of the defect free items including a same-type item on the inspection line; (“Processor 102 receives (from one or more cameras 3) one, or in some embodiments, at least two, set up images of defect free, same-type items on an inspection line.” Hyatt, para. [0057])
analyzing, by the processor, the reference images formed from the image samples of the defect free items by comparing the reference images to each other; (“The images are analyzed by a processor and are then used as reference images for machine learning algorithms run at the inspection stage.” Hyatt, para. [0009]), (“by using a set of set up images as references for each other” Hyatt, para. [0097]), (“statistical confidence can be achieved based on comparison of set-up images to each other to determine that there are no images showing perspective distortion, to determine alignment of images, to determine correct detection of defect free items, and more, as described herein.” Hyatt, para. [0058]) and (“The borders of the spatial range may be calculated by comparing two (or more) set up images (in which sample items may be positioned and/or oriented differently) and determining which of the images show perspective distortion and which do not.” Hyatt, para. [0096])
displaying, on the user interface device, an indication of image data related to a reference image formed from an image sample of a defect free item for confirmation by a user based on the comparison of the reference images to each other; (“The first and second set up images are analyzed (124) and a signal is generated, based on the analysis. The signal may cause different outputs to be displayed to a user, based on the analysis result (125)… Outputs may be displayed on user interface 6.” Hyatt, para. [0084])
receiving feedback from the user regarding the indication; (“the processor can accept user input via the user interface and can generate a signal, based on the user input. The user input may be, for example, a desired level of accuracy required from the system, or a region of interest in the image of the defect free item.” Hyatt, para. [0027])
However, Hyatt does not teach updating the image data related to the plurality of reference images formed from the image samples of the defect free items based on the feedback from the user; and determining, by the processor, whether the visual inspection setup process is complete.
Tanaka teaches updating the image data related to the plurality of reference images formed from the image samples of the defect free items based on the feedback from the user; (“The second learning device 202 sets the defect candidate area (labeled data) labeled (selected by the user) as described above as the correct data, and the defect candidate area not labeled (not selected by the user) as the incorrect data, to learn the second model for identifying the correct data and the incorrect data.” Tanaka, para. [0036]; labeling candidate areas is the claimed updating of the image data) and (“The sample image set includes not only a non-defective image but also a captured image (defect image) obtained by capturing an object having a defect, and the second learning device 202 uses the first model learned by the first learning device 201, to detect the defect candidate area from each of the plurality of captured images (sample image set) prepared in advance.” Tanaka, para. [0032])
and determining, by the processor, whether the visual inspection setup process is complete. (“Next, the determiner 205 uses the second model described above to determine whether or not the defect candidate area is a defect for each of one or more defect candidate areas detected in step S22 (step S23). Next, the output controller 206 performs control to output the determination result in step S23 (step S24).” Tanaka, para. [0043]) and fig. 7
Hyatt and Tanaka are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt in light of Tanaka’s updating reference images. One would have been motivated to do so because sufficient inspection accuracy can be obtained. (Tanaka, para. [0044])
Regarding claim 2, Hyatt teaches wherein the indication comprises an analyzed reference image with a suspected defect. (“In the inspection stage (FIG. 1B) that follows the set up stage, inspected items 4, 4′ and 4″, which are of the same type as sample items 2 and 2′ and which may or may not have defects, are imaged in succession by camera 3 and these images, which may be referred to as inspection images, are analyzed using computer vision techniques (e.g., machine learning processes) to detect defects in items 4, 4′ and 4″. In the example illustrated in FIG. 1B, item 4′ includes a defect 7, whereas items 4 and 4″ are defect free.” Hyatt, para. [0044])
Regarding claim 3, Hyatt teaches wherein the indication comprises a probability indication of the suspected defect in the analyzed reference image. (“The threshold may be predetermined (e.g., a preset probability) or may be adjustable or dynamic. For example, a user may input (e.g., via user interface 6) a desired level of accuracy required from the inspection system and the threshold for the probability of no false positives being detected, is set according to the user input.” Hyatt, para. [0102])
Regarding claim 11, Hyatt teaches wherein the indication comprises data about positioning of an item on the inspection line. (“samples of a manufactured item with no defects (defect free items) are imaged on an inspection line, the same inspection line or an inspection line having similar set up parameters to those being used for the inspection stage.” Hyatt, para. [0009])
Regarding claim 12, Hyatt teaches further comprising: detecting the positioning of the item on the inspection line, in the reference image based on the comparison; (“processor 102 may detect that there is a requirement for another sample item/s in an area in the FOV 3′, to broaden the range, so that samples placed near that area in the FOV will not be detected as showing perspective distortion. Processor 102 may generate a signal to request a sample to be placed in that area to obtain the missing information. Thus, for example, a signal may be generated to cause an image of the inspection line to be displayed (e.g. on user interface 6) with a mark of a location and/or orientation so that a user can place a third (or next) defect free sample item on the production line at the location and/or orientation marked in the image displayed to him.” Hyatt, para. [0100])
displaying, on the user interface device, the indication of the reference image for confirmation by the user: (“the signals generated based on the comparison of sample images may cause notifications, rather than instructions, to be displayed or otherwise presented to a user.” Hyatt, para. [0101]), (“providing feedback to a user, prior to (and during) the inspection stage” Hyatt, para. [0086]), and (“analyze the image data and to output a signal to a user interface 6.” Hyatt, para. [0052])
receiving feedback from the user regarding the indication; and (“the processor can accept user input via the user interface and can generate a signal, based on the user input. The user input may be, for example, a desired level of accuracy required from the system, or a region of interest in the image of the defect free item.” Hyatt, para. [0027])
However, Hyatt does not teach updating the plurality of reference images formed from the image samples of the defect free items based on the feedback from the user; and
Tanaka teaches updating the plurality of reference images formed from the image samples of the defect free items based on the feedback from the user; and (“The second learning device 202 sets the defect candidate area (labeled data) labeled (selected by the user) as described above as the correct data, and the defect candidate area not labeled (not selected by the user) as the incorrect data, to learn the second model for identifying the correct data and the incorrect data.” Tanaka, para. [0036]; labeling candidate areas is the claimed updating of the image data) and (“The sample image set includes not only a non-defective image but also a captured image (defect image) obtained by capturing an object having a defect, and the second learning device 202 uses the first model learned by the first learning device 201, to detect the defect candidate area from each of the plurality of captured images (sample image set) prepared in advance.” Tanaka, para. [0032])
Hyatt and Tanaka are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt in light of Tanaka’s updating reference images. One would have been motivated to do so because sufficient inspection accuracy can be obtained. (Tanaka, para. [0044])
Regarding claim 24, Hyatt teaches a user interface device in communication with the processor, the user interface device comprising a display showing progress of the visual inspection setup process, during setup of the visual inspection system, (“The processor 102 then analyzes the set up images and generates a signal to display on the user interface 6” Hyatt, para. [0057])
Regarding the rest of claim 24, refer to the explanation of claim 1.
Regarding claim 25, Hyatt teaches wherein the processor compares the reference images to each other. (“A third set up image is compared to the first and second set up images (306) to determine the perspective distortion of the item in the third image relative to the first and second set up images.” Hyatt, para. [0098]; the set up images are the claimed reference images)
Regarding claim 29, Hyatt teaches wherein the indication further includes a probability indication of a suspected defect in the analyzed reference image being an actual defect. (“another set up image is determined to be needed in step (309) depending on the probability that a same-type item can be detected in a new image and that no false positives will be detected in a new image of a same-type defect free item. If the probability is below a threshold, a signal may be generated to cause the set up mode to continue (312) and possibly no notification and/or instruction is displayed to a user. If the calculated probability is above the threshold, a signal may be generated to switch to inspection mode (314) and possibly a notification to be displayed that the inspection stage may be started.” Hyatt, para. [0101])
Regarding claim 30, refer to the explanation of claim 29.
Claim(s) 4, 15-18, 22, 23, 26, and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Hyatt and Tanaka as mentioned above and further in view of Chen et al. (US 9886771 B1) referred to as Chen hereinafter.
Regarding claim 4, the combination of Hyatt and Tanaka does not teach the feedback from the user further comprising: deleting the analyzed reference image; and updating a reference image database.
However, Chen teaches the feedback from the user further comprising: deleting the analyzed reference image; and updating a reference image database. (“FIG. 12 depicts the corrected image of the damaged automobile of FIG. 7 being analyzed with background effects detected and deleted.” Chen, col. 4, lines 17-19)
Hyatt, Tanaka, and Chen are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt and Tanaka in light of Chen’s deleting an analyzed image. One would have been motivated to do so because it can more accurately determine whether a set of target vehicle images has been obtained. (Chen, col. 16, lines 45-46)
Regarding claim 15, the combination of Hyatt and Tanaka does not teach receiving a user request at a time point during the process; and displaying on the user interface device reference images received up to the time point based on the user request.
Chen teaches receiving a user request at a time point during the process; (“The interface 1960 may include an “okay” selection 1962 that, upon selection, may cause the electronic device to display an interface 1965 as illustrated in FIG. 19D. Similar to the interface 1950, the interface 1965 may include a “live view” of image data that is captured, in real-time, by an image sensor of the electronic device.” Chen, col. 43, lines 18-23)
and displaying on the user interface device reference images received up to the time point based on the user request. (“As illustrated in FIG. 3, after one or more of the target vehicle images have been processed, a block 316 may display the results of the analysis to a user.” Chen, col. 28, lines 52-54)
Hyatt, Tanaka, and Chen are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt and Tanaka in light of Chen’s receiving user request. One would have been motivated to do so because it can more accurately determine whether a set of target vehicle images has been obtained. (Chen, col. 16, lines 45-46)
Regarding claim 16, the combination of Hyatt and Tanaka does not teach displaying a button on the user interface device, the button, if pressed by the user, generating the user request.
Chen teaches displaying a button on the user interface device, the button, if pressed by the user, generating the user request. (“The interface 1960 may include an “okay” selection 1962 that, upon selection, may cause the electronic device to display an interface 1965 as illustrated in FIG. 19D. Similar to the interface 1950, the interface 1965 may include a “live view” of image data that is captured, in real-time, by an image sensor of the electronic device. Further, the interface 1965 may include an instruction 1966 that instructs the individual to capture a front left view of the target vehicle, as well as a shutter selection 1967 that, upon selection, causes the electronic device to generate an image from the image data captured by the image sensor. In the example embodiment as illustrated in FIG. 19D, the “live view” may depict the front left view of the target vehicle and, in response to the user selecting the shutter selection 1967, the interface 1965 may display a confirmation 1968 that the image was captured.” Chen, col. 43, lines 18-33)
Hyatt, Tanaka, and Chen are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt and Tanaka in light of Chen’s displaying a button. One would have been motivated to do so because it can more accurately determine whether a set of target vehicle images has been obtained. (Chen, col. 16, lines 45-46)
Regarding claim 17, Hyatt teaches displaying, on the user interface device, a probability indication of the suspected defect in analyzed reference images and a probability of the suspected defect being an actual defect. (“another set up image is determined to be needed in step (309) depending on the probability that a same-type item can be detected in a new image and that no false positives will be detected in a new image of a same-type defect free item. If the probability is below a threshold, a signal may be generated to cause the set up mode to continue (312) and possibly no notification and/or instruction is displayed to a user. If the calculated probability is above the threshold, a signal may be generated to switch to inspection mode (314) and possibly a notification to be displayed that the inspection stage may be started.” Hyatt, para. [0101])
Regarding claim 18, the combination of Hyatt and Tanaka does not teach displaying, on the user interface device, an unknown position indication of the reference images.
However, Chen teaches displaying, on the user interface device, an unknown position indication of the reference images. (“this overlay may be used to eliminate pixel information in the target object image that is not associated with the target object, such as background information or pixels, foreground pixel information, etc. ... The pixels outside of these edges are then designated as background pixels, not associated with the target object or with the component of the target object.” Chen, col. 12, lines 13-16, 27-30)
Hyatt, Tanaka, and Chen are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt and Tanaka in light of Chen’s displaying unknown position indication of the reference images. One would have been motivated to do so because it can more accurately determine whether a set of target vehicle images has been obtained. (Chen, col. 16, lines 45-46)
Regarding claim 22, the combination of Hyatt and Tanaka does not teach displaying, on the user interface device, a heat map of positioning of the item in the plurality of reference images.
However, Chen teaches displaying a heat map of positioning of the item in the plurality of reference images. (“a user interface for providing vehicle damage information includes a heat map that is applied to an image of a vehicle and that is presented with the image of the vehicle on a display.” Chen, col. 3, lines 1-4)
Hyatt, Tanaka, and Chen are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt and Tanaka in light of Chen’s displaying a heat map. One would have been motivated to do so because it can more accurately determine whether a set of target vehicle images has been obtained. (Chen, col. 16, lines 45-46)
Regarding claim 23, the combination of Hyatt and Tanaka does not teach displaying, on the user interface device, an image indicating an orientation mark on an item of symmetric shape, in an image.
Chen teaches displaying, on the user interface device, an image indicating an orientation mark on an item of symmetric shape, in an image. (“Each base object model may include a plurality of landmarks defined therein, which each landmark defining a particular point or spot on the base object.” Chen, col. 2, lines 17-19; a vehicle has a symmetric shape, fig. 4)
Hyatt, Tanaka, and Chen are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt and Tanaka in light of Chen’s displaying an orientation mark. One would have been motivated to do so because it can more accurately determine whether a set of target vehicle images has been obtained. (Chen, col. 16, lines 45-46)
Regarding claim 26, Hyatt teaches wherein the indication of the reference images consist of defect-free items; (“In the set up stage, samples of a manufactured item with no defects (defect free items) are imaged on an inspection line” Hyatt, para. [0009])
and wherein the processor detects a suspected defect in one of the reference images, based on the comparison, (“Some automated visual inspection solutions compare an image of an inspected article to an image of a defect free article and/or use databases of images of possible defects, by which to detect defects in an inspected article.” Hyatt, para. [0005]), (“inspected items (manufactured items that are to be inspected for defects) are imaged and the image data collected from each inspected item is analyzed by computer vision algorithms such as machine learning processes, to detect one or more defects on each inspected item.” Hyatt, para. [0010])
However, the combination of Hyatt and Tanaka does not teach causing an enlargement of an area of the suspected defect to be displayed.
Chen teaches causing an enlargement of an area of the suspected defect to be displayed. (“the image processing system may optionally ask for or request zoom-in photos of a damaged or changed area of the object to further identify and quantify the damage type/severity” Chen, col. 15, lines 20-23)
Regarding claim 27, Hyatt teaches wherein the processor causes display of the indication with one of the reference images on the user interface device, with no graphics occluding the area of the suspected defect. (“illustrated in FIG. 1B, item 4′ includes a defect 7, whereas items 4 and 4″ are defect free.” Hyatt, para. [0044], the inspection image of 4’ has no graphics occluding the area of the suspected defect)
Claim(s) 13, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Hyatt and Tanaka as mentioned above and further in view of Kawabata et al. (US 20150221077 A1) referred to as Kawabata hereinafter.
Regarding claim 13, the combination of Hyatt and Tanaka does not teach wherein the indication comprises progress of the visual inspection process based on a number of reference images received.
However, Kawabata teaches wherein the indication comprises progress of the visual inspection process based on a number of reference images received. (“the job progress management report (9) is output based on the job directive commands (step S116). A progress management table representing confirming the image folder for inspection, the first proof and the final proof in the inspection-image is automatically prepared by using the job progress management report. This information is transmitted to the progress management system and so on, and is used to confirm to what extent each job progresses in work schedule.” Kawabata, para. [0231])
Hyatt, Tanaka, and Kawabata are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt and Tanaka in light of Kawabata’s progress of inspection. One would have been motivated to do so because it can improve efficiency of inspection operation. (Kawabata, para. [0181])
Regarding claim 19, Hyatt teaches displaying, on the user interface device, a reference image with a suspected defect together with other reference images of the same-type item (“The first and second set up images are analyzed (124) and a signal is generated, based on the analysis. The signal may cause different outputs to be displayed to a user, based on the analysis result (125). If the analysis provides a first result (result A) then a first output (output A) is displayed (126). If the analysis provides a second result (result B) then a second output (output B) is displayed (128).” Hyatt, para. [0084])
However, the combination of Hyatt and Tanaka does not teach displaying as an animation.
Kawabata teaches displaying as an animation. (“the two-dimensional image may be mapped to a CAD image based on CAD information of the product. The coordinate representing shapes of various three-dimensional products is stored as CAD information, and image mapping is performed using the information as reference-image data of three-dimensional products having the same shape.” Kawabata, para. [0341], CAD can be used to make animations.)
Hyatt, Tanaka, and Kawabata are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt and Tanaka in light of Kawabata’s displaying an animation. One would have been motivated to do so because it can improve efficiency of inspection operation. (Kawabata, para. [0181])
Regarding claim 21, the combination of Hyatt and Tanaka does not teach displaying, on the user interface device, a difference image of reference images.
Kawabata teaches displaying, on the user interface device, a difference image of reference images. (“The image producing section 5 produces different point displaying image data at each threshold value in order to further compare a plurality of different point displaying image data which produce differences in a plurality of threshold values (TH1-THn) with each other.” Kawabata, para. [0064]) and (“User interface (UI) for system operation includes an operation menu, a three-dimensional display screen, and a one way display screen. The operation menu includes a selection of a display image, a selection of a display method, and so on.” Kawabata, para. [0353])
Hyatt, Tanaka, and Kawabata are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt and Tanaka in light of Kawabata’s displaying difference image of the reference images. One would have been motivated to do so because it can improve efficiency of inspection operation. (Kawabata, para. [0181])
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over Hyatt and Tanaka as mentioned above and further in view of Wardell et al. (US 20190164270 A1) referred to as Wardell hereinafter.
Regarding claim 28, the combination of Hyatt and Tanaka does not teach wherein the display shows the progress moving forward or moving back based on the user feedback.
Wardell teaches wherein the display shows the progress moving forward or moving back based on the user feedback. (“At 230, the system may determine if there is a potential defect via an automated inspection process. If there is no defect determined, the part is generally accepted and the method returns to examine the next part. If the presence of a defect is determined, at 240, one or more images are then displayed to an operator to review and determine whether to accept or reject the part displayed in the image. In some cases, as shown by a dotted line in FIG. 2, a part that is considered not to have a defect may be displayed to the operator, for example, as a periodic or random check on the operation of the system, for training, or for other purposes. At 250, the system receives the operator's decision or determination and the part is processed accordingly. At 260, the operator's decision, image data and other data relevant or related to the part may be saved in the database. The method can then be restarted for the next part.” Wardell, para. [0053], fig. 2)
Hyatt, Tanaka, and Wardell are combinable because they are from the same field of endeavor, image processing.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Hyatt and Tanaka in light of Wardell’s showing progress. One would have been motivated to do so because it may improve the efficiency of the whole system. (Wardell, para. [0079])
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARDIS SOHRABY whose telephone number is (571)270-0809. The examiner can normally be reached Monday - Friday, 9 am to 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PARDIS SOHRABY/ Examiner, Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664