Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Pursuant to communications filed on 11/11/2024, this is a First Action Non-Final Rejection on the Merits, wherein claims 1-10 are currently pending in the instant application.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/11/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the Examiner.
Examiner's Note
The Examiner has cited particular paragraphs, column/line numbers, or figures in the reference(s) as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. In preparing a response, the applicant is respectfully requested to consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of each passage as taught by the prior art or explained by the Examiner. The applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. The Examiner has also cited references on form PTO-892 that are not relied upon but are relevant and pertinent to the applicant's disclosure, and that may also read (as anticipatory or obviousness references) on the claims and claimed limitations. The applicant is advised to consider those references in preparing any response or amendments in order to expedite prosecution.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 5, 6, 7, and 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fukumoto (JP2020-055059 – from IDS).
Regarding claims 1 and 6, Fukumoto discloses a controller (fig. 4: control device 70) and the associated detection system (fig. 1: camera 53 for picking up an image of the work W), comprising:
[media_image1.png: 784 × 772, greyscale]
an image acquisition unit (fig. 4: via control device 70) configured to acquire, from a visual sensor (fig. 1: camera 53 for picking up an image of the work W), a plurality of images of workpieces successively captured by the visual sensor (see figures 7A-7C below; see [0020], disclosing that control device 70 performs image processing on the images G captured by the camera 53 and obtains the amount of change among the images G(i-2), G(i-1), and G(i), which are three frame images continuous in time series; see [0015]: the camera 53 can capture an image with each sampling area A2 as an imaging range, and captures not only a still image but also a plurality of images (moving images) that are continuous in time series at a predetermined frame rate (tens of fps, etc.)); and
[media_image2.png: 568 × 504, greyscale]
[media_image3.png: 760 × 558, greyscale]
a target determination unit (fig. 4: via control device 70) configured to determine a workpiece in a stationary state as a target for work by an industrial machine (fig. 1: collection-sampling robot 50), based on the plurality of images (see figures 6-7; see [0008]: the work system controls the robot so as to collect the work after determining the stillness of the work on the supply unit based on the convergence of the change amounts of the plurality of images. See [0020]: in the sampling-placement control routine of FIG. 6, the control device 70 waits for the loosening operation by the vertical movement device 30 to end (S100), and, when determining that the loosening operation has finished, captures images G of the target sampling area A2 with the camera 53 at a predetermined frame rate (S110). FIG. 7 is an explanatory diagram showing the work W coming to rest: the vibration of the work W on the upper surface portion 22a (collection area A2) converges in the order of FIG. 7A, FIG. 7B, and FIG. 7C. The control device 70 performs image processing on the images G captured by the camera 53 and obtains the amount of change among the images G(i-2), G(i-1), and G(i), which are three frame images continuous in time series.).
[media_image4.png: 640 × 442, greyscale]
Regarding claims 2 and 7, Fukumoto discloses wherein the target determination unit performs image processing of extracting a moving workpiece, based on the plurality of images, and removing the moving workpiece from a latest image among the plurality of images (see [0024] In the work system 10 of the present embodiment described above, the collection robot 50 is controlled so as to collect the work W after determining the stillness of the work W based on the convergence of the change amounts of the plurality of images G. As a result, it is possible to properly collect the work W while preventing a collection error due to the movement of the work W.),
performs detection processing for detecting a workpiece on the latest image in which the moving workpiece is excluded (see [0022]: when the control device 70 determines that the work W is stationary, it causes the collection robot 50 to start the collection of the work W. To that end, the position of each work W is detected from the last-captured image G among the plurality of images G used for the determination in S130 (S150). By detecting the position of the work W from the last image G in this way, the collection of the work W can be started quickly, without capturing an image G again after determining the stillness of the work W. The control device 70 executes the process of collecting the work W with the collection robot 50 and mounting it on the tray T for all the works W in the target collection area A2 (S160), and then ends the collection-and-placement control routine. Since the collection robot 50 starts collecting the work W only after it is determined whether the work W is stationary, a mistake in collecting the work W can be prevented. Note that the control device 70 may capture an image and confirm the position of the work W as necessary during the process of S160.), and
determines, as the target for the work, a workpiece detected by the detection processing (see [0025]: further, the image G captured last among the plurality of images G used for determining the stillness of the work W is used for detecting the position of the work W; therefore, when the work W is determined to be stationary, it can be collected without newly capturing an image, and thus collected more efficiently. Further, since the stillness of the work W is determined from the amount of change among the three images G(i-2), G(i-1), and G(i) that are continuous in time series, erroneous determination of the stillness of the work W can be prevented and the determination accuracy can be improved.).
Regarding claims 5 and 10, Fukumoto discloses wherein the target determination unit performs detection processing for detecting a workpiece on each of the plurality of images (see [0022]: when the control device 70 determines that the work W is stationary, it causes the collection robot 50 to start the collection of the work W. To that end, the position of each work W is detected from the last-captured image G among the plurality of images G used for the determination in S130 (S150). By detecting the position of the work W from the last image G in this way, the collection of the work W can be started quickly, without capturing an image G again after determining the stillness of the work W. The control device 70 executes the process of collecting the work W with the collection robot 50 and mounting it on the tray T for all the works W in the target collection area A2 (S160), and then ends the collection-and-placement control routine. Since the collection robot 50 starts collecting the work W only after it is determined whether the work W is stationary, a mistake in collecting the work W can be prevented. Note that the control device 70 may capture an image and confirm the position of the work W as necessary during the process of S160.), and
extracts the workpiece in the stationary state, based on a position of a workpiece detected in the detection processing for each of the plurality of images, and determines the extracted workpiece as the target for the work (see [0025]: further, the image G captured last among the plurality of images G used for determining the stillness of the work W is used for detecting the position of the work W; therefore, when the work W is determined to be stationary, it can be collected without newly capturing an image, and thus collected more efficiently. Further, since the stillness of the work W is determined from the amount of change among the three images G(i-2), G(i-1), and G(i) that are continuous in time series, erroneous determination of the stillness of the work W can be prevented and the determination accuracy can be improved.).
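The alternative flow of claims 5 and 10 (detect the workpiece in every frame and keep it as a target only once its detected position stops changing) can be sketched as follows. The bright-pixel centroid "detector" and the pixel tolerance are the editor's illustrative assumptions, standing in for a real workpiece-detection step.

```python
import numpy as np

def detect_centroid(image):
    """Toy detector: centroid of bright pixels, standing in for an
    actual workpiece-detection step (an illustrative assumption)."""
    ys, xs = np.nonzero(image > 128)
    return np.array([ys.mean(), xs.mean()])

def stationary_target(images, tol=0.5):
    """Run detection on each of the plurality of images and return the
    workpiece position as the work target only if the detected position
    drifts by less than `tol` pixels (an assumed tolerance); otherwise
    return None (workpiece still moving)."""
    positions = np.array([detect_centroid(im) for im in images])
    drift = np.abs(positions - positions[-1]).max()
    return positions[-1] if drift < tol else None
```

A fixed blob across frames yields its position; a blob that shifts between frames yields `None`, deferring collection until stillness.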
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Fukumoto in view of Arakawa et al. (JP2002-329206 – from IDS), hereinafter “Arakawa”.
Regarding claims 3 and 8, Fukumoto discloses the limitations as discussed above with respect to claims 2 and 7. Fukumoto is silent regarding wherein the image processing includes mask processing of masking a region including the moving workpiece on the latest image, and the target determination unit performs the detection processing on the latest image subjected to the mask processing.
However, in the same field of endeavor or analogous art, Arakawa teaches the claimed features wherein the image processing includes mask processing of masking a region including the moving workpiece on the latest image, and the target determination unit performs the detection processing on the latest image subjected to the mask processing (see [0012]: next, the operation performed as normal processing will be described. First, an image is captured and input (step SA8) and stored in the image data buffer 3 (step SA9). The difference between the image captured this time and the image captured one cycle before is binarized (step SA10). A difference image having an image-change region, in which the changed image portion is extracted, is created from the two difference-binarized images. The difference image and the mask areas created in step SA7 are compared and collated (step SA11). Matching is performed for each of the mask areas, and if an image-change area of the difference image exists within the movable range provided around a mask area, it is determined that the same object has moved; each moving object is thereby judged as to whether it is moving (step SA12). When movement is determined, labeling processing similar to step SA6 is performed on the image-change area of the difference image, a mask area is created, and the mask area corresponding to the moving object created in the previous cycle is updated (step SA13). If, in step SA11, no image-change area of the difference image exists within the movable range provided around the mask area, it is determined in step SA12 that the moving object corresponding to the mask area is stationary; in that case, the mask area is not updated and is used for the stationary/moving determination in the next cycle. If all mask areas have not yet been determined (step SA14), the next mask area is selected in order to determine whether the next moving object is stationary or moving (step SA15).
Once the stationary/moving determination has been made for all mask areas in step SA14, the process proceeds to step SA8 to input the next image and move to the next cycle.).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Fukumoto to implement the image-masking technique taught by Arakawa, for the benefit of an image processing method in which mask areas are formed for the respective moving bodies and compared and collated with the image-variation areas of difference images: when an image-variation area is present within a movable range comprising a mask area and the movable range provided around it, the image-variation area is judged to have been formed by movement of the moving body corresponding to that mask area; when it is not, the moving body corresponding to the mask area is judged to be at a stop.
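Arakawa's per-cycle processing, as quoted above, can be sketched as follows. The threshold, the margin, and the simple roll-based dilation (which wraps at the image border, acceptable for a sketch) are the editor's assumptions, not values from the reference.

```python
import numpy as np

def difference_mask(prev, cur, threshold=30):
    """Binarize the inter-frame difference (step SA10); the threshold
    is an assumed value."""
    return np.abs(cur.astype(int) - prev.astype(int)) > threshold

def movable_range(mask_area, margin=2):
    """Grow the mask area by `margin` pixels to form the movable range
    provided around it (margin is an assumed value)."""
    out = mask_area.copy()
    for _ in range(margin):
        out = (out | np.roll(out, 1, 0) | np.roll(out, -1, 0)
                   | np.roll(out, 1, 1) | np.roll(out, -1, 1))
    return out

def is_moving(mask_area, change_area, margin=2):
    """Steps SA11/SA12: the moving body is judged to have moved when an
    image-change area intersects the movable range around its mask
    area; otherwise it is judged stationary."""
    return bool((movable_range(mask_area, margin) & change_area).any())
```

A change region adjacent to the mask area is attributed to the same object moving; a distant change region leaves the object judged stationary, so its mask is carried to the next cycle unchanged.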
Claims 4 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Fukumoto in view of Streeter et al. (US 20130176445 – from IDS), hereinafter “Streeter”.
Regarding claims 4 and 9, Fukumoto discloses the limitations as discussed above with respect to claims 2 and 7. Fukumoto is silent regarding further comprising a setting unit configured to set a parameter used in the image processing, wherein the parameter includes at least one of a first parameter for setting how many pixels or more a workpiece is moved between successive images among the plurality of images to extract the workpiece as the moving workpiece, and a second parameter for setting a size of a margin region needed to be added to, as a region to be excluded from the latest image, a periphery of a pixel region of the moving workpiece.
However, in the same field of endeavour or analogous art, Streeter teaches the claimed features of a setting unit configured to set a parameter used in the image processing, wherein the parameter includes at least one of a first parameter for setting how many pixels or more a workpiece is moved between successive images among the plurality of images to extract the workpiece as the moving workpiece, and a second parameter for setting a size of a margin region needed to be added to, as a region to be excluded from the latest image, a periphery of a pixel region of the moving workpiece (See [0105] In FIG. 17 the train is moving to the left. In FIG. 18, the bottle is moving to the right and in a circular motion towards and away from the camera. In FIG. 19, one hand is moving to the left and the other is moving down. The optical flow method used provides the desired invariance to brightness effects, including illumination variance between frames and the slight uneven illumination in the simple amplitude images. With the optical flow estimation, increasing the smoothness of the flow vector images can lead to more accurate estimation within an object region, but with a tradeoff of less accurate delineation of the object boundaries in the estimated flow. The overspill of flow vectors into the static background area within the region of motion can lead to erroneous motion estimation, which in turn leads to a possible source of error in the motion correction. Sufficiently accurate optical flow estimates can be obtained for any given scene by manually tuning the parameters to appropriate values, but because the actual parameter values required is scene dependent automatic tuning is preferred. – Examiner comment: Streeter suggests the feature, in a processing device which captures an image of a moving object and executes image processing on the captured image, of manually tuning a parameter to an appropriate value in order to appropriately execute the image processing.).
Therefore, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Fukumoto to implement a setting unit for the multiple parameters, as taught by Streeter, for the benefit of a parameter setting unit with which specific parameter settings can be easily selected, as appropriate, by a person skilled in the art in accordance with the specifications required for a device.
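The two claimed parameters can be illustrated with a hypothetical setting unit such as the following. The field names and default values are the editor's assumptions and do not come from Streeter, which teaches only that such image-processing parameters are tunable to appropriate values.

```python
from dataclasses import dataclass

@dataclass
class ImageProcessingParams:
    """Hypothetical setting unit for the two claimed parameters; names
    and defaults are illustrative assumptions."""
    # First parameter: movement (in pixels) between successive images
    # at or above which a workpiece is extracted as a moving workpiece.
    min_motion_pixels: int = 3
    # Second parameter: size of the margin region added around the
    # pixel region of the moving workpiece when excluding it from the
    # latest image.
    margin_pixels: int = 5

params = ImageProcessingParams(min_motion_pixels=2)
print(params.min_motion_pixels, params.margin_pixels)  # 2 5
```

Either parameter could then be passed into the motion-extraction and exclusion steps discussed for claims 2 and 7, with values tuned per scene as Streeter suggests.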
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See attached form PTO-892.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jaime Figueroa whose telephone number is (571)270-7620. The examiner can normally be reached on Monday-Friday 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles can be reached on 571-270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAIME FIGUEROA/Primary Patent Examiner, Art Unit 3656