DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments and amendments received December 31, 2025 have been fully considered. With regard to 35 U.S.C. § 102, Applicant argues that the cited prior art does not disclose the claimed limitations (see Applicant's arguments, pages 10-14). These arguments correspond to claims 1-19 and 21.
These arguments have been considered but are not persuasive, as addressed below. See the rejections below for how the art of record reads on the claimed invention, as well as the examiner's interpretation of the cited art in view of the presented claim set. Furthermore, in response to Applicant's argument, Moore teaches: "a user may define one or more parameters to cause the image system 113 to switch between the various sources to capture images of different portions of the surface 119 of the component 109, in an embodiment. The one or more parameters may include for example, an image-capture-quantity parameter, which may be defined and activated by a user to cause the image system 113 to switch between sources following the capture of a defined number of images by any one source." (Moore, para. 0024).
Therefore, the examiner maintains the rejection, since the system of Moore operates as an automated system in which object detection generates a trigger and the system automatically captures a defined number of images, as outlined at least in para. 0024.
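For illustration only, the trigger-driven capture behavior attributed to Moore (paras. 0023-0024) can be sketched as follows. The class name `ImageSystem`, the method `on_trigger`, and all values are illustrative assumptions of this sketch, not disclosures of the cited reference:

```python
# Illustrative sketch (not from the reference): when the triggering device
# detects an object, the image system automatically captures a user-defined
# number of images of the detected component.

class ImageSystem:
    def __init__(self, images_per_trigger):
        self.images_per_trigger = images_per_trigger  # user-defined quantity
        self.captured = []  # record of (component, frame index) captures

    def on_trigger(self, component_id):
        """Capture the defined number of images for the detected component."""
        for frame_index in range(self.images_per_trigger):
            self.captured.append((component_id, frame_index))
        return self.images_per_trigger

system = ImageSystem(images_per_trigger=3)
count = system.on_trigger("component-109")  # trigger fires; 3 frames captured
```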
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-7, 9-19, and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Moore et al. (US 2003/0146285).
Regarding claim 1, Moore teaches:
1. A method of controlling a capture of one or more digital image frames of a product on a conveyor line, the method performed by a manufacturing control device,
Moore, Fig. 1
the manufacturing control device being in communication with at least one camera, the method comprising: receiving, by the manufacturing control device, a product position trigger signal indicating that the product has entered an imaging zone of the conveyor line;
[0023] In one embodiment, the triggering device 107 may comprise an optical sensor which transmits and detects a reflected beam (not shown) for example, and which identifies the presence of an object (e.g., the component 109) at a location on the conveyor belt 103 via an interference with the reflected beam. In response to the trigger signal communicated from the triggering device 107, the image system 113 may capture multiple images of at least a portion of a surface 119 of the component 109. The multiple images may then be stored and processed to identify and read any symbol codes (e.g., symbol codes 111a and 111b) affixed to the surface 119 of the component 109 to enable tracking or identification of the component 109, and to ensure that acceptable identifying information has been affixed to the component 109 via a matrix code or the like, as desired by the user.
Moore, 0005, 0023 and fig. 1, emphasis added
determining, by the manufacturing control device, a shutter trigger sequence for the at least one camera such that at least a predetermined number of the digital image frames are captured while the product is within the imaging zone of the conveyor line;
[0023] In one embodiment, the triggering device 107 may comprise an optical sensor which transmits and detects a reflected beam (not shown) for example, and which identifies the presence of an object (e.g., the component 109) at a location on the conveyor belt 103 via an interference with the reflected beam. In response to the trigger signal communicated from the triggering device 107, the image system 113 may capture multiple images of at least a portion of a surface 119 of the component 109. The multiple images may then be stored and processed to identify and read any symbol codes (e.g., symbol codes 111a and 111b) affixed to the surface 119 of the component 109 to enable tracking or identification of the component 109, and to ensure that acceptable identifying information has been affixed to the component 109 via a matrix code or the like, as desired by the user.
Moore, 0005, 0023-0024 and fig. 1, emphasis added
and controlling, by the manufacturing control device, the capture of at least the predetermined number of the digital image frames via the at least one camera using the determined shutter trigger sequence upon receipt of the product position signal.
[0024] In various embodiments in accordance with the teachings of the present invention, the multiple images of the surface 119 of the component 109 may be captured via any one of a number of sources, such as an internal image sensor of the image system 113, as will be discussed in greater detail below, via the external camera 115, or via other sources coupled to the image system 113. In addition, a user may define one or more parameters to cause the image system 113 to switch between the various sources to capture images of different portions of the surface 119 of the component 109, in an embodiment. The one or more parameters may include for example, an image-capture-quantity parameter, which may be defined and activated by a user to cause the image system 113 to switch between sources following the capture of a defined number of images by any one source. Another of the one or more parameters may comprise a time parameter, which also may be defined and activated by the user to cause the image system 113 to switch between sources after a defined period of time has elapsed.
Moore, 0005, 0023-0024 and fig. 1, emphasis added
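For illustration only, the source-switching behavior quoted from para. 0024 (switching sources after a defined number of captures) can be sketched as follows. The function name and values are illustrative assumptions of this sketch, not part of the cited reference:

```python
# Illustrative sketch (not from the reference): after a defined number of
# images from one source, the system switches to the next source
# (e.g., internal image sensor, external camera).

from itertools import cycle

def capture_plan(sources, images_per_source, total_images):
    """Return which source captures each image, switching sources
    after every `images_per_source` captures."""
    plan = []
    source_cycle = cycle(sources)
    current = next(source_cycle)
    for i in range(total_images):
        if i and i % images_per_source == 0:
            current = next(source_cycle)  # quantity reached: switch source
        plan.append(current)
    return plan

plan = capture_plan(["internal_sensor", "external_camera"],
                    images_per_source=2, total_images=6)
# two frames per source, alternating between the two sources
```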
Regarding claim 2, Moore teaches:
2. The method of claim 1, wherein the determining the shutter trigger sequence for the at least one camera further comprises: determining, by the manufacturing control device, an image frame duration for each of the predetermined number of the digital image frames.
[0025] In other embodiments, the user may define an interval of time to elapse between each image capture, regardless of source, to adjust the effective field of view of the image system 113 in the automated identification system 101. In embodiments in accordance with the teachings of the present invention, the interval may comprise an identical period of time between pairs of successive image captures, or may vary with each successive pair of image captures. In one embodiment, the interval may be defined as zero to cause a continuous image capture limited only by the capture rate of the source.
Moore, 0005, 0025, emphasis added
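For illustration only, the interval behavior quoted from para. 0025 (an identical interval between pairs of captures, a varying interval, or a zero interval for continuous capture) can be sketched as follows. The function name is an illustrative assumption of this sketch:

```python
# Illustrative sketch (not from the reference): a user-defined interval,
# fixed or varying per gap, controls the timing of successive captures;
# a zero interval models continuous capture limited only by source rate.

def capture_timestamps(num_frames, intervals):
    """Timestamps (seconds) of `num_frames` captures. A one-element
    `intervals` list models an identical interval between all pairs;
    a longer list models an interval that varies per successive pair."""
    t, times = 0.0, [0.0]
    for k in range(num_frames - 1):
        t += intervals[k % len(intervals)]
        times.append(t)
    return times

fixed = capture_timestamps(4, [0.5])   # identical interval between pairs
burst = capture_timestamps(3, [0.0])   # zero interval: continuous capture
```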
Regarding claim 3, Moore teaches:
3. The method of claim 1, wherein the determining the shutter trigger sequence for the at least one camera is based, at least in part, on at least one image frame rate of the at least one camera.
[0025] In other embodiments, the user may define an interval of time to elapse between each image capture, regardless of source, to adjust the effective field of view of the image system 113 in the automated identification system 101. In embodiments in accordance with the teachings of the present invention, the interval may comprise an identical period of time between pairs of successive image captures, or may vary with each successive pair of image captures. In one embodiment, the interval may be defined as zero to cause a continuous image capture limited only by the capture rate of the source.
Moore, 0025, emphasis added
Regarding claim 4, Moore teaches:
4. The method of claim 1, wherein the at least one camera comprises a fixed focus lens.
[0027] With reference now primarily to FIG. 2, a block diagram illustrating one embodiment of an apparatus 201 that may be used for the image system 113 is shown in accordance with the teachings of the present invention. In the illustrated embodiment, the apparatus 201 includes an illumination element 203, which may comprise a plurality of light emitting diodes ("LED"), or the like, to illuminate the surface 119 of the component 109 (see, e.g., FIG. 1) to enable images to be captured. The apparatus 201 also includes a lens 205 for collecting and focusing light onto an internal image sensor 207, which may comprise a CCD, a CMOS image sensor, or other suitable device, in various embodiments.
Moore, 0006, 0027, emphasis added.
Regarding claim 5, Moore teaches:
5. The method of claim 4, wherein the product is disposed on the conveyor line in at least one of: a vertical orientation, a horizontal orientation, or an intermediate orientation, the method further comprising: receiving, by the manufacturing control device, a conveyor line speed, wherein the determining the shutter trigger sequence for the at least one camera is based, at least in part, on the conveyor line speed.
[0042] FIG. 13 shows a pair of captured images 1301 and 1303 that may represent completely distinct portions of the surface 119 of the component 109 moving through the automated identification system 101 (see, e.g., FIG. 1). The captured images 1301 and 1303 may be completely distinct either because of the duration of the user-specified interval implemented between the successive images 1301 and 1303, as discussed above, or based on the speed of the conveyor belt 103 (see, e.g., FIG. 1), or the like, or a combination of such factors. In any event, consideration of the speed at which components are moving through the automated identification system 101 (FIG. 1) and/or the user-specified interval between successive image captures, enables the user to capture slightly overlapping images, such as those illustrated in FIG. 14. Overlapping the image captures increases image-processing quality by helping to ensure that any symbol codes (e.g., the symbol codes 111a and 111b, FIG. 1) affixed to the surface 119 of the component 109 (FIG. 1) will be captured in an image to permit processing and accurate identification and reading thereof. This enables the user of the automated identification system 101 (FIG. 1) to rely less on the setup of the image system in relation to the position of the triggering device, and/or timing issues related to the speed of the components (e.g., the component 109) to accurately identify and read symbol codes (e.g., the symbol codes 111a and 11b, FIG. 1).
Moore, 0005-0006, 0042 and Fig. 1, emphasis added.
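For illustration only, the relationship described in para. 0042 (choosing the capture interval in view of conveyor speed so that successive images slightly overlap) can be sketched as a simple calculation. The function name and the example numbers are illustrative assumptions of this sketch, not values from the cited reference:

```python
# Illustrative sketch (not from the reference): the largest interval
# between captures such that successive images of a moving component
# still overlap by a desired fraction of the camera's field of view.

def max_interval_for_overlap(fov_length_mm, belt_speed_mm_s, overlap_fraction):
    """Largest interval (s) between captures so that successive images
    overlap by at least `overlap_fraction` of the field of view."""
    # Belt travel allowed between captures before overlap drops too low:
    travel = fov_length_mm * (1.0 - overlap_fraction)
    return travel / belt_speed_mm_s

# e.g. a 100 mm field of view, belt at 200 mm/s, 10% overlap desired:
interval = max_interval_for_overlap(100.0, 200.0, 0.10)  # 0.45 s
```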
Regarding claim 6, Moore teaches:
6. The method of claim 4, wherein the at least one camera further comprises a liquid lens, the liquid lens having a focus rise time, the method further comprising: determining, by the manufacturing control device, a focus trigger sequence for the at least one camera such that the at least predetermined number of the digital image frames are captured at one or more focus points while the product is within the imaging zone of the conveyor line, wherein the determining the shutter trigger sequence for the at least one camera is based, at least in part, on the focus rise time.
Moore, 0027, 0031-0033
Regarding claim 7, Moore teaches:
7. The method of claim 1, wherein the product is disposed on the conveyor line in at least one of: a vertical orientation, a horizontal orientation, or an intermediate orientation, and the conveyor line is stopped during the capture of the at least predetermined number of the digital image frames via the at least one camera.
Moore, 0042
Regarding claim 9, Moore teaches:
9. The method of claim 1, wherein the captured digital image frames are at least one of: one or more digital still images, or one or more digital image frames of a digital video stream.
Moore, 0044
Regarding claim 10, Moore teaches:
10. The method of claim 1, wherein the determined shutter trigger sequence controls the at least one camera such that at least some of the captured digital image frames each represent at least one of: a respectively different focused depth profile of the product, or a respectively different focused region of the product.
Moore, 0007
Claims 11-18 recite elements similar to those of claims 1-7 and 10, but in device form rather than method form. Therefore, the supporting rationale of the rejection of claims 1-7 and 10 applies equally to claims 11-18.
Regarding claim 21, Moore teaches:
21. (New) The method of claim 1, further comprising controlling, by the manufacturing control device, a process parameter of the manufacturing control device such that at least said predetermined number of the digital image frames are captured while the product is within the imaging zone of the conveyor line.
Moore, 0005, 0023-0024 and fig. 1
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Moore et al. (US 2003/0146285) as applied to claims 1-7 above, and further in view of Boucherie (US 6,315,103).
Regarding claim 8:
8. The method of claim 1, wherein the product is at least one of: a toothbrush, a hairbrush, or a non-round hairbrush. Moore fails to explicitly teach this limitation; however, Boucherie teaches:
(18) By means of this camera 18, it is determined in which manner the toothbrush bodies 3 are situated on the transport conveyor 8. The toothbrush bodies 3 with a position fulfilling certain criteria are taken up by the manipulator 5 and presented to the supply means 2. Hereby, the transport conveyor 8 preferably is brought to a standstill each time when a toothbrush body 3 to be removed has arrived under the manipulator 5.
Boucherie, col. 5, lines 8-45, emphasis added.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Boucherie with the system of Moore such that the product is at least one of: a toothbrush, a hairbrush, or a non-round hairbrush, in order to detect the position of the supplied toothbrush bodies by a visual recognition system, control a sorting element as a function of the detections performed by the recognition system, and separate, by this sorting element, at least a number of the toothbrush bodies, arranging them at least partially ordered in discharge means (Boucherie, Abstract).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T TEKLE whose telephone number is (571)270-1117. The examiner can normally be reached Monday-Friday 8:00-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL T TEKLE/Primary Examiner, Art Unit 2481