Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 16 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 7-8, 11-12, 14-15, 17 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gurevich; Vladimir (US 11108946 B1, hereinafter “Gurevich”).
Regarding claim 14, Gurevich teaches a focus tuning imaging system (Fig. 1: an AF imaging system) comprising:
a tunable optical element (Fig. 1, Col. 3, lines 66-67: variable focus (VF) lenses);
a controller in communication with the tunable optical element, the controller configured to control a focus of the tunable optical element (Fig. 1, Col. 3, lines 66-67: electrically controlled variable focus (VF) lenses);
an imaging sensor configured to capture images of a field of view of the imaging sensor (Fig. 1, Col. 7, lines 10-26 & claim 13: an imaging sensor configured to receive an image of the object and to generate an electrical signal indicative of the received image);
and a processor and computer-readable media storage having machine readable instructions stored thereon that, when the machine readable instructions are executed, cause the imaging system (Fig. 1, Col. 17, lines 47-67 & claim 13: a processor and computer-readable media. The computer-readable media stores machine readable instructions that, when executed, cause the focus stabilization imaging system) to:
obtain, by the imaging sensor, a plurality of images of a field of view of the imaging sensor (Figs. 1&4, Col. 10, lines 8-21 & claim 13: obtain, by the imaging system, a first plurality of images of the object of interest, wherein each image of the first plurality of images is obtained by the imaging system at a different optical focus);
identify, by the processor, a reference element in at least two of the images of the plurality of images thereby establishing a set of reference images (Figs. 1&4, Col. 10, lines 8-21 & claim 13: identify, by the processor, a region of interest in at least one image of the first plurality of images, wherein the region of interest contains an indicia indicative of the object of interest);
filter, by the processor, the set of reference images based on an image metric and establishing a filtered set of reference images (Figs. 1&4, Col. 10, lines 22-41 & claim 13: the processor determines a set of image quality values from the analyzed images. Each of the set of image quality values corresponds to a respective analyzed image in the first plurality of images. For example, at 412, the processor may analyze each of the first plurality of images by performing image processing on each of the plurality of images. The image processing may include applying a spatial lowpass filter, spatial highpass filter, Fourier lowpass or highpass filter, performing a noise reduction, a scaling, rotation, shearing, reflection, or another image filtering or image processing technique.);
identify, by the processor and from the set of filtered reference images, a plurality of focus values of the imaging system, each focus value corresponding to a respective reference image of the filtered set of reference images (Figs. 1&4-5, Col. 10, lines 55-67 & claim 13: the processor determining, at 416, a current alignment parameter of the imaging reader. The current alignment parameter is determined from at least one of the image quality values of the set of image quality values. The set of image quality values may include sharpness values S1, S2, and S3, corresponding to the first image (near focus), second image (middle focus), and third image (far focus), respectively, as illustrated in FIG. 5A);
determine, by the processor, a focus trend from the plurality of focus values (Figs. 1&4-5, Col. 11, lines 32-59 & claim 13: the calibration parameter value may be a sharpness ratio value of 1±0.05, 1±0.1, 1±0.2, 1±0.4, 1±0.6, or another sharpness ratio value and tolerance. A deviation of the sharpness ratio from a value of 1, or a tolerance range thereof, indicates that the focus of the imaging system has shifted or drifted over time, and that the VF optical element should be tuned to compensate for the deviation. The controller tunes the VF lens according to the determined tuning parameter. The controller uses the tuning parameter to control a VF lens to reduce any focal distance drift of the VF lens, and to maintain focus at, or approximately at, the middle focus plane.);
determine, by the processor, a focus compensation value from the plurality of focus values or the focus trend (Figs. 1&4-5, Col. 11, lines 32-59 & claim 13: At 418, the processor of the imaging reader 106 determines a tuning parameter by comparing the current alignment parameter with the calibration parameter obtained at 404.);
and tune, by the controller configured to control the focal length of a tunable optical element of the imaging sensor, the focal length of the tunable optical element according to the focus compensation value (Figs. 1&4-5, Col. 4, lines 44-56 & claim 13: The controller tunes the VF lens according to the determined tuning parameter. The controller uses the tuning parameter to control a VF lens to reduce any focal distance drift of the VF lens, and to maintain focus at, or approximately at the middle focus plane. In embodiments, a range of focus values that are within a tolerance of an optimal focal value may be acceptable for decoding of indicia and identification of an OOI. For example, a middle focal plane may be at a focal distance of 2±1 cm, 4±2 cm, 5±3 cm, 8±4 cm, 10±5 cm, or another focal distance having a focal distance tolerance.).
Regarding claim 15, Gurevich teaches the system of claim 14; in addition, Gurevich discloses wherein the machine readable instructions further cause the system to store, in a memory, data indicative of the focus trend (Figs. 1&4-5, Col. 17, lines 13-17 & claim 13: the calibration parameter values are stored in a memory, network, or other storage medium to use at a later time or by other systems for performing focus stabilization).
Regarding claim 17, Gurevich teaches the system of claim 14; in addition, Gurevich discloses wherein the tunable optical element comprises a liquid lens or an electrically tunable lens (Figs. 1&4-5, Col. 7, lines 40-48 & claim 22: The VF optical element 208 may be a deformable lens element, a liquid lens, a T-lens or another VF optical element.).
Regarding claim 20, Gurevich teaches the system of claim 14; in addition, Gurevich discloses wherein, to filter the set of reference images based on an image metric, the machine readable instructions further cause the system to: determine, by the processor, an image metric value for each image of the set of reference images (Figs. 1&4-5, Col. 10, lines 22-41 & claim 13: the processor determines a set of image quality values from the analyzed images. Each of the set of image quality values corresponds to a respective analyzed image in the first plurality of images.);
and determine, by the processor, the filtered set of reference images as images of the set of reference images with a respective image metric value (Figs. 1&4-5, Col. 10, lines 22-41 & claim 13: the processor may analyze each of the first plurality of images by performing image processing on each of the plurality of images. The image processing may include applying a spatial lowpass filter, spatial highpass filter, Fourier lowpass or highpass filter, performing a noise reduction, a scaling, rotation, shearing, reflection, or another image filtering or image processing technique.) within an image metric threshold (Figs. 1&4-5, Col. 15, lines 1-20 & claim 22: the set of image quality values may include sharpness values S1, S2, and S3, corresponding to the first image (near focus), second image (middle focus), and third image (far focus), respectively, as illustrated in FIG. 5A. In such an embodiment, the second image's sharpness value S2 may be above the sharpness threshold, Smin, for decoding of indicia in the image, and therefore the second image may be the preferred image for analyzing and decoding of indicia in the region of interest.).
Regarding claim 7, Gurevich teaches the method of claim 1; in addition, Gurevich discloses wherein filtering the set of reference images based on an image metric comprises: determining, by the processor, an image metric value for each image of the set of reference images (Figs. 1&4-5, Col. 10, lines 22-41 & claim 13: the processor determines a set of image quality values from the analyzed images. Each of the set of image quality values corresponds to a respective analyzed image in the first plurality of images.) and determining, by the processor, the filtered set of reference images as images of the set of reference images with a respective image metric value (Figs. 1&4-5, Col. 10, lines 22-41 & claim 13: the processor may analyze each of the first plurality of images by performing image processing on each of the plurality of images. The image processing may include applying a spatial lowpass filter, spatial highpass filter, Fourier lowpass or highpass filter, performing a noise reduction, a scaling, rotation, shearing, reflection, or another image filtering or image processing technique.) within an image metric threshold (Figs. 1&4-5, Col. 15, lines 1-20 & claim 22: the set of image quality values may include sharpness values S1, S2, and S3, corresponding to the first image (near focus), second image (middle focus), and third image (far focus), respectively, as illustrated in FIG. 5A. In such an embodiment, the second image's sharpness value S2 may be above the sharpness threshold, Smin, for decoding of indicia in the image, and therefore the second image may be the preferred image for analyzing and decoding of indicia in the region of interest.).
Regarding claim 8, Gurevich teaches the method of claim 7; in addition, Gurevich discloses wherein the image metric value comprises at least one of a sharpness value, a contrast value, a normalized sharpness value, a resolution, a pixels per module value, a modulation transfer function, and a spatial frequency content value (Figs. 1&4-5, Col. 10, lines 22-41: the image quality values for each of the first plurality of images may include a sharpness value, a contrast value, a spatial frequency content value, a noise measurement value, a dynamic range value, a measurement of image distortion, a blur value, or another value associated with an image or image quality.).
Regarding claim 11, Gurevich teaches the method of claim 1; in addition, Gurevich discloses wherein the reference element comprises at least one of a 1D barcode, 2D barcode, static QR code, dynamic QR code, UPC code, a predefined custom pattern, alphanumeric identifier, a feature having a spatial frequency content of greater than a 2 mil barcode or 2 pixels per module, or an element with a plurality of different sized features at different focuses of the imaging system (Figs. 1&4-5, Col. 13, lines 1-4: the scanning parameters include the types of indicia on the targets, such as whether the targets contain 1D or 2D barcodes, QR codes, UPC codes, or other identifying indicia.).
Regarding claim 12, Gurevich teaches the method of claim 1; in addition, Gurevich discloses wherein the reference element comprises at least one of text, electrical traces on a circuit board, one or more electrical components, grids on a surface, a pattern on a surface, predefined fiducial marks, or an outline of an object of interest (Figs. 1&4-5, Col. 10, lines 8-21: The controller, at 410, identifies a region of interest in at least one of the captured images. The region of interest in the at least one image may include a barcode, a serial number, alphanumeric, a graphic, or another indicia indicative of the target or edges of objects of interest (OOI)).
Regarding claims 1-4: method claims 1, 2, 3 and 4 and system claims 14, 15 and 17 are related as an apparatus and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claims 1, 2, 3 and 4 are similarly rejected under the same rationale as applied above with respect to the corresponding system claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Gurevich; Vladimir (US 11108946 B1, hereinafter “Gurevich”), in view of Bailey; Michael (US 20100172573 A1, hereinafter “Bailey”).
Regarding claim 6, Gurevich teaches the method of claim 1, and further discloses identifying a reference element in at least two of the images of the plurality of images (Figs. 1&4, Col. 10, lines 8-21 & claim 13: identify, by the processor, a region of interest in at least one image of the first plurality of images, wherein the region of interest contains an indicia indicative of the object of interest). Gurevich does not disclose providing, via a user interface, one or more images of the plurality of images to a user; receiving, via the user interface, a selection of a region of interest in one or more images of the plurality of images; and identifying, by the processor, the reference element in the region of interest.
However, Bailey discloses providing, via a user interface, one or more images of the plurality of images to a user; receiving, via the user interface, a selection of a region of interest in one or more images of the plurality of images; and identifying, by the processor, the reference element in the region of interest ([0103]: In 604, an indication of a region of interest may be received, where the region of interest includes one of the one or more lit areas. In other words, a portion of the image may be specified or indicated, e.g., via user input or programmatically. For example, in one embodiment, a user may specify the region of interest (ROI) by dragging a mouse over a display of the image. In another embodiment, an image analysis program may be used to programmatically detect and localize or indicate the ROI. The particular manner in which the ROI is indicated is not important, i.e., any approach may be used as desired.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate “providing, via a user interface, one or more images of the plurality of images to a user; receiving, via the user interface, a selection of a region of interest in one or more images of the plurality of images; and identifying, by the processor, the reference element in the region of interest” as taught by Bailey into Gurevich's focus stabilization method. The suggestion/motivation for doing so would be to allow increased interaction between a user and the imaging system.
Regarding claim 19, Gurevich teaches the system of claim 14, and further discloses identifying a reference element in at least two of the images of the plurality of images (Figs. 1&4, Col. 10, lines 8-21 & claim 13: identify, by the processor, a region of interest in at least one image of the first plurality of images, wherein the region of interest contains an indicia indicative of the object of interest). Gurevich does not disclose that the computer readable instructions further cause the system to: provide, via a user interface, one or more images of the plurality of images to a user; receive, via the user interface, a selection of a region of interest in one or more images of the plurality of images; and identify, by the processor, the reference element in the region of interest.
However, Bailey discloses provide, via a user interface, one or more images of the plurality of images to a user; receive, via the user interface, a selection of a region of interest in one or more images of the plurality of images; and identify, by the processor, the reference element in the region of interest ([0103]: In 604, an indication of a region of interest may be received, where the region of interest includes one of the one or more lit areas. In other words, a portion of the image may be specified or indicated, e.g., via user input or programmatically. For example, in one embodiment, a user may specify the region of interest (ROI) by dragging a mouse over a display of the image. In another embodiment, an image analysis program may be used to programmatically detect and localize or indicate the ROI. The particular manner in which the ROI is indicated is not important, i.e., any approach may be used as desired.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate “provide, via a user interface, one or more images of the plurality of images to a user; receive, via the user interface, a selection of a region of interest in one or more images of the plurality of images; and identify, by the processor, the reference element in the region of interest” as taught by Bailey into Gurevich's focus stabilization system. The suggestion/motivation for doing so would be to allow increased interaction between a user and the imaging system.
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Gurevich; Vladimir (US 11108946 B1, hereinafter “Gurevich”), in view of Vinogradov et al. (US 20210294071 A1, hereinafter “Vinogradov”).
Regarding claim 9, Gurevich teaches the method of claim 1 (Figs. 1&4-5, Col. 10, lines 8-21: The controller, at 410, identifies a region of interest in at least one of the captured images. The region of interest in the at least one image may include a barcode, a serial number, alphanumeric, a graphic, or another indicia indicative of the target or OOI.).
Gurevich does not teach receiving, via a user interface, from a user an input indicative of a condition for performing focus tuning.
However, Vinogradov discloses receiving, via a user interface, from a user an input indicative of a condition for performing focus tuning ([0027]-[0028]&[0062]: lens behavior model is used by the process 204 to determine what the actual, desired focus position should be given changes in behavioral conditions of the lens. These changes can be a change in temperature, a change in age/time, and/or a change in drift position. The changes may result from a comparison of a current state of the machine vision system, such as its current temperature or current age/time, in comparison to the baseline state, such as an initial startup of the machine vision system, a first time use of the machine vision system, a factory released state of the machine vision system, or some other baseline state.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate “receiving, via a user interface, from a user an input indicative of a condition for performing focus tuning” as taught by Vinogradov into Gurevich's focus stabilization method. The suggestion/motivation for doing so would be to track the ageing of a liquid lens and the drift of the liquid lens over time and over temperature and compensate for the same (Vinogradov: [0002]).
Regarding claim 10, the Gurevich and Vinogradov combination teaches the method of claim 9; in addition, Vinogradov discloses wherein the condition includes at least one of a temporal frequency of performing focus tuning, a number of scanning operations, a lens temperature fluctuation, environmental temperature fluctuation, or aging of a lens ([0027]-[0028]&[0062]: the lens behavior model is used by the process 204 to determine what the actual, desired focus position should be given changes in behavioral conditions of the lens. These changes can be a change in temperature, a change in age/time, and/or a change in drift position. The changes may result from a comparison of a current state of the machine vision system, such as its current temperature or current age/time, to a baseline state, such as an initial startup of the machine vision system, a first time use of the machine vision system, a factory released state of the machine vision system, or some other baseline state.).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Gurevich; Vladimir (US 11108946 B1, hereinafter “Gurevich”), in view of Islam et al. (US 10373336 B1, hereinafter “Islam”).
Regarding claim 13, Gurevich teaches the method of claim 1, except wherein the reference element is selectively removable from the imaging system.
However, Islam discloses wherein the reference element is selectively removable from the imaging system (as illustrated in Figs. 1B&5B-6, Col. 8, lines 37-46 and Col. 17, lines 25-33: a control circuit 111 may control the robot arm to move the calibration pattern to the plurality of locations. The robot control system 110 may control both cameras 170, 180 to capture respective images of the calibration pattern 160 in order to perform camera calibration. The calibration pattern 160 may be permanently disposed on the robot 150, or may be a separate component that can be attached to and detached from the robot 150).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate “wherein the reference element is selectively removable from the imaging system” as taught by Islam into Gurevich's focus stabilization method. The suggestion/motivation for doing so would be to allow a calibration pattern to be moved to a plurality of locations that are distributed within the field of view of the camera (Islam: Col. 14, lines 37-53).
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Gurevich; Vladimir (US 11108946 B1, hereinafter “Gurevich”), in view of Hagino; Yoshio (US 20060044452 A1, hereinafter “Hagino”).
Regarding claim 18, Gurevich teaches the system of claim 14; in addition, Gurevich discloses wherein the machine readable instructions further cause the system to: determine, by the processor, that a focal distance of the imaging system has exceeded a focus threshold value (Figs. 1&4-5, Col. 15, lines 1-20: the second image's sharpness value S2 is above the sharpness threshold, Smin, for decoding of indicia in the image, and therefore the second image is the preferred image for analyzing and decoding of indicia in the region of interest.); and provide, to a user, an indication (Figs. 1&4-5, Col. 6, lines 32-36: The imaging reader 106 further includes a display for providing information such as visual indicators, instructions, data, and images to a user.). Gurevich does not explicitly disclose that the indication is an indication that the focal distance has exceeded the threshold value.
However, Hagino discloses wherein the indication is an indication that the focal distance has exceeded the threshold value (as illustrated by Fig. 4, [0056]: a focus state is indicated by a number of rectangular strip forms as shown by the focus state indication 41. When the focus state is an in-focus state, the in-focus state is indicated by blinking of the rectangular strip forms as shown by the in-focus indication 43. When the focus state is a state where measurement is impossible, the indication is displayed as shown by the impossible measurement indication 42.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate “wherein the indication is an indication that the focal distance has exceeded the threshold value” as taught by Hagino into Gurevich's focus stabilization system. The suggestion/motivation for doing so would be to make it possible for a user to judge the focus state of a camera easily and to enable easy confirmation and adjustment of the focus (Hagino: [0060]).
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELAAZIZ TISSIRE whose telephone number is (571)270-7204. The examiner can normally be reached on Monday through Friday from 8 AM to 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ye Lin can be reached on 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDELAAZIZ TISSIRE/ Primary Examiner, Art Unit 2638