DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1 and 5 are objected to because of the following informalities:
Claim 1 recites the limitation "the epipoloar" in line 29. There is insufficient antecedent basis for this limitation in the claim. This appears to be a typographical error and should read --epipolar--.
Claim 5 recites the limitation "the epipoloar" in line 11. There is insufficient antecedent basis for this limitation in the claim. This appears to be a typographical error and should read --epipolar--.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 14 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As to claim 14, the examiner finds it unclear what limitation is intended by simply listing a multitude of use environments for the method of claim 9. The claim fails to add any clear new step to the method of claim 9, and amounts to what may be considered a "Use" claim; as such, it is unclear what actual steps delimit how the method is actually practiced. Please see MPEP 2173.05(q). For examination purposes, the examiner interprets the claim as requiring only that the detector be capable of the intended use, and since no step or structural limitation is actually claimed, essentially any prior art showing the same claimed method/detector is capable of the claimed uses.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-7, 9-11 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (U.S. PGPub No. 2019/0304121 A1) in view of Hillebrand et al. (U.S. PGPub No. 2016/0073091 A1).
As to claim 1, Wang discloses and shows in figures 3a-b, 4a-c and Fig. 10, a method for calibrating at least one camera and at least one projector (LED, disclosed not shown) of a detector (camera, disclosed not shown), wherein the projector is configured for illuminating at least one object (GI tract/test cylinder) with at least one illumination pattern (i.e. dots shown in figure 4b-c) comprising a plurality of illumination features (i.e. dots), wherein the camera has at least one sensor element having a matrix (CCD with e.g. 320x240 pixels) of optical sensors, the optical sensors each having a light-sensitive area, wherein each optical sensor is designed to generate at least one sensor signal in response to an illumination of its respective light-sensitive area by a light beam propagating from the object to the camera, wherein the method comprises the following steps ([0039], ll. 3-9; [0044], ll. 1-7; [0051], ll. 1-5):
a) at at least one first predefined distance (Z1) of the object (shown as 250 in figure 2, defined as “object” as shown in figure 4a-c), illuminating the object with the illumination pattern by using the projector, imaging by using the camera at least one first reflection image comprising a plurality of first reflection features generated by the object in response to illumination by the illumination features (Fig. 4B, [0050], ll. 1-12; [0051], ll. 1-3);
b) at at least one second predefined distance (Z2) of the object different from the first distance, illuminating the object with the illumination pattern by using the projector, imaging by using the camera at least one second reflection image comprising a plurality of second reflection features generated by the object in response to illumination by the illumination features (Fig. 4C; [0051], ll.1-12; [0051], ll. 3-5);
c) evaluating the first reflection image (Fig. 4b) and the second reflection image (Fig. 4c) by using at least one evaluation device of the detector (via the processor disclosed), wherein the evaluation comprises ([0013], ll. 4-12; [0051]; [0076], ll. 19-25);
c1) matching the first reflection features (dots in figure 4b, one of which is labeled as p1) and the second reflection features (dots in figure 4c, one of which is labeled as p2) considering the first and second predefined distances thereby determining pairs of matched first and second reflection features ([0050]); and
c2) determining an epipolar line (line drawn between dots p1 and p2 in figure 4d) for each of the pairs of the matched first and second reflection features, wherein the respective matched first and second reflection features lie on the epipoloar line ([0013], ll. 4-12; [0051]);
Wang discloses that the epipolar lines are used for "projector geometry calibration" and camera calibration, but does not explicitly disclose what type of geometric calibration is performed ([0049]; [0052]).
Wang does not explicitly disclose d) determining an alignment information of the camera and the projector using the determined epipolar lines by using the evaluation device, wherein the alignment information comprises translation and/or rotation between the camera and the projector or e) comparing the alignment information to at least one predefined nominal value for translation and/or rotation by using the evaluation device thereby determining a correction for translation and/or rotation or f) correcting an alignment of the projector and the camera in case the alignment information exceeds the predefined nominal value by more than at least one predefined tolerance range.
However, Hillebrand does disclose and show in figures 14-16 and in ([0090]-[0092]; [0095]) the use of epipolar lines of cameras/projectors as a means by which one can align a camera to a projector via translation or rotation (e.g. pitch, yaw, roll) and do so via a calibration relative to a known expected “predetermined threshold” (i.e. tolerance range) from a predefined nominal value (i.e. deviation).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Wang by d) determining an alignment information of the camera and the projector using the determined epipolar lines by using the evaluation device, wherein the alignment information comprises translation and/or rotation between the camera and the projector; e) comparing the alignment information to at least one predefined nominal value for translation and/or rotation by using the evaluation device, thereby determining a correction for translation and/or rotation; and f) correcting an alignment of the projector and the camera in case the alignment information exceeds the predefined nominal value by more than at least one predefined tolerance range, in order to provide the advantage of increased efficiency and accuracy. Calibrating the alignment between a projector and camera in general obviously produces a more accurate output during use, and further, as explicitly disclosed in Hillebrand, the auto-calibration technique using epipolar lines provides a rapid and highly repeatable method by which to continually confirm the alignment and working accuracy of a camera/projector pair ([0080]).
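For illustration of the calibration flow mapped above in steps c2) through f) — deriving epipolar lines from matched feature pairs and comparing the resulting alignment estimate against a predefined nominal value and tolerance — a minimal sketch follows. This is the editor's illustrative model only: the simple 2D angle estimate, the averaging, and all function and variable names are assumptions, not taken from Wang, Hillebrand, or the claims.

```python
import math

def epipolar_angle(p1, p2):
    # Direction of the epipolar line through one matched pair of
    # first/second reflection features (step c2), in radians.
    (x1, y1), (x2, y2) = p1, p2
    return math.atan2(y2 - y1, x2 - x1)

def alignment_correction(pairs, nominal_angle, tolerance):
    # Steps d)-f): estimate a rotation component of the alignment
    # information from the epipolar lines, compare it to the nominal
    # value, and return a correction only if the deviation exceeds
    # the predefined tolerance range; otherwise return None.
    angles = [epipolar_angle(p1, p2) for p1, p2 in pairs]
    measured = sum(angles) / len(angles)   # mean epipolar direction
    deviation = measured - nominal_angle   # step e): compare to nominal
    if abs(deviation) > tolerance:         # step f): outside tolerance
        return -deviation                  # rotation correction to apply
    return None

# Hypothetical matched first/second reflection features (image coords)
pairs = [((10.0, 20.0), (30.0, 22.0)),
         ((50.0, 40.0), (70.0, 42.0))]
print(alignment_correction(pairs, nominal_angle=0.0, tolerance=0.01))
```

The sketch reduces "alignment information" to a single rotation angle for clarity; the references contemplate translation and rotation (e.g. pitch, yaw, roll) more generally.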
As to claim 2, Wang does not disclose a method, wherein the method comprises at least one verification step, wherein the verification step comprises repeating steps a) to e) for verifying if the correction is successful such that the alignment information corresponds to the predefined nominal value at least within the predefined tolerance range.
However, Hillebrand does disclose in ([0076]) the concept of auto-calibration, which is designed to continually update the calibration variables in case of varying inconsistencies such as thermal instability. Obviously, this known concept of re-calibration involves repeating the steps claimed above to assure the alignment nominal value is within the predefined tolerance.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Wang with a method comprising at least one verification step, wherein the verification step comprises repeating steps a) to e) for verifying if the correction is successful such that the alignment information corresponds to the predefined nominal value at least within the predefined tolerance range, in order to provide the advantage of increased accuracy, as reinforcing a calibration routine obviously ensures the system under inspection is always maintained at the highest level of alignment.
As to claim 3, Wang as modified by Hillebrand discloses a method, wherein the method comprises determining the alignment information by evaluating one or more of tilt, shift, or distortion of the epipolar lines ([0092], the same modification and motivation listed above applies here, as roll, yaw and pitch are being interpreted as forms of shifting alignment).
As to claim 4, Wang discloses a method, wherein the method comprises identifying the first reflection features of the first reflection image and the second reflection features of the second reflection image by using at least one image analysis and/or image processing algorithm ([0051], where figures 4B-C are images, and therefore the method of calibration disclosed in Wang can inherently be interpreted as a form of "image analysis").
As to claim 5, Wang discloses a method, wherein the method comprises imaging a plurality of reflection images each comprising a plurality of reflection features at predefined distances of the object different from the first and second distance and performing steps c) to f) using the plurality of further reflection images, wherein step c) comprises matching the first reflection features (dots P1), the second reflection features (dots P2) and the further reflection features considering the first, second and further predefined distances thereby determining pairs of matched first, second and further reflection features (as shown 4D-4E, there are many matched lines between pairs P1 and P2), wherein step c) further comprises determining an epipolar line for each of the pairs of the matched first, second and further reflection features, wherein the respective matched first, second and further reflection features lie on the epipoloar line ([0050]-[0051]).
Wang does not explicitly disclose a continual calibration analysis where further reflection images are generated.
However, the examiner takes Office Notice that simply repeating the noted steps of the prior art to continually calibrate the system under test is well within the level of ordinary skill in the art. Obviously in doing so one can maintain an understanding that the projector/camera pair are continually falling within predefined operating tolerances.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Wang with a continual calibration analysis where further reflection images are generated, in order to provide the advantage of expected results and increased accuracy, as obviously continued calibration of the camera/projector under test ensures it is operating under the most optimal conditions.
As to claim 6, Wang discloses a method, wherein the correcting of the alignment comprises adapting a relative position of the projector and the camera and/or correcting the first and second reflection image ([0047], ll. 1-5; [0050]-[0051], the calibration as claimed provides the result of "correcting the first and second reflection image").
As to claim 7, Wang discloses a method for determining a position of at least one object, wherein the method comprises calibrating at least one camera and at least one projector of a detector by using the method for calibrating, wherein the method further comprises the following steps: illuminating (i.e. N beams as disclosed) at least one object (i.e. uniform background as disclosed) with at least one illumination pattern comprising a plurality of illumination features (dots pattern shown in figures 4B-C) using the projector and imaging by using the camera at least one reflection image comprising a plurality of reflection features (reflected light from the dots forming the images) generated by the object in response to illumination by the illumination features, wherein each of the reflection features comprises at least one beam profile (inherent in using light); and determining at least one longitudinal coordinate (the known 3D location inherently has at least one coordinate that can be interpreted as a "longitudinal coordinate") for each of the reflection features by analysis of its respective beam profile by using at least one evaluation device of the detector ([0048]; [0050]-[0051]).
As to claim 9, Wang discloses and shows in figure 8, a detector for determining a position of at least one object, the detector comprising
at least one projector (812) for illuminating at least one object with at least one illumination pattern comprising a plurality of illumination features ([0075], ll. 1-4; [0078], ll. 1-3),
at least one camera (816) having at least one sensor element having a matrix of optical sensors (i.e. CCD), the optical sensors each having a light-sensitive area, wherein each optical sensor is designed to generate at least one sensor signal in response to an illumination of its respective light-sensitive area by a reflection light beam propagating from the object to the camera, wherein the camera is configured for imaging ([0076], ll. 1-7; [0078], ll. 1-6);
at least one reflection image comprising a plurality of reflection features generated by the object in response to illumination by the illumination features ([0051]; Fig. 4B); and
at least one evaluation device configured for performing the method according to claim 1 (822) ([0078]).
As to claim 10, Wang discloses a detector, wherein the projector comprises at least one emitter and/or at least one array of emitters (as disclosed as LEDs), wherein each of the emitters is and/or comprises at least one element selected from the group consisting of at least one laser source, at least one semi-conductor laser, at least one double heterostructure laser, at least one external cavity laser, at least one separate confinement heterostructure laser, at least one quantum cascade laser, at least one distributed Bragg reflector laser, at least one polariton laser, at least one hybrid silicon laser, at least one extended cavity diode laser, at least one quantum dot laser, at least one volume Bragg grating laser, at least one Indium Arsenide laser, at least one Gallium Arsenide laser, at least one transistor laser, at least one diode pumped laser, at least one distributed feedback laser, at least one quantum well laser, at least one interband cascade laser, at least one semiconductor ring laser, at least one vertical cavity surface-emitting laser; at least one non-laser light source, at least one LED, or at least one light bulb ([0075], ll. 1-11).
As to claim 11, Wang discloses a detector, wherein the camera comprises at least one CCD sensor or at least one CMOS sensor ([0076], ll. 1-7).
As to claim 13, Wang discloses a mobile device comprising at least one detector, wherein the mobile device is one or more of a mobile communication device, a tablet computer, or a portable computer ([0074], ll. 7-10; [0077], ll. 1-6; where the examiner is interpreting the capsule as a mobile communication device, as it moves through a human GI tract and communicates the results after measuring).
As to claim 14, Wang discloses a method of using the detector, the method comprising using the detector for a purpose selected from the group consisting of a position measurement in traffic technology; an entertainment application; a security application; a surveillance application; a safety application; a human-machine interface application; a logistics application; a tracking application; an outdoor application; a mobile application (i.e. a mobile capsule); a communication application; a photography application; a machine vision application; a robotics application; a quality control application; a manufacturing application; a gait monitoring application; a human body monitoring application; home care; smart living, and an automotive application ([0074], ll. 7-10; [0077], ll. 1-6; where the examiner is interpreting the capsule as a mobile communication device, as it moves through a human GI tract and communicates the results after measuring).
Claim(s) 8 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. in view of Hillebrand et al. further in view of Schindler (WO 2019/042956 A1).
As to claims 8 and 12, Wang in view of Hillebrand does not explicitly disclose a method, wherein the analysis of the beam profile comprises determining at least one first area and at least one second area of the beam profile, wherein the evaluation device is configured for deriving a combined signal Q by one or more of dividing the first area and the second area, dividing multiples of the first area and the second area, or dividing linear combinations of the first area and the second area, wherein the determining of the longitudinal coordinate further comprises using at least one predetermined relationship between the combined signal Q and a longitudinal coordinate for determining the longitudinal coordinate of the reflection feature.
However, Schindler does disclose in (page 10, ll. 16-38) the use of an evaluation device that derives a combined signal Q by dividing first and second areas and where determining of the longitudinal coordinate uses a predetermined relationship between the combined Q signal and the longitudinal coordinate.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Wang in view of Hillebrand with a method, wherein the analysis of the beam profile comprises determining at least one first area and at least one second area of the beam profile, wherein the evaluation device is configured for deriving a combined signal Q by one or more of dividing the first area and the second area, dividing multiples of the first area and the second area, or dividing linear combinations of the first area and the second area, wherein the determining of the longitudinal coordinate further comprises using at least one predetermined relationship between the combined signal Q and a longitudinal coordinate for determining the longitudinal coordinate of the reflection feature, in order to provide the advantage of increased versatility. As explicitly noted by Schindler, this technique allows the determination to be "independent from the material properties and/or reflective properties and/or scattering properties of the object and independent from alterations of the light source such as by manufacturing precision, heat, water, dirt, damages on the lens, or the like" (page 11, ll. 9-12).
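For illustration of the claimed evaluation mapped above — deriving a combined signal Q by dividing a first-area signal by a second-area signal, then applying a predetermined relationship between Q and a longitudinal coordinate — a minimal sketch follows. This is the editor's illustrative model only: the piecewise-linear lookup table and all names are assumptions, not taken from Schindler or the claims.

```python
def combined_signal_q(first_area_signal, second_area_signal):
    # Derive Q by dividing the first-area and second-area sums of the
    # beam profile (one of the claimed alternatives for forming Q).
    return first_area_signal / second_area_signal

def longitudinal_coordinate(q, q_to_z):
    # Apply a predetermined relationship between Q and the longitudinal
    # coordinate z, here modeled as a piecewise-linear calibration
    # table of (Q, z) pairs obtained beforehand.
    pts = sorted(q_to_z)
    for (q0, z0), (q1, z1) in zip(pts, pts[1:]):
        if q0 <= q <= q1:
            t = (q - q0) / (q1 - q0)   # interpolate within the segment
            return z0 + t * (z1 - z0)
    raise ValueError("Q outside calibrated range")

# Hypothetical calibration table relating Q to distance z (mm)
table = [(0.5, 100.0), (1.0, 200.0), (2.0, 400.0)]
q = combined_signal_q(first_area_signal=1.5, second_area_signal=1.0)
print(longitudinal_coordinate(q, table))  # prints 300.0
```

The ratio form makes the depth estimate insensitive to overall intensity scaling, which is consistent with Schindler's stated independence from material and reflective properties of the object.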
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL P LAPAGE whose telephone number is (571)270-3833. The examiner can normally be reached Monday-Friday 8-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tarifur Chowdhury can be reached at 571-272-2287. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Michael P LaPage/Primary Examiner, Art Unit 2877