DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Remarks
Claim Objections
The objections to the claims are withdrawn in light of Applicant’s amendments.
Claim Rejections - 35 USC § 112(a)
The rejections of the claims are withdrawn in light of Applicant’s amendments cancelling the claims.
Claim Rejections - 35 USC § 112(b)
The rejections of the claims are withdrawn in light of Applicant’s amendments cancelling the claims.
Claim Rejections - 35 USC § 103
Applicant's arguments filed 1/23/2026 have been fully considered but they are not persuasive.
First, Applicant appears to mischaracterize and/or misunderstand the disclosure of Kato.
Applicant states that “Kato discloses that a slit light projector emits a slit light that is aligned with a pattern. Alignment between the slit light and the pattern is used to determine an x-y position of the robot”. This characterization, however, ignores at least half of the disclosure of Kato. Kato also uses a plate with a 9-dot pattern, which was expressly recited in the rejections provided in the preceding Office Action dated 10/09/2025. See Figure 4 for an illustration thereof. Applicant does not appear to address this portion of the disclosure in any manner.
Second, Applicant states that “Kato fails to disclose that its system detects a size mismatch between the current image and the expected image”.
As an initial matter, this appears to be a conclusory statement. Applicant does not provide any support with respect to the actual disclosure of Kato for the provided conclusion.
Further, Applicant does not define “size mismatch” or the nature of “detecting a size mismatch” within the claims. Applicant does not even appear to define “size mismatch” within the originally filed specification. See [0039] of Applicant’s originally filed specification, wherein the term is not defined in any manner. Consequently, the broadest reasonable interpretation thereof is particularly broad under the plain meaning of the terms used in the phrase. Producing any information indicative of a size mismatch, where a “size” might relate to the size of any feature of anything in an image (including the magnitude of one coordinate relative to another), would appear to read on the claim. A mismatch in size is inherent for any two patterns defined by multiple points that do not match.
Moreover, regardless of Applicant’s assertions, Kato discloses detecting a size mismatch, particularly inasmuch as the nature thereof is undefined (see above). See at least Figures 3 and 4, which are used in the first portion of calibration per Column 1, Lines 57 – 62 of Kato: “The invention also includes a calibration unit having a plurality of target plates, each bearing an image which is viewed by the robot camera and compared with a stored image to determine the magnitude of and out-of-tolerance condition of the camera and slit light unit”. The first target plate used has a “pattern of 9 black dots 46 spaced evenly on the plate against a contrasting light background” (Column 3, Lines 33 – 34). Kato also discloses generally how the correction factor/offset M is calculated. See Column 3, Lines 62 – 68: “The difference between the actual and desired position is calculated to yield a correction factor M, which is a function of the x, y, z, .theta. x, .theta. y, .theta. z coordinates which are added to the programmed coordinates for position A so that the perceived camera image will appear as in FIG. 4 when the correction factor M is added to the programmed coordinates for A” (emphasis added). And the correction factor “M represents sets of spatial coordinates (x,y, z, .theta.x, .theta.y, .theta.z)” (Column 2, Lines 17 and 18). Any misalignment/offset causes the observed pattern, or the 2D shape defined thereby (see again Figures 3 and 4), to differ in size and/or shape, necessitating correction. Compare, for example, Figure 3 to Figure 4. The connecting lines clearly illustrate that the lengths of the lines differ in addition to the shapes differing.
Third, Applicant states that “Kato fails to disclose that its system … determines that a distance between the end of arm tooling and the interface object does not match an expected result based on the size mismatch”. As shown above, Kato compares an actual pattern image to an expected pattern image and determines an offset defined in “x,y, z, .theta.x, .theta.y, .theta.z”. The offset is directly dependent on the mismatch between the patterns, which will include one or more aspects constituting a “size mismatch”; detection of that mismatch is clearly indicated inasmuch as the offset exists.
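For purposes of illustration only, the principle discussed above may be sketched as follows (Python; the point values, the uniform 0.9 scale, and the pinhole-camera scale-to-distance relationship are hypothetical assumptions, not a characterization of Kato's actual computation). Comparing a perceived 9-dot pattern against a stored pattern yields both a lateral offset (a shift of the pattern centroid) and a size mismatch (a scale ratio), the latter being indicative of an incorrect camera-to-target distance:

    import numpy as np

    # Stored/expected 9-dot pattern (cf. Kato Figure 4) and a hypothetical
    # perceived pattern that is shrunken and shifted (cf. Kato Figure 3).
    expected = np.array([[x, y] for y in (0.0, 1.0, 2.0) for x in (0.0, 1.0, 2.0)])
    observed = 0.9 * expected + np.array([0.05, -0.20])

    # Lateral offset: shift between the two pattern centroids.
    offset_xy = observed.mean(axis=0) - expected.mean(axis=0)

    # "Size mismatch": ratio of the mean radial extents of the two patterns.
    # Under a pinhole-camera assumption, apparent size varies inversely with
    # distance, so a ratio other than 1.0 indicates that the camera-to-target
    # distance does not match the expected result.
    def mean_extent(points):
        return np.linalg.norm(points - points.mean(axis=0), axis=1).mean()

    scale = mean_extent(observed) / mean_extent(expected)
    size_mismatch_detected = abs(scale - 1.0) > 1e-3

    print(offset_xy, scale, size_mismatch_detected)  # scale = 0.9; mismatch detected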
Fourth, Applicant states “Kato fails to make any disclosure at all related to a size of an image”.
It appears that Applicant may be arguing the size of the image itself, rather than a size of something in, or captured by, the image. This argument is not in alignment with the language of the claims. The claims recite “a size mismatch between the current image and the expected image”. There is no recitation of “a size of an image”. Furthermore, such a reading does not appear supported or even logically sensible, as no varying resolutions, cropping or segmenting of images, or the like are described.
If instead Applicant is arguing that Kato does not disclose a “size”, Examiner refers Applicant again to the above discussion of size. To reiterate, Applicant fails to define “size”, “size mismatch”, etc. within the claims or even the specification, such that the terms remain open to their particularly broad plain meaning.
Fifth, Examiner again notes that MPEP 2163 strongly suggests that “Applicant should ... specifically point out the support for any amendments made to the disclosure”. Applicant’s limitations of:
“detecting a size mismatch between the current image and the expected image; and
determining that a distance between the end of arm tooling and the interface object does not match an expected result based on the size mismatch”
either appear broader than argued in light of Applicant’s specification, or should be rejected under 35 USC § 112(a) as lacking support and constituting new matter. Presently, Examiner finds the limitations broad such that they are supported.
The only support Examiner located for the claimed limitations/features appears in [0039], which reads:
“In an embodiment in which the sensor of the robotic arm 210 is an optical sensor, such as an optical imaging device (such as a camera) and/or laser device, the optical sensor may be configured to determine a relative position of the end of arm tooling 218 and optical sensor relative to the interface object 260. For example, to determine a vertical, horizontal, and/or angular position of the end of arm tooling 218, the optical sensor may determine whether the optical sensor is properly aligned with the alignment feature 262 and whether the known shape and/or size of the alignment feature 262 matches an expected result. For example, the known size and shape of the alignment feature 262 may be associated with a preprogrammed expected image of the optical sensor when the robotic arm is properly positioned at an actual position of the interface object 260. Any mismatch in the current image of the optical sensor and the expected image may be indicative of the programmed position of the interface object 260 being offset from the actual position of the interface object 260. A size mismatch between the images may indicate that a distance between the robotic arm 210 and the actual position of the interface object 260 is incorrect and that the actual position of the interface object 260 does not match the programmed position. Based on the image comparisons, the sensor may cause a controller of the robotic arm 210 to adjust a position of the robotic arm 210 until the expected image and current image match. The programmed position may then be updated to match the measured actual position within the three-dimensional coordinate system”
It thus appears clear in light of Applicant’s disclosure that the limitation of:
“determining that a distance between the end of arm tooling and the interface object does not match an expected result based on the size mismatch”
simply means that an offset is found, rather than that a measured distance is affirmatively determined not to match an expected distance, with the size mismatch merely being a condition under which this occurs (as it “may indicate”).
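For purposes of illustration only, the adjust-until-match behavior described in [0039] (“adjust a position of the robotic arm 210 until the expected image and current image match”) reduces to a feedback loop in which whatever offset the image comparison indicates is applied until the images agree. In the sketch below (Python), the positions are hypothetical and the image comparison is replaced by an idealized stand-in that returns the residual offset directly; this is not drawn from Applicant’s disclosure or Kato:

    import numpy as np

    actual = np.array([10.20, -3.10, 55.00])    # true interface-object position (unknown to the controller)
    position = np.array([10.00, -3.00, 54.50])  # programmed/expected position the arm was taught

    for _ in range(100):
        # Stand-in for the image comparison; in practice the offset would be
        # inferred from positional and size mismatch between the current and
        # expected images.
        offset = actual - position
        if np.linalg.norm(offset) < 1e-6:       # "expected image and current image match"
            break
        position = position + 0.5 * offset      # partial correction step, then re-image

    programmed = position                       # taught point updated to the measured actual position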
Examiner also notes the similarity of Kato and the disclosure. Compare “adjust a position of the robotic arm 210 until the expected image and current image match” of [0039] to Column 3, Lines 67 – 68 of Kato, “so that the perceived camera image will appear as in FIG. 4 when the correction factor M is added to the programmed coordinates for A”. [0039] appears merely to describe how the images will compare under given conditions, rather than directly discussing detection of the mismatches, let alone how they are specifically used. The same applies to Kato.
Consequently, these limitations do not appear to meaningfully add to the existing limitations, which already require a comparison between optical sensor data and the known size and shape of the alignment feature, i.e., comparing a current and expected image of the interface object and determining a vertical and horizontal position based on the comparison.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3 – 4, 6 – 11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Eliuk et al. (US 20100241270 A1) in view of Kato (US 5083073 A).
Regarding Claim 1, Eliuk teaches:
A method of performing location teaching of a robotic arm, comprising:
maneuvering an end of arm tooling of a robotic arm (See at least [0065] “The processing chamber 204 includes a multiple degree of freedom robotic arm 218, and the robotic arm 218 further includes a gripper that can be used, for example, to pick items from a pocket on a rack or to grasp items within the APAS 100 for manipulation”) to an expected position of an interface object (See at least [0309] “Direct taught points in a local coordinate reference frame represent the location of the interfaces. A series of three taught points defines a reference frame for the robot 4100. The robot 4100 can work accurately within the referenced frame. The APAS controller commands the robot 4100 to move to various features in the APAS with known geometry” and [0313] “In a manual teaching process, the operator manually guides the robot to the taught reference point using a robot teach pendant and the assistance of software controls”), wherein:
the robotic arm is mounted within a mounting site of a mechanical mounting structure (See at least “robot mounting flange” of [0318], Figure 2, Figure 33A, and Figure 33B);
the interface object is positioned on a sub-system of a medication dosing system that is mounted on the mechanical mounting structure (See at least [0319] “The APAS teaches reference points directly in a nominal "world" frame for the robot, or, in most cases, the APAS determines a reference frame to use in interfacing to a subsystem or group of points in the APAS”, [0325] “For example, the robot 4400 with the touch probe teach tool 4402 teaches the height relationship of the vial fingers on the syringe manipulator device to the vial scale platen”, Figure 2 (Carousels 210, 212), Figure 5 (Syringe Manipulator Device 500), Figure 6 (Liquid Waste Container 600), Figure 7 (IV Bag Parking Location 700), Figure 34 (Product Output Chute 3400), Figure 37A, Figure 37B (Printer System 3700), Figure 38 (Printer Platen 3800), and Figure 45); and
the interface object comprises at least one alignment feature of a known size and shape (See at least [0309] “The APAS controller commands the robot 4100 to move to various features in the APAS with known geometry”);
detecting the interface object using an optical sensor disposed on the end of arm tooling (See at least [0317] “The robot gripper grasps the touch probe teach tool 4402 and uses it for teaching” and Figure 45, as well as [0326] “In some implementations, alternative types of sensor probes, such as a beam sensor or a laser range sensor, are affixed to the end of the robot” and [0329] “In some implementations, the APAS uses vision techniques for autonomous teaching of the robot. For example, a camera mounted on the robot locates subsystem features”
Examiner furthermore notes, in the interest of compact prosecution, the broadest reasonable interpretation of the claim term, though Eliuk would appear to teach even a narrow interpretation, including what may have been intended by Applicant. Specifically, “disposed on” has been interpreted as meaning “affixed to”, as Applicant does not appear to use the phrasing “disposed on”, or any word starting with “dispos”, anywhere in their specification. The apparent support in Applicant’s specification, [0037], instead reads “an optical sensor (not shown) affixed to the end of arm tooling”. Applicant furthermore does not appear to clearly demarcate what portion of a robotic arm constitutes the “end of arm tooling”, and neither the claim nor the disclosure (no Drawing even being provided, as noted) specifies whether the optical sensor is directly or indirectly attached. Therefore, as the sensor may be directly or indirectly attached, and the exact nature of “end of arm tooling” is open to broad interpretation, the limitation is particularly broad); and
determining that the optical sensor of the end of arm tooling is offset from the interface object (See at least [0319] “FIG. 45 is an illustration of the touch probe teach tool 4402 in the process of autonomous point teaching. The touch probe teach tool 4402 finds, touches and teaches key points in the APAS”, [0320] “In some cases, the reference points may be considerably off nominal. For example, points in a new APAS for initialization with a set of "nominal" initial points will initially include points that are significantly off nominal”, [0325] “The APAS uses the robot 4400 with the touch probe teach tool 4402 to enable the robot itself to determine and update its interface relationships”, and Figure 45) …
determining at least one of a vertical position, a horizontal position, or an angular position of the end of arm tooling based on [a calibration] (See at least [0309], [0313], and [0319] again. Examiner notes that the claim merely recites “a vertical position, a horizontal position, or an angular position” and is highly non-specific. The APAS operates within a calibrated frame, which as stated in [0009] “During the compounding process the APAS may align needles with a vial seal opening so as to ensure repeated entry through the same vial puncture site via precise control of needle position, needle bevel orientation, and needle entry speed”. It would therefore be clear to one of ordinary skill in the art that the orientation and 3D coordinate position of the end effector are inherently known (which includes a vertical, horizontal, and third lateral or similarly titled coordinate direction)).
Eliuk does not explicitly teach, but Kato teaches:
…
based at least in part on a comparison between data from the optical sensor and the known size and shape of the alignment feature, wherein the comparison comprises:
comparing a current image of the interface object with an expected image;
detecting a size mismatch between the current image and the expected image; and
determining that a distance between the end of arm tooling and the interface object does not match an expected result based on the size mismatch; and
[determining a vertical position, a horizontal position, and an angular position of the end of arm tooling based on] the comparison
(See at least Column 1, Line 63 through Column 2, Line 20, “In the preferred embodiment of the invention, the calibration method includes the steps of positioning the robot arm at a first calibration position such that the camera views a target pattern, comparing the target pattern with a stored pattern, calculating a correction value M representing the difference between the programmed and actual positions of the camera, and incorporating the camera correction value M for robot positioning during a subsequent operational movement. Similarly, the method of calibrating the slit light unit includes the steps of displacing the robot to a second calibration position B+M so that the slit light unit directs a light beam on a second target and the camera receives a second target image, determining a light correction value N between a desired slit light image and an actual slit light image by comparing the perceived target image with a stored target image, and incorporating the light correction value N for robot positioning during a subsequent operational movement. Consequently, all subsequent positioning of the robot is offset from predetermined locations by a factor of M+N, where M represents sets of spatial coordinates (x,y, z, .theta.x, .theta.y, .theta.z) and N represents sets of spatial coordinates (x,y, z), added to the programmed coordinates for each robot movement”, Column 4, Line 9 – Line 21, “The arm 12 is moved to position the camera 18 and light unit 20 at second calibration position B, which is now modified to a position B+M. The image perceived, for example that shown in FIG. 5, is received by the control 22 which detects point 49 and calculates the current slit image position with respect to the desired slit image shown in FIG. 6. A correction factor N is calculated, which represents the difference between the desired and actual positions of the image 28 in FIGS. 6 and 5, respectively. Correction factor N is a function of x, y, z coordinates and is added to the programmed series of positions of the robot 10”, and Figures 3 – 6 which illustrate actual/current vs stored/expected images for comparison and Figure 7 for general flowchart of process).
…
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to compare actual and expected images to determine offset and/or size mismatch for calibration as taught by Kato in the system of Eliuk with a reasonable expectation of success. Such comparisons for calibration purposes are well understood and routine in the art of computer vision calibration techniques and would serve as a particular means to achieve the vision technique calibration disclosed by Eliuk in [0329].
Furthermore and alternatively, it would have been obvious to modify Eliuk to utilize one or more different types of probes, including both a physical contact sensor and an optically operative sensor, to provide redundant but distinct means of calibration.
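Examiner additionally notes, for clarity of the record only, that the M + N correction scheme quoted above from Kato may be sketched as follows (Python; the numeric correction values are hypothetical, while the structure, with M carrying six spatial coordinates from the camera/pattern comparison and N carrying three from the slit-light comparison, follows Column 1, Line 63 through Column 2, Line 20):

    import numpy as np

    # Hypothetical correction factors per Kato: M = (x, y, z, theta_x, theta_y,
    # theta_z) from the pattern comparison; N = (x, y, z) from the slit light.
    M = np.array([0.40, -0.20, 1.10, 0.010, -0.020, 0.005])
    N = np.array([0.10,  0.30, -0.50, 0.0,   0.0,    0.0])

    def corrected_pose(programmed_pose):
        # "all subsequent positioning of the robot is offset from predetermined
        # locations by a factor of M+N ... added to the programmed coordinates
        # for each robot movement" (Kato)
        return np.asarray(programmed_pose, dtype=float) + M + N

    print(corrected_pose([100.0, 50.0, 25.0, 0.0, 0.0, 0.0]))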
Regarding Claim 3, the combination of Eliuk and Kato teaches:
The method of performing location teaching of a robotic arm of claim 1,
Eliuk further teaches:
further comprising:
incrementing a position of the end of arm tooling with respect to the interface object along at least one axis (See at least [0320], “In some implementations, the APAS uses techniques that allow the touch probe teach tool 4402 to safely maneuver and feel its way around the reference points to teach” and [0321], “The APAS control software uses a plurality of algorithms to teach a reference frame and reference points. The touch probe teach tool 4402 iteratively feels out each of the points associated with a reference frame”); and
determining an actual position of the interface object relative to the robotic arm (See at least [0325] “The APAS uses the robot 4400 with the touch probe teach tool 4402 to enable the robot itself to determine and update its interface relationships”).
Regarding Claim 4, the combination of Eliuk and Kato teaches:
The method of performing location teaching of a robotic arm of claim 1,
Eliuk further teaches:
wherein:
determining an actual position of the interface object by
detecting alignment between the interface object and the optical sensor (See at least [0319] “FIG. 45 is an illustration of the touch probe teach tool 4402 in the process of autonomous point teaching. The touch probe teach tool 4402 finds, touches and teaches key points in the APAS. The APAS teaches reference points directly in a nominal "world" frame for the robot, or, in most cases, the APAS determines a reference frame to use in interfacing to a subsystem or group of points in the APAS” and [0320] “In some implementations, the APAS uses techniques that allow the touch probe teach tool 4402 to safely maneuver and feel its way around the reference points to teach. In some cases, the reference points may be considerably off nominal. For example, points in a new APAS for initialization with a set of "nominal" initial points will initially include points that are significantly off nominal”); and
recording 3-dimensional coordinates associated with a current position of the interface object (See at least [0309] “Direct taught points in a local coordinate reference frame represent the location of the interfaces”, [0313] “The taught reference point (measurement data) can be saved in a robot controller. Alternatively, the taught reference point is transferred manually or autonomously to the APAS control computer. Alternatively, the taught reference point is saved in both the robot controller and the APAS control computer”, [0314] “The process for teaching a reference frame is the same as the process for teaching reference points. In the case of reference frame teaching, the process uses three points as reference frame points. The process calculates the reference frame and saves the reference frame. Alternatively, the process saves the reference frame as a series of reference points where the process calculates the reference frame at run time”, and [0325] “In some implementations, the robot 4400 with the touch probe teach tool 4402 is a coordinate-measuring machine (CMM) device that measures the physical geometrical characteristics of an object”. Examiner notes that the specification makes no indication that the robot operates or measures in less than three dimensions, and clearly illustrates operation in three dimensions, and that the claim is not specific as to the nature, data structure, etc.).
Regarding Claim 6, the combination of Eliuk and Kato teaches:
The method of performing location teaching of a robotic arm of claim 1,
Eliuk further teaches:
wherein:
the optical sensor comprises one or both of an imaging device and a laser (See at least [0326] “In some implementations, alternative types of sensor probes, such as a beam sensor or a laser range sensor, are affixed to the end of the robot” and [0329] “In some implementations, the APAS uses vision techniques for autonomous teaching of the robot. For example, a camera mounted on the robot locates subsystem features”).
Regarding Claim 7, the combination of Eliuk and Kato teaches:
The method of performing location teaching of a robotic arm of claim 1,
Eliuk further teaches:
further comprising:
determining a distance and axial direction of the offset based on the optical sensor (See at least [0319] “FIG. 45 is an illustration of the touch probe teach tool 4402 in the process of autonomous point teaching. The touch probe teach tool 4402 finds, touches and teaches key points in the APAS”, [0320] “In some cases, the reference points may be considerably off nominal. For example, points in a new APAS for initialization with a set of "nominal" initial points will initially include points that are significantly off nominal” and [0325] “The APAS uses the robot 4400 with the touch probe teach tool 4402 to enable the robot itself to determine and update its interface relationships”, and Figure 45. As the tool described and illustrated teaches the robot locations of interface objects or points through contact at the probe tip, determining how a taught point is off nominal must include determining a distance and axial direction; the tool would otherwise not function as the probe described. Furthermore, an optical sensor such as a camera has been shown to perform the same operation and function.
Alternatively, see again recited portions of Kato with respect to Claim 1, in particular “where M represents sets of spatial coordinates (x,y, z, .theta.x, .theta.y, .theta.z)” which in relation to N provides a distance in x, y, z and an axial direction in theta.x, .theta.y, .theta.z).
Regarding Claim 8, the combination of Eliuk and Kato teaches:
The method of performing location teaching of a robotic arm of claim 1,
Eliuk further teaches:
further comprising:
determining an actual position of the interface object relative to the robotic arm; and
calibrating a controller of the robotic arm to know actual positions of each component of the sub-system based on at least the actual location of the interface object, the expected position of the interface object, and a known geometry of the sub-system (Examiner first notes that the nature of the basis of the calibration is not claimed and that “a known geometry of the subsystem” under the broadest reasonable interpretation of the term is particularly broad, for example a nominal or expected position of an object, or its relational position to another might be considered a “known geometry”. See at least [0320] “In some cases, the reference points may be considerably off nominal. For example, points in a new APAS for initialization with a set of "nominal" initial points will initially include points that are significantly off nominal” (emphasis added), [0325] “The APAS uses the robot 4400 with the touch probe teach tool 4402 to enable the robot itself to determine and update its interface relationships. Additionally, the APAS uses the robot 4400 with the touch probe teach tool 4402 in a local reference frame as a measuring device to teach interface relationships between other items in the APAS. In some implementations, the robot 4400 with the touch probe teach tool 4402 is a coordinate-measuring machine (CMM) device that measures the physical geometrical characteristics of an object. For example, the robot 4400 with the touch probe teach tool 4402 teaches the height relationship of the vial fingers on the syringe manipulator device to the vial scale platen. The APAS uses this relationship to control the robot when dropping off a vial on the vial scale platen” (emphasis added)).
Regarding Claim 9, Eliuk teaches:
A medication dosing system, comprising:
a mechanical mounting structure (See at least [0318] “robot mounting flange”, Figure 2, Figure 33A and Figure 33B) comprising a baseplate (the cabinet floor from which 218 emerges, as illustrated in Figure 2) and a sidewall that extends vertically relative to the baseplate (See various sidewalls as illustrated in at least Figure 2, Figure 33A, and Figure 33B);
at least one sub-system coupled with the sidewall (See at least Figure 2 (Carousels 210, 212), Figure 5 (Syringe Manipulator Device 500), Figure 6 (Liquid Waste Container 600), Figure 7 (IV Bag Parking Location 700), Figure 34 (Product Output Chute 3400), Figure 37A, Figure 37B (Printer System 3700), Figure 38 (Printer Platen 3800), Figure 33A, Figure 33B, and Figure 45);
an interface object affixed to the at least one sub-system (See at least [0319] “The APAS teaches reference points directly in a nominal "world" frame for the robot, or, in most cases, the APAS determines a reference frame to use in interfacing to a subsystem or group of points in the APAS”, [0325] “For example, the robot 4400 with the touch probe teach tool 4402 teaches the height relationship of the vial fingers on the syringe manipulator device to the vial scale platen”, [0059] “In some implementations, an automated pharmacy admixture system (APAS) includes sub-systems for automated fluid transfer operations among medicinal containers such as syringes, vials, and IV bags”, and Figure 45), the interface object comprising at least one alignment feature of a known size and shape (See at least [0309] “The APAS controller commands the robot 4100 to move to various features in the APAS with known geometry”);
a robotic arm coupled with the baseplate (See at least [0318] “robot mounting flange”, Figure 2, Figure 33A and Figure 33B), wherein the robotic arm comprises an end of arm tooling with an optical sensor (See at least [0326] “In some implementations, alternative types of sensor probes, such as a beam sensor or a laser range sensor, are affixed to the end of the robot. A sensor is mounted on a subsystem or interface point and the robot includes gripper fingers”);
a processor; and
a memory having instructions stored thereon that, when executed by the processor (See at least [0264] “The operations 3600 can be performed by a processor that executes instructions stored in a computer-readable medium”), cause the medication dosing system to:
maneuver the end of arm tooling to an expected position of the interface object (See at least [0329] “The APAS controller uses the location of the subsystem features to teach points or to refine teach points”);
detect the interface object using the optical sensor of the end of arm tooling (See at least [0317] “The robot gripper grasps the touch probe teach tool 4402 and uses it for teaching” and Figure 45, as well as [0326] “In some implementations, alternative types of sensor probes, such as a beam sensor or a laser range sensor, are affixed to the end of the robot” and [0329] “In some implementations, the APAS uses vision techniques for autonomous teaching of the robot. For example, a camera mounted on the robot locates subsystem features”); and
determine that the optical sensor of the end of arm tooling is offset from the interface object (See at least [0319] “FIG. 45 is an illustration of the touch probe teach tool 4402 in the process of autonomous point teaching. The touch probe teach tool 4402 finds, touches and teaches key points in the APAS”, [0320] “In some cases, the reference points may be considerably off nominal. For example, points in a new APAS for initialization with a set of "nominal" initial points will initially include points that are significantly off nominal”, [0325] “The APAS uses the robot 4400 with the touch probe teach tool 4402 to enable the robot itself to determine and update its interface relationships”, and Figure 45) …
determine at least one of a vertical position, a horizontal position, or an angular position of the end of arm tooling based on [a calibration] (See at least [0309], [0313], and [0319] again. Examiner notes that the claim merely recites “a vertical position, a horizontal position, or an angular position” and is highly non-specific. The APAS operates within a calibrated frame, which as stated in [0009] “During the compounding process the APAS may align needles with a vial seal opening so as to ensure repeated entry through the same vial puncture site via precise control of needle position, needle bevel orientation, and needle entry speed”. It would therefore be clear to one of ordinary skill in the art that the orientation and 3D coordinate position of the end effector are inherently known (which includes a vertical, horizontal, and third lateral or similarly titled coordinate direction)).
Eliuk does not explicitly teach, but Kato teaches:
…
based at least in part on a comparison between data from the optical sensor and the known size and shape of the alignment feature, wherein the comparison comprises:
comparing a current image of the interface object with an expected image;
detecting a size mismatch between the current image and the expected image; and
determining that a distance between the end of arm tooling and the interface object does not match an expected result based on the size mismatch; and
[determining a vertical position, a horizontal position, and an angular position of the end of arm tooling based on] the comparison
(See at least Column 1, Line 63 through Column 2, Line 20, “In the preferred embodiment of the invention, the calibration method includes the steps of positioning the robot arm at a first calibration position such that the camera views a target pattern, comparing the target pattern with a stored pattern, calculating a correction value M representing the difference between the programmed and actual positions of the camera, and incorporating the camera correction value M for robot positioning during a subsequent operational movement. Similarly, the method of calibrating the slit light unit includes the steps of displacing the robot to a second calibration position B+M so that the slit light unit directs a light beam on a second target and the camera receives a second target image, determining a light correction value N between a desired slit light image and an actual slit light image by comparing the perceived target image with a stored target image, and incorporating the light correction value N for robot positioning during a subsequent operational movement. Consequently, all subsequent positioning of the robot is offset from predetermined locations by a factor of M+N, where M represents sets of spatial coordinates (x,y, z, .theta.x, .theta.y, .theta.z) and N represents sets of spatial coordinates (x,y, z), added to the programmed coordinates for each robot movement”, Column 4, Line 9 – Line 21, “The arm 12 is moved to position the camera 18 and light unit 20 at second calibration position B, which is now modified to a position B+M. The image perceived, for example that shown in FIG. 5, is received by the control 22 which detects point 49 and calculates the current slit image position with respect to the desired slit image shown in FIG. 6. A correction factor N is calculated, which represents the difference between the desired and actual positions of the image 28 in FIGS. 6 and 5, respectively. Correction factor N is a function of x, y, z coordinates and is added to the programmed series of positions of the robot 10”, and Figures 3 – 6 which illustrate actual/current vs stored/expected images for comparison and Figure 7 for general flowchart of process).
…
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to compare actual and expected images to determine offset and/or size mismatch for calibration as taught by Kato in the system of Eliuk with a reasonable expectation of success. Such comparisons for calibration purposes are well understood and routine in the art of computer vision calibration techniques and would serve as a particular means to achieve the vision technique calibration disclosed by Eliuk in [0329].
Regarding Claim 10, the combination of Eliuk and Kato teaches:
The medication dosing system of claim 9,
Eliuk further teaches:
wherein:
the interface object is positioned proximate an interaction point of the sub-system with which the robotic arm is configured to interact during operating of the medication dosing system (See at least [0319] “The APAS teaches reference points directly in a nominal "world" frame for the robot, or, in most cases, the APAS determines a reference frame to use in interfacing to a subsystem or group of points in the APAS”).
Regarding Claim 11, the combination of Eliuk and Kato teaches:
The medication dosing system of claim 9,
Eliuk further teaches:
wherein:
the interface object comprises a variable cross-sectional shape (Eliuk has already been shown to teach this limitation, at least as broadly as presently claimed. For example, Eliuk teaches the use of a variety of objects as “interface objects”, and any real three-dimensional object can have a variable cross-sectional shape, as such an object may be cross-sectioned in any of an infinite number of axial directions and the nature of the variability is not claimed. See at least [0319] “The APAS teaches reference points directly in a nominal "world" frame for the robot, or, in most cases, the APAS determines a reference frame to use in interfacing to a subsystem or group of points in the APAS”, [0325] “For example, the robot 4400 with the touch probe teach tool 4402 teaches the height relationship of the vial fingers on the syringe manipulator device to the vial scale platen”, [0059] “In some implementations, an automated pharmacy admixture system (APAS) includes sub-systems for automated fluid transfer operations among medicinal containers such as syringes, vials, and IV bags”, and Figure 45).
Regarding Claim 15, the combination of Eliuk and Kato teaches:
The medication dosing system of claim 9,
Eliuk does not explicitly teach, but Kato teaches:
wherein:
comparing the current image of the interface object with an expected image comprises detecting an offset between the current image and the expected image (See at least Column 1, Line 63 through Column 2, Line 20, “In the preferred embodiment of the invention, the calibration method includes the steps of positioning the robot arm at a first calibration position such that the camera views a target pattern, comparing the target pattern with a stored pattern, calculating a correction value M representing the difference between the programmed and actual positions of the camera, and incorporating the camera correction value M for robot positioning during a subsequent operational movement. Similarly, the method of calibrating the slit light unit includes the steps of displacing the robot to a second calibration position B+M so that the slit light unit directs a light beam on a second target and the camera receives a second target image, determining a light correction value N between a desired slit light image and an actual slit light image by comparing the perceived target image with a stored target image, and incorporating the light correction value N for robot positioning during a subsequent operational movement. Consequently, all subsequent positioning of the robot is offset from predetermined locations by a factor of M+N, where M represents sets of spatial coordinates (x,y, z, .theta.x, .theta.y, .theta.z) and N represents sets of spatial coordinates (x,y, z), added to the programmed coordinates for each robot movement”, Column 4, Line 9 – Line 21, “The arm 12 is moved to position the camera 18 and light unit 20 at second calibration position B, which is now modified to a position B+M. The image perceived, for example that shown in FIG. 5, is received by the control 22 which detects point 49 and calculates the current slit image position with respect to the desired slit image shown in FIG. 6. A correction factor N is calculated, which represents the difference between the desired and actual positions of the image 28 in FIGS. 6 and 5, respectively. Correction factor N is a function of x, y, z coordinates and is added to the programmed series of positions of the robot 10”, and Figures 3 – 6 which illustrate actual/current vs stored/expected images for comparison and Figure 7 for general flowchart of process).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to compare actual and expected images to determine offset and/or size mismatch for calibration as taught by Kato in the system of Eliuk with a reasonable expectation of success. Such comparisons for calibration purposes are well understood and routine in the art of computer vision calibration techniques and would serve as a particular means to achieve the vision technique calibration disclosed by Eliuk in [0329].
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Eliuk et al. in view of Kato and further in view of Stoicescu et al. (US 20130183148 A1).
Regarding Claim 16, the combination of Eliuk and Kato teaches:
A medication dosing system, comprising:
a mechanical mounting structure (See at least [0318] “robot mounting flange”, Figure 2, Figure 33A and Figure 33B) comprising a baseplate (the cabinet floor from which 218 emerges, as illustrated in Figure 2) and a sidewall that extends vertically relative to the baseplate (See various sidewalls as illustrated in at least Figure 2, Figure 33A, and Figure 33B),
…
at least one sub-system coupled with the sidewall (See at least Figure 2 (Carousels 210, 212), Figure 5 (Syringe Manipulator Device 500), Figure 6 (Liquid Waste Container 600), Figure 7 (IV Bag Parking Location 700), Figure 34 (Product Output Chute 3400), Figure 37A, Figure 37B (Printer System 3700), Figure 38 (Printer Platen 3800), Figure 33A, Figure 33B, and Figure 45);
an interface object affixed to the at least one sub-system (See at least [0319] “The APAS teaches reference points directly in a nominal "world" frame for the robot, or, in most cases, the APAS determines a reference frame to use in interfacing to a subsystem or group of points in the APAS”, [0325] “For example, the robot 4400 with the touch probe teach tool 4402 teaches the height relationship of the vial fingers on the syringe manipulator device to the vial scale platen”, [0059] “In some implementations, an automated pharmacy admixture system (APAS) includes sub-systems for automated fluid transfer operations among medicinal containers such as syringes, vials, and IV bags”, and Figure 45), the interface object comprising at least one alignment feature of a known size and shape (See at least [0309] “The APAS controller commands the robot 4100 to move to various features in the APAS with known geometry”); and
a robotic arm (See at least robotic arm 218) coupled with the baseplate (See at least [0318] “robot mounting flange”, Figure 2, Figure 33A and Figure 33B), wherein:
the robotic arm comprises an end of arm tooling with an optical sensor (See at least [0326] “In some implementations, alternative types of sensor probes, such as a beam sensor or a laser range sensor, are affixed to the end of the robot. A sensor is mounted on a subsystem or interface point and the robot includes gripper fingers”) that is configured to determine an actual location of the interface object (See at least [0319] “FIG. 45 is an illustration of the touch probe teach tool 4402 in the process of autonomous point teaching. The touch probe teach tool 4402 finds, touches and teaches key points in the APAS”, [0320] “In some cases, the reference points may be considerably off nominal. For example, points in a new APAS for initialization with a set of "nominal" initial points will initially include points that are significantly off nominal”, [0325] “The APAS uses the robot 4400 with the touch probe teach tool 4402 to enable the robot itself to determine and update its interface relationships”, and Figure 45);
the robotic arm is translatable in three dimensions to move the optical sensor relative to the interface object (See at least [0320] “In some implementations, the APAS uses techniques that allow the touch probe teach tool 4402 to safely maneuver and feel its way around the reference points to teach”, [0065] “The processing chamber 204 includes a multiple degree of freedom robotic arm 218, and the robotic arm 218 further includes a gripper that can be used, for example, to pick items from a pocket on a rack or to grasp items within the APAS 100 for manipulation”, and Figure 2, Figure 33A and Figure 33B); and
the robotic arm is configured to:
determine that the optical sensor of the end of arm tooling is offset from the interface object (See at least [0319] “FIG. 45 is an illustration of the touch probe teach tool 4402 in the process of autonomous point teaching. The touch probe teach tool 4402 finds, touches and teaches key points in the APAS”, [0320] “In some cases, the reference points may be considerably off nominal. For example, points in a new APAS for initialization with a set of "nominal" initial points will initially include points that are significantly off nominal”, [0325] “The APAS uses the robot 4400 with the touch probe teach tool 4402 to enable the robot itself to determine and update its interface relationships”, and Figure 45) …
determine at least one of a vertical position, a horizontal position, or an angular position of the end of arm tooling based on [a calibration] (See at least [0309], [0313], and [0319] again. Examiner notes that the claim merely recites “a vertical position, a horizontal position, or an angular position” and is highly non-specific. The APAS operates within a calibrated frame, which as stated in [0009] “During the compounding process the APAS may align needles with a vial seal opening so as to ensure repeated entry through the same vial puncture site via precise control of needle position, needle bevel orientation, and needle entry speed”. It would therefore be clear to one of ordinary skill in the art that the orientation and 3D coordinate position of the end effector are inherently known (which includes a vertical, horizontal, and third lateral or similarly titled coordinate direction)).
Eliuk does not explicitly teach, but Stoicescu teaches:
…
wherein the mechanical mounting structure has known dimensions to within 0.010 inches of dimensions set forth in a design specification of the mechanical mounting structure (See at least [0049] “The dimension provided in the Tables are subject to typical manufacturing tolerances of +/-0.010 inches on surface profile which have been considered and deemed acceptable to maintain the mechanical and aerodynamic function of these components. Thus, the mechanical and aerodynamic functions of the component are not impaired by manufacturing imperfections and tolerances, which in different embodiments may be greater or lesser than the values set forth in the disclosed Tables. As appreciated by those skilled in the art, manufacturing tolerances may be determined to achieve a desired mean and standard deviation of manufactured components in relation to the ideal component profile points set forth in the disclosed Tables”);
…
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have manufacturing tolerances of +/-0.010 inches, as taught by Stoicescu, when manufacturing the mechanical mounting structure of Eliuk with a reasonable expectation of success. +/-0.010 inches is a common manufacturing tolerance, as illustrated by Stoicescu, and Eliuk provides CMM functions wherein tight dimensional tolerances are appropriate to ensure accurate measurements and robotic control based thereon.
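By way of illustration only (the dimension names and measured values below are hypothetical and not drawn from either reference), a tolerance of +/-0.010 inches amounts to a simple per-dimension check of manufactured dimensions against the design specification:

    # Hypothetical design-specification and as-measured dimensions, in inches.
    SPEC = {"baseplate_width": 24.000, "mount_hole_pitch": 6.500}
    MEASURED = {"baseplate_width": 23.994, "mount_hole_pitch": 6.508}
    TOL = 0.010  # +/- tolerance on each dimension, per Stoicescu [0049]

    within_tolerance = all(abs(MEASURED[k] - SPEC[k]) <= TOL for k in SPEC)
    print(within_tolerance)  # True: both deviations (0.006 and 0.008) fall within 0.010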
Eliuk does not explicitly teach, but Kato teaches:
…
based at least in part on a comparison between data from the optical sensor and the known size and shape of the alignment feature, wherein the comparison comprises:
comparing a current image of the interface object with an expected image;
detecting a size mismatch between the current image and the expected image; and
determining that a distance between the end of arm tooling and the interface object does not match an expected result based on the size mismatch; and
[determining a vertical position, a horizontal position, and an angular position of the end of arm tooling based on] the comparison
(See at least Column 1, Line 63 through Column 2, Line 20, “In the preferred embodiment of the invention, the calibration method includes the steps of positioning the robot arm at a first calibration position such that the camera views a target pattern, comparing the target pattern with a stored pattern, calculating a correction value M representing the difference between the programmed and actual positions of the camera, and incorporating the camera correction value M for robot positioning during a subsequent operational movement. Similarly, the method of calibrating the slit light unit includes the steps of displacing the robot to a second calibration position B+M so that the slit light unit directs a light beam on a second target and the camera receives a second target image, determining a light correction value N between a desired slit light image and an actual slit light image by comparing the perceived target image with a stored target image, and incorporating the light correction value N for robot positioning during a subsequent operational movement. Consequently, all subsequent positioning of the robot is offset from predetermined locations by a factor of M+N, where M represents sets of spatial coordinates (x,y, z, .theta.x, .theta.y, .theta.z) and N represents sets of spatial coordinates (x,y, z), added to the programmed coordinates for each robot movement”, Column 4, Line 9 – Line 21, “The arm 12 is moved to position the camera 18 and light unit 20 at second calibration position B, which is now modified to a position B+M. The image perceived, for example that shown in FIG. 5, is received by the control 22 which detects point 49 and calculates the current slit image position with respect to the desired slit image shown in FIG. 6. A correction factor N is calculated, which represents the difference between the desired and actual positions of the image 28 in FIGS. 6 and 5, respectively. Correction factor N is a function of x, y, z coordinates and is added to the programmed series of positions of the robot 10”, and Figures 3 – 6 which illustrate actual/current vs stored/expected images for comparison and Figure 7 for general flowchart of process).
…
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to compare actual and expected images to determine offset and/or size mismatch for calibration as taught by Kato in the system of Eliuk or Eliuk in combination with Stoicescu with a reasonable expectation of success. Such comparisons for calibration purposes are well understood and routine in the art of computer vision calibration techniques and would serve as a particular means to achieve the vision technique calibration disclosed by Eliuk in [0329].
Claims 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Eliuk et al. in view of Stoicescu et al., further in view of Kato and Eliuk itself.
Regarding Claim 17, the combination of Eliuk, Stoicescu, and Kato teaches:
The medication dosing system of claim 16,
Eliuk further teaches, in a combination of disclosed teachings:
wherein:
the at least one alignment feature comprises a computer-readable pattern that enables the optical sensor to align the end of arm tooling with the interface object (Examiner notes that this appears to be taught as a separate embodiment or set of teachings. See at least [0329] “The APAS controller uses the location of the subsystem features to teach points or to refine teach points. In another example, a camera on a subsystem is used to teach robot or other interface positions. The robot gripper fingers include fiducial marks that enable the gripper fingers to be located in the field of view of a camera included a syringe capper station. The APAS controller uses this information to refine the robot position in the field of view of the camera to increase the accuracy of syringe capping. FIGS. 57-62 of previously incorporated by reference U.S. patent application Ser. No. 11/389,995, entitled "Automated Pharmacy Admixture System," and filed by Eliuk et al. on Mar. 27, 2006 show a syringe capping station”).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to combine the features, embodiments, teachings, etc. of Eliuk such that the fiducial marks of the gripper fingers are applied to the interface object directly, with the pattern then used to align the end of arm tooling, with a reasonable expectation of success. Eliuk explicitly discloses that its features, components, etc. may be combined and supplemented. See at least [0340]. Such a combination would allow alignment of the optical sensor of the end of arm tooling with a subsystem.
Regarding Claim 20, Examiner believes that the combination of Eliuk, Stoicescu, and Kato has already been shown to teach an equivalent limitation with respect to Claim 17 above. Examiner finds the “fiducial marks” of Eliuk to read on a “decal”, particularly where the nature of the decal is not claimed with any particularity, and especially where Applicant’s specification does not appear to provide any details which might exclude a fiducial mark. Therefore, Claim 20 is presently rejected for the same reasons as presented with respect to Claim 17 above.
Claim 20 reads:
The medication dosing system of claim 16, wherein:
the interface object comprises a decal.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Eliuk et al. in view of Stoicescu et al., further in view of Kato, Eliuk itself, and Furness (US 5321353 A).
Regarding Claim 18, the combination of Eliuk, Kato, and Stoicescu teaches:
The medication dosing system of claim 17,
The combination of Eliuk, Stoicescu, and Kato does not explicitly teach, but Furness teaches:
wherein:
the optical sensor (camera 228, Figure 3) is configured to determine a horizontal distance (d0, d1, Figure 3) between the end of arm tooling and the interface object (Target 214, Figure 2B).
Examiner notes that Kato does disclose finding an offset, but whether it is considered as between the end of arm tooling and the interface object is subject to interpretation. Therefore, the above rejection is provided for greater clarity.
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to determine the horizontal distance between the end of arm tooling and the interface object to aid in object grasping, subsystem location calibration, interface object locating, and robot arm calibration. See at least Furness, Column 5, Lines 34 – 40: “However, coarse positioning, using these mechanical position determining devices, is not precise enough to facilitate rapid tape retrieval. This imprecision results from mechanical positioning variables such as belt stretch, friction, and tray tolerances. Accordingly, the AISS 200 uses camera 228 to fine tune the positioning of robotic tool 227”.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Eliuk et al. in view of Kato, further in view of Stoicescu et al. and Friedman et al. (US 20130096718 A1).
Regarding Claim 19, the combination of Eliuk, Kato, and Stoicescu teaches:
The medication dosing system of claim 16,
The combination of Eliuk, Kato, and Stoicescu does not teach, but Friedman teaches:
wherein:
the interface object comprises a generally conical profile (Examiner notes that this is particularly broad, as the rest of the nature of the “interface object” is not claimed with any particularity.
See at least Offset Target 118, [0043] “In the depicted embodiment, the first docking feature 116 comprises a conical (inclined) surface formed on an underside of the offset tool 114. … The second docking feature 120 of the offset target 118 may also include inclined surfaces, which are adapted to interface with the engagement surfaces of the first docking feature 116”, and [0044] “However, a conical surface or other inclined or curved surface shape may be employed as the second docking feature 120. … The docking features 116A, 120A may both include conical surfaces as shown in FIG. 1I, but matching such features may introduce later shift errors. … However, a configuration as shown in FIG. 1H including docking features 116A, 120A (or 116, 120 as shown in FIG. 1H) including a spherical surface on one member and a conical surface on the other provides excellent location precision”).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to use a conical profile as taught by Friedman in the interface object of Eliuk with a reasonable expectation of success. Such a shape would aid in determining offsets, particularly if used as part of a combined calibration technique involving both optical sensor calibration and direct physical interfacing via an end effector (See at least [0044] of Friedman “Any suitable geometry that allows for centering of the offset target 118 in the X and/or Y coordinate directions may be used”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW C GAMMON whose telephone number is (571)272-4919. The examiner can normally be reached M - F 10:00 - 6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ADAM MOTT can be reached on (571) 270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW C GAMMON/Examiner, Art Unit 3657
/ADAM R MOTT/Supervisory Patent Examiner, Art Unit 3657