DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Remarks
This Office action is responsive to the claims filed on 11/18/2024. Claims 1-10 are pending.
Priority
Applicant's claim of foreign priority to EP22179085.0, filed 06/15/2022, is acknowledged.
Information Disclosure Statement
The information disclosure statement filed on 11/18/2024 has been annotated and considered by the examiner.
Claim Objections
Claim 2 is objected to because of the following informalities: “so as to determine candidate regions of the exposed objects that meet exceed the graspability criteria” appears to be missing the word “or” between “meet” and “exceed.” Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 5-7, and 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Humayun et al. (US 20220016766 A1, cited in the IDS).
Regarding Claim 1, Humayun discloses:
A computer-implemented method, the method comprising: (See at least Figure 1)
obtaining a depth image, the depth image defining a plurality of objects arranged in a plurality of respective positions; (See at least Figure 1 via S100 and ¶0031 via "The imaging systems can determine one or more RGB images, depth images (e.g., pixel aligned with the RGB, wherein the RGB image and the depth image can be captured by the same or different sensor sets)" as well as ¶0052 via "Labelling an image can include: capturing an image by the imaging system;…" and Figures 2A, and 4-9)
identifying a set of objects of the plurality of objects that each define a respective surface that is exposed to an end effector of a robot that is configured to grasp the plurality of objects, so as to identify exposed objects; (See at least ¶0049 via "The scene is preferably a ‘dense’ object scene, which can include a plurality of overlapping objects (e.g., where one or more objects are occluded by another object within the scene; the object scene can include a first plurality of objects that partially occludes a second plurality of objects; etc.)" *Wherein the recognition that some objects are occluded by other objects indicates the respective surface that is not occluded/is exposed*)
determining a grasp location on each of the exposed objects, the grasp location defining an area of the respective exposed object at which the end effector contacts the exposed object so as to grasp the exposed object; (See at least Figure 1 and ¶0053 via "The grasp point can be selected based on the object parameters determined by the object detector (e.g., using an object selector), using heuristics (e.g., proximity to an edge of the object container, amount of occlusion, height, keypoint type or keypoint label, etc.).")
generating a probability map that includes respective grasp annotations at the grasp location on each of the exposed objects; (See at least ¶0040 via "The computing system can optionally include a depth enhancement network 152 which functions to generate a refined depth map from an image (e.g., such as a RGB image and/or input depth image)" and ¶0070 via "The graspability map preferably includes a grasp success probability for each image feature (e.g., pixel (i, j), superpixel, pixel block, pixel set, etc.), but can alternatively include a grasp failure probability, a grasp score, object parameters (e.g., wherein the network is trained based on the object parameter values for the grasp points; such as object surface normals), end effector parameters (e.g., wherein the network is trained based on the robotic manipulator parameters for the training grasps; such as gripper pose, gripper force, etc.), a confidence score (e.g., for the grasp score, grasp probability, object parameter, end effector parameter, etc.)". Additionally ¶0054 via the annotations: "The images can be labelled based on grasp outcome (e.g., grasp success or grasp failure) of an object at a point associated with a selected pixel (x, y) of the image (e.g., the physical point on an object can be mapped to the pixel in the image, the image pixel can be selected and mapped to the physical point on an object, etc.), a region of pixels, a coordinate position (e.g., sensor frame, cartesian frame, joint frame, etc.), detected object region, and/or other suitable image features/coordinates.")
the depth image and the probability map defining an annotated synthetic dataset; and (See at least ¶0072 via "The graspability map is preferably related to the object detections (e.g., output by the object detector) via the image (e.g., via the image features of the image), but can alternatively be related to the object detections through the physical scene (e.g., wherein both the object detections and the grasp scores are mapped to a 3D representation of the scene to determine object parameter-grasp score associations), be unrelated, or be otherwise related." and also ¶0035 via "The object detector is preferably trained on synthetic images (e.g., trained using a set of artificially-generated object scenes), but can alternatively be trained on images of real scenes and/or other image")
training a neural network, with the annotated synthetic dataset, to determine grasp locations on objects arranged in a plurality of configurations (See at least ¶0073 via "The graspability network can be trained using supervised learning (e.g., using the outcome-labelled grasp points in the images as the labeled dataset), unsupervised learning, reinforcement learning (e.g., by grasping at the scene and getting a reward whenever the grasp was successful), and/or otherwise trained" and ¶0075 via "The graspability network is preferably trained based on the labelled images. The labelled images can include: the image (e.g., RGB, RGB-D, RGB and point cloud, etc.), grasp point (e.g., the image features depicting a 3D physical point to grasp in the scene), and grasp outcome; and optionally the object parameters (e.g., object pose, surface normal, etc.), effector parameters (e.g., end effector pose, grasp pose, etc.), and/or other information. In particular, the graspability network is trained to predict the outcome of a grasp attempt at the grasp point, given the respective image as the input.").
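For purposes of clarifying the mapping above only, the per-pixel grasp success probabilities of Humayun's graspability map (¶0070) and the grasp annotations of ¶0054 can be sketched as follows. This minimal Python sketch is illustrative, not drawn from the reference or the claimed invention; all identifiers, array shapes, and the Gaussian form of the annotation are hypothetical assumptions.

```python
# Illustrative sketch only: a per-pixel grasp "probability map" with a
# Gaussian annotation at each grasp location. Hypothetical names/shapes;
# not the method of Humayun or of the application under examination.
import numpy as np

def make_probability_map(depth_image: np.ndarray,
                         grasp_points: list[tuple[int, int]],
                         sigma: float = 5.0) -> np.ndarray:
    """Return an HxW map with a Gaussian grasp annotation at each grasp point."""
    h, w = depth_image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    prob_map = np.zeros((h, w), dtype=np.float32)
    for (y, x) in grasp_points:
        prob_map = np.maximum(
            prob_map,
            np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / (2 * sigma ** 2)))
    return prob_map

# Example: one synthetic depth image paired with its probability map would
# together constitute one sample of an annotated (synthetic) dataset.
depth = np.random.rand(64, 64).astype(np.float32)
annotations = make_probability_map(depth, [(20, 30), (45, 10)])
```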
Regarding Claim 2, Humayun discloses the method as recited in Claim 1.
Furthermore, Humayun discloses: the method further comprising: comparing the exposed objects to graspability criteria that is based on the end effector, so as to determine candidate regions of the exposed objects that meet exceed the graspability criteria (See at least ¶0070 via "The graspability network can output a graspability map (e.g., a grasp heatmap), grasp score (e.g., per pixel, per object, etc.), pixel selection, and/or any other suitable information. The graspability map preferably includes a grasp success probability for each image feature (e.g., pixel (i, j), superpixel, pixel block, pixel set, etc.), but can alternatively include a grasp failure probability, a grasp score, object parameters (e.g., wherein the network is trained based on the object parameter values for the grasp points; such as object surface normals), end effector parameters (e.g., wherein the network is trained based on the robotic manipulator parameters for the training grasps; such as gripper pose, gripper force, etc.), a confidence score (e.g., for the grasp score, grasp probability, object parameter, end effector parameter, etc.)" as well as ¶0054 via "The grasp point can be selected based on the object parameters determined by the object detector (e.g., using an object selector), using heuristics (e.g., proximity to an edge of the object container, amount of occlusion, height, keypoint type or keypoint label, etc.)" as well as ¶0077 via " The selected grasp point can be the point with the highest probability of success (e.g., if there is a tie, randomly select a point with the highest probability), a point with more than a threshold probability of success (e.g., more than 90%, 80%, 70%, 60%, 40%, 30%, etc.)").
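Similarly, the threshold-based selection Humayun describes at ¶0077 (a point "with more than a threshold probability of success") can be illustrated by a short sketch. Again, the function names and the threshold value are hypothetical and are not the reference's implementation.

```python
# Illustrative sketch only: selecting candidate pixels whose predicted grasp
# success meets or exceeds a threshold (cf. Humayun ¶0077). Hypothetical names.
import numpy as np

def candidate_regions(prob_map: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Return (row, col) pixels whose grasp success probability >= threshold."""
    return np.argwhere(prob_map >= threshold)

def best_grasp_point(prob_map: np.ndarray) -> tuple[int, int]:
    """Return the pixel with the highest grasp success probability."""
    return tuple(np.unravel_index(np.argmax(prob_map), prob_map.shape))
```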
Regarding Claim 5, Humayun discloses the method as recited in Claim 1.
Furthermore, Humayun discloses: wherein the plurality of configurations includes objects positioned in a container so as to at least partially be stacked on top of each other, the objects defining different shapes and sizes as compared to each other (See at least ¶0049 via "The scene is preferably a ‘dense’ object scene, which can include a plurality of overlapping objects (e.g., where one or more objects are occluded by another object within the scene; the object scene can include a first plurality of objects that partially occludes a second plurality of objects; etc.). In a specific example, the vertical (top down) projection of a first object partially overlaps a second object within the scene…The objects within the scene can be homogeneous (e.g., identical and/or duplicative instances of a particular type of object; same object class—cylinders, spheres, similar pill bottles with different labels, etc.) or heterogenous.". Additionally see at least Figures 2A-2B which depict different objects with different shapes/sizes).
Regarding Claim 6, Humayun discloses:
A system comprising a robot defining an end effector configured to grasp objects, the system further comprising: a processor; and a memory storing instructions that, when executed by the processor, configure the system to: (See at least Figures 2A-2B and ¶0124)
obtain a depth image, the depth image defining a plurality of objects arranged in a plurality of respective positions; (See at least Figure 1 via S100 and ¶0031 via "The imaging systems can determine one or more RGB images, depth images (e.g., pixel aligned with the RGB, wherein the RGB image and the depth image can be captured by the same or different sensor sets)" as well as ¶0052 via "Labelling an image can include: capturing an image by the imaging system;…" and Figures 2A, and 4-9)
identify a set of objects of the plurality of objects that each define a respective surface that is exposed to the end effector of the robot, so as to identify exposed objects; (See at least ¶0049 via "The scene is preferably a ‘dense’ object scene, which can include a plurality of overlapping objects (e.g., where one or more objects are occluded by another object within the scene; the object scene can include a first plurality of objects that partially occludes a second plurality of objects; etc.)" *Wherein the recognition that some objects are occluded by other objects indicates the respective surface that is not occluded/is exposed*)
determine a grasp location on each of the exposed objects, the grasp location defining an area of the respective exposed object at which the end effector contacts the exposed object so as to grasp the exposed object; (See at least Figure 1 and ¶0053 via "The grasp point can be selected based on the object parameters determined by the object detector (e.g., using an object selector), using heuristics (e.g., proximity to an edge of the object container, amount of occlusion, height, keypoint type or keypoint label, etc.).")
generate a probability map that includes respective grasp annotations at the grasp location on each of the exposed objects; (See at least ¶0040 via "The computing system can optionally include a depth enhancement network 152 which functions to generate a refined depth map from an image (e.g., such as a RGB image and/or input depth image)" and ¶0070 via "The graspability map preferably includes a grasp success probability for each image feature (e.g., pixel (i, j), superpixel, pixel block, pixel set, etc.), but can alternatively include a grasp failure probability, a grasp score, object parameters (e.g., wherein the network is trained based on the object parameter values for the grasp points; such as object surface normals), end effector parameters (e.g., wherein the network is trained based on the robotic manipulator parameters for the training grasps; such as gripper pose, gripper force, etc.), a confidence score (e.g., for the grasp score, grasp probability, object parameter, end effector parameter, etc.)". Additionally ¶0054 via the annotations: "The images can be labelled based on grasp outcome (e.g., grasp success or grasp failure) of an object at a point associated with a selected pixel (x, y) of the image (e.g., the physical point on an object can be mapped to the pixel in the image, the image pixel can be selected and mapped to the physical point on an object, etc.), a region of pixels, a coordinate position (e.g., sensor frame, cartesian frame, joint frame, etc.), detected object region, and/or other suitable image features/coordinates.")
the depth image and the probability map defining an annotated synthetic dataset; and (See at least ¶0072 via "The graspability map is preferably related to the object detections (e.g., output by the object detector) via the image (e.g., via the image features of the image), but can alternatively be related to the object detections through the physical scene (e.g., wherein both the object detections and the grasp scores are mapped to a 3D representation of the scene to determine object parameter-grasp score associations), be unrelated, or be otherwise related." and also ¶0035 via "The object detector is preferably trained on synthetic images (e.g., trained using a set of artificially-generated object scenes), but can alternatively be trained on images of real scenes and/or other image")
train a neural network, with the annotated synthetic dataset, to determine grasp locations on objects arranged in a plurality of configurations (See at least ¶0073 via "The graspability network can be trained using supervised learning (e.g., using the outcome-labelled grasp points in the images as the labeled dataset), unsupervised learning, reinforcement learning (e.g., by grasping at the scene and getting a reward whenever the grasp was successful), and/or otherwise trained" and ¶0075 via "The graspability network is preferably trained based on the labelled images. The labelled images can include: the image (e.g., RGB, RGB-D, RGB and point cloud, etc.), grasp point (e.g., the image features depicting a 3D physical point to grasp in the scene), and grasp outcome; and optionally the object parameters (e.g., object pose, surface normal, etc.), effector parameters (e.g., end effector pose, grasp pose, etc.), and/or other information. In particular, the graspability network is trained to predict the outcome of a grasp attempt at the grasp point, given the respective image as the input.").
Regarding Claim 7, Humayun discloses the system as recited in Claim 6.
Furthermore, Humayun discloses: the memory further storing instructions that, when executed by the processor, further configure the system to: compare the exposed objects to graspability criteria that is based on the end effector, so as to determine the candidate regions of the exposed objects that meet or exceed the graspability criteria (See at least ¶0070 via "The graspability network can output a graspability map (e.g., a grasp heatmap), grasp score (e.g., per pixel, per object, etc.), pixel selection, and/or any other suitable information. The graspability map preferably includes a grasp success probability for each image feature (e.g., pixel (i, j), superpixel, pixel block, pixel set, etc.), but can alternatively include a grasp failure probability, a grasp score, object parameters (e.g., wherein the network is trained based on the object parameter values for the grasp points; such as object surface normals), end effector parameters (e.g., wherein the network is trained based on the robotic manipulator parameters for the training grasps; such as gripper pose, gripper force, etc.), a confidence score (e.g., for the grasp score, grasp probability, object parameter, end effector parameter, etc.)" as well as ¶0054 via "The grasp point can be selected based on the object parameters determined by the object detector (e.g., using an object selector), using heuristics (e.g., proximity to an edge of the object container, amount of occlusion, height, keypoint type or keypoint label, etc.)" as well as ¶0077 via " The selected grasp point can be the point with the highest probability of success (e.g., if there is a tie, randomly select a point with the highest probability), a point with more than a threshold probability of success (e.g., more than 90%, 80%, 70%, 60%, 40%, 30%, etc.)").
Regarding Claim 10, Humayun discloses the system as recited in Claim 6.
Furthermore, Humayun discloses: wherein the plurality of configurations includes objects positioned in a container so as to be at least partially stacked on top of each other, the objects defining different shapes and sizes as compared to each other (See at least ¶0049 via "The scene is preferably a ‘dense’ object scene, which can include a plurality of overlapping objects (e.g., where one or more objects are occluded by another object within the scene; the object scene can include a first plurality of objects that partially occludes a second plurality of objects; etc.). In a specific example, the vertical (top down) projection of a first object partially overlaps a second object within the scene…The objects within the scene can be homogeneous (e.g., identical and/or duplicative instances of a particular type of object; same object class—cylinders, spheres, similar pill bottles with different labels, etc.) or heterogenous.". Additionally see at least Figures 2A-2B which depict different objects with different shapes/sizes.).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3-4 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Humayun et al. (US 20220016766 A1, cited in the IDS) in view of Deacon et al. (US 20210171283 A1).
Regarding Claim 3, Humayun discloses the method as recited in Claim 2.
Furthermore, Humayun discloses: wherein the end effector defines a vacuum-based gripper, the method further comprising: (See at least ¶0029 via "In a first example, the end effector is a suction gripper").
However, although Humayun discloses the consideration of surface normals when determining grasp success probabilities (See at least ¶0052 via "The label can optionally include a label for: the object parameters for the point (e.g., as output by the object detector, such as the surface normal, a face tag, etc.)" as well as ¶0070), Humayun does not explicitly disclose the claimed planar scores.
Nevertheless, Deacon, which is directed towards a robotic system that calculates appropriate grasp points for picking objects, discloses: evaluating the candidate regions so as to determine a planar score associated with each candidate region, the planar scores indicative of a curvature defined by the respective candidate region (See at least ¶0094 via "Each principal curvature norm of each grasp point candidate is also compared to the curvature threshold calculated by the threshold calculating means 1321. In particular, if the curvature of the grasp point candidate is less than the curvature threshold then the grasp point candidate is retained as a grasp point candidate. In this way, grasp point candidates which have a low curvature are retained whilst those with high curvatures are discarded." as well as ¶0087 via "the final grasp point selected lies in an area of a segment which is flat (in other words, the grasp point candidate and surrounding points are of a relatively low curvature) relative to other grasp point candidates and which is close in distance to the centroid of its respective segment relative to other grasp point candidates").
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Humayun in view of Deacon's evaluation of principal curvature norms/flatness ratings (planar scores) at each grasp point (candidate region) in order to select a candidate grasp point that is most likely to yield a successful grasp based on the seal that would be formed between the suction gripper and the object geometry/curvature: "The use of a suction cup end effector permits the use of a single grasp point for grasping objects, which makes it well-suited to lifting individual items out of a tightly-packed array, where many other kinds of end effector would fail, due to the lack of space around the items for their protruding parts." [Deacon ¶0045].
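To make the "planar score" concept concrete, a minimal sketch of one flatness measure over a candidate depth patch follows. Deacon computes principal curvature norms (¶0094); the least-squares plane-fit residual below is merely an illustrative proxy for such a score (lower residual means flatter), with all identifiers hypothetical and not taken from either reference.

```python
# Illustrative sketch only: a "planar score" as the RMS residual of a
# least-squares plane fit over a depth patch. A proxy for curvature-based
# flatness, not Deacon's actual principal-curvature computation.
import numpy as np

def planar_score(depth_patch: np.ndarray) -> float:
    """Fit z = ax + by + c over the patch; return the RMS residual."""
    h, w = depth_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, depth_patch.ravel(), rcond=None)
    residual = depth_patch.ravel() - A @ coeffs
    return float(np.sqrt(np.mean(residual ** 2)))
```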
Regarding Claim 4, Modified Humayun discloses the method as recited in Claim 3.
Furthermore, Humayun discloses grasp annotations (See at least ¶0054 via the annotations: "The images can be labelled based on grasp outcome (e.g., grasp success or grasp failure) of an object at a point associated with a selected pixel (x, y) of the image…").
However, Humayun does not explicitly disclose comparing planar scores to a predetermined threshold. Nevertheless, Deacon discloses: the method further comprising: making a comparison of each planar score to a predetermined threshold; and based on the comparison, determining the grasp annotations associated with each exposed object (See at least ¶0094 via "Each principal curvature norm of each grasp point candidate is also compared to the curvature threshold calculated by the threshold calculating means 1321. In particular, if the curvature of the grasp point candidate is less than the curvature threshold then the grasp point candidate is retained as a grasp point candidate. In this way, grasp point candidates which have a low curvature are retained whilst those with high curvatures are discarded." *Wherein the grasp points with low curvature being retained is also determining grasp annotations. Additionally, see at least ¶0087 via "the final grasp point selected lies in an area of a segment which is flat (in other words, the grasp point candidate and surrounding points are of a relatively low curvature) relative to other grasp point candidates and which is close in distance to the centroid of its respective segment relative to other grasp point candidates").
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Humayun in view of Deacon's evaluation of principal curvature norms/flatness ratings (planar scores) at each grasp point (candidate region) in order to select a candidate grasp point that is most likely to yield a successful grasp based on the seal that would be formed between the suction gripper and the object geometry/curvature: "The use of a suction cup end effector permits the use of a single grasp point for grasping objects, which makes it well-suited to lifting individual items out of a tightly-packed array, where many other kinds of end effector would fail, due to the lack of space around the items for their protruding parts." [Deacon ¶0045].
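Likewise, the threshold comparison of Deacon ¶0094 (candidates with curvature below the threshold are retained; the rest are discarded) may be sketched as follows. The candidate list, curvature values, and threshold are hypothetical inputs for illustration only.

```python
# Illustrative sketch only: retain grasp point candidates whose curvature
# (planar score) falls below a predetermined threshold (cf. Deacon ¶0094).
def filter_by_planar_score(candidates: list[tuple[int, int]],
                           curvature_norms: list[float],
                           curvature_threshold: float) -> list[tuple[int, int]]:
    """Keep candidates whose curvature norm is below the threshold."""
    return [pt for pt, k in zip(candidates, curvature_norms)
            if k < curvature_threshold]

# Example: a flat candidate (low curvature) survives; a curved one is discarded.
kept = filter_by_planar_score([(10, 12), (30, 5)], [0.02, 0.9],
                              curvature_threshold=0.1)
assert kept == [(10, 12)]
```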
Regarding Claim 8, Humayun discloses the system as recited in Claim 7.
Furthermore, Humayun discloses: wherein the end effector defines a vacuum-based gripper, and the memory further stores instructions that, when executed by the processor, further configure the system to: (See at least ¶0029 via "In a first example, the end effector is a suction gripper". Additionally, see at least Figures 2A-2B and ¶0124)
However, although Humayun discloses the consideration of surface normals when determining grasp success probabilities (See at least ¶0052 via "The label can optionally include a label for: the object parameters for the point (e.g., as output by the object detector, such as the surface normal, a face tag, etc.)" as well as ¶0070), Humayun does not explicitly disclose the claimed planar scores.
Nevertheless, Deacon, which is directed towards a robotic system that calculates appropriate grasp points for picking objects, discloses: evaluate the candidate regions so as to determine a planar score associated with each candidate region, the planar score indicative of a curvature defined by the respective candidate region (See at least ¶0094 via "Each principal curvature norm of each grasp point candidate is also compared to the curvature threshold calculated by the threshold calculating means 1321. In particular, if the curvature of the grasp point candidate is less than the curvature threshold then the grasp point candidate is retained as a grasp point candidate. In this way, grasp point candidates which have a low curvature are retained whilst those with high curvatures are discarded." as well as ¶0087 via "the final grasp point selected lies in an area of a segment which is flat (in other words, the grasp point candidate and surrounding points are of a relatively low curvature) relative to other grasp point candidates and which is close in distance to the centroid of its respective segment relative to other grasp point candidates").
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Humayun in view of Deacon's evaluation of principal curvature norms/flatness ratings (planar scores) at each grasp point (candidate region) in order to select a candidate grasp point that is most likely to yield a successful grasp based on the seal that would be formed between the suction gripper and the object geometry/curvature: "The use of a suction cup end effector permits the use of a single grasp point for grasping objects, which makes it well-suited to lifting individual items out of a tightly-packed array, where many other kinds of end effector would fail, due to the lack of space around the items for their protruding parts." [Deacon ¶0045].
Regarding Claim 9, Modified Humayun discloses the system as recited in Claim 8.
Furthermore, Humayun discloses: the memory further storing instructions that, when executed by the processor, further configure the system to: (See at least Figures 2A-2B and ¶0124)
grasp annotations (See at least ¶0054 via the annotations: "The images can be labelled based on grasp outcome (e.g., grasp success or grasp failure) of an object at a point associated with a selected pixel (x, y) of the image…").
However, Humayun does not explicitly disclose comparing planar scores to a predetermined threshold. Nevertheless, Deacon discloses: make a comparison of each planar score to a predetermined threshold; and based on the comparison, determine the grasp annotations associated with each exposed object (See at least ¶0094 via "Each principal curvature norm of each grasp point candidate is also compared to the curvature threshold calculated by the threshold calculating means 1321. In particular, if the curvature of the grasp point candidate is less than the curvature threshold then the grasp point candidate is retained as a grasp point candidate. In this way, grasp point candidates which have a low curvature are retained whilst those with high curvatures are discarded." *Wherein the grasp points with low curvature being retained is also determining grasp annotations. Additionally, see at least ¶0087 via "the final grasp point selected lies in an area of a segment which is flat (in other words, the grasp point candidate and surrounding points are of a relatively low curvature) relative to other grasp point candidates and which is close in distance to the centroid of its respective segment relative to other grasp point candidates").
Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify Humayun in view of Deacon's evaluation of principal curvature norms/flatness ratings (planar scores) at each grasp point (candidate region) in order to select a candidate grasp point that is most likely to yield a successful grasp based on the seal that would be formed between the suction gripper and the object geometry/curvature: "The use of a suction cup end effector permits the use of a single grasp point for grasping objects, which makes it well-suited to lifting individual items out of a tightly-packed array, where many other kinds of end effector would fail, due to the lack of space around the items for their protruding parts." [Deacon ¶0045].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAYLA RENEE DOROS whose telephone number is (703) 756-1415. The examiner can normally be reached M-F, 8:00 AM-5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Lin, can be reached at (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.R.D./Examiner, Art Unit 3657
/ABBY LIN/ Supervisory Patent Examiner, Art Unit 3657