Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is in response to the amendment filed on 08-DEC-2025 in Application No. 18/129537. Claims 1-3, 5-13, and 15-20 are currently pending and have been examined. Claims 4 and 14 are cancelled. Claims 1-3, 5-13, and 15-20 are rejected as set forth below.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08-DEC-2025 has been entered.
Response to Amendment
The amendment filed on 08-DEC-2025 has been entered. Claims 1-3, 5-13, and 15-20 remain pending in the application.
Response to Arguments
Applicant’s arguments, filed 08-DEC-2025, with respect to the rejections of the claims under 35 U.S.C. 103 have been fully considered. The claim amendments change the scope of the rejections, and a new ground of rejection in view of Taamazyan (US 2022/0405506 A1) is set forth below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6-9, 11-13, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Taamazyan (US 2022/0405506 A1) in view of Otsuka (US 2017/0132824 A1).
Regarding claim 1, Taamazyan teaches: A robotic system comprising: a robotic arm comprising an end-effector (Figure 6A; element 24, 26); an illumination unit comprising a plurality of single-color light sources of different colors (element 43, 16; Paragraph [125, 127]); a structured-light projector to project codified light patterns onto a scene (Paragraph [118], "embodiments of the present disclosure are not limited thereto and may also include circumstances where one or more active light projector are included in the camera system, thereby forming an active camera system, where the active light projector may be configured to project structured light or a pattern onto the scene"); one or more cameras to capture pseudo-color images of the scene illuminated by the single-color light sources of different colors and images of the scene with the projected codified light patterns (element 10, 14); and a computer system comprising a processor and a storage device storing instructions that when executed by the processor cause the processor to perform a method (Paragraph [61]), the method comprising: determining a pose of a component of interest based on the pseudo-color images of the scene and the images of the scene with the projected codified light patterns (element 100; Figure 6B), wherein determining the pose comprises inputting the images to a neural network (Paragraph [63]) via a set of corresponding input channels (Paragraph [130], “In some embodiments, a demosaicing process is used to compute separate red, green, and blue channels from the raw data”); generating a motion plan for the end-effector (element 9; Paragraph [53, 72]) based on the determined pose of the component and a current pose of the end-effector (Figure 5); and controlling movement of the end-effector according to the motion plan to allow the end-effector to grasp the component of interest (element 11, 61, 63; Paragraph [54, 74]).
While Taamazyan teaches the limitations as stated above, it does not expressly disclose:
black-and-white cameras
illuminated, alternately, by the single-color light sources of different colors
wherein a respective pseudo-color image is captured when the scene is illuminated by a single-color light source of a corresponding color
concatenating multiple pseudo-color images of different colors in increasing wavelength order
However, Otsuka teaches: black-and-white cameras (Paragraph [53]) … illuminated, alternately, by the single-color light sources of different colors (Figure 2) … wherein a respective pseudo-color image is captured when the scene is illuminated by a single-color light source of a corresponding color (element S100; Figure 3; Figure 4) … concatenating multiple pseudo-color images of different colors in increasing wavelength order (Paragraph [54]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the vision-guided robotic gripper of Taamazyan, which grasps an object using light passed through different filters, filter angles, and configurations of cameras, to include the monochrome cameras and the imaging and image-processing method taught by Otsuka. Such a modification would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results: a vision-guided robotic gripper that grasps an object using alternating wavelength-band imaging and processing, filters, filter angles, and configurations of monochrome cameras.
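For illustration of the imaging scheme recited in the limitations above, the following is a minimal Python sketch, assuming a black-and-white camera and three single-color sources; the light-control and capture functions are hypothetical placeholders, not an API disclosed by Taamazyan or Otsuka.

```python
import numpy as np

# Hypothetical hardware hooks; these names are placeholders, not an API
# disclosed by either cited reference.
def set_light(color: str) -> None:
    pass  # enable only the named single-color source

def capture_mono() -> np.ndarray:
    return np.zeros((480, 640), dtype=np.uint8)  # stand-in for a B/W frame

# Single-color sources with assumed peak wavelengths in nanometers.
SOURCES = [("red", 630), ("blue", 470), ("green", 530)]

def capture_pseudo_color_stack() -> np.ndarray:
    """Illuminate the scene alternately with each single-color source,
    capture one black-and-white frame per color, and concatenate the
    pseudo-color images as input channels in increasing wavelength order."""
    frames = []
    for color, _wavelength in sorted(SOURCES, key=lambda s: s[1]):  # blue, green, red
        set_light(color)               # only this source illuminates the scene
        frames.append(capture_mono())  # pseudo-color image for this color
    return np.stack(frames, axis=0)    # (num_colors, H, W) network input
```

The wavelength-sorted loop is what fixes the channel order; any real system would replace the stubs with its own camera and lighting drivers.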
Regarding claim 2, Taamazyan further teaches: The robotic system of claim 1, wherein the method further comprises compensating for errors in the movement of the end-effector (Paragraph [75-76]).
Regarding claim 3, Taamazyan further teaches: The robotic system of claim 2, wherein compensating for errors in the movement of the end-effector comprises applying a machine-learning technique to determine a controller-desired pose corresponding to a camera-instructed pose of the end-effector (Paragraph [45, 50], "In some embodiments, the grasp points may be identified via machine learning based on successes and failures of pick attempts").
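As background on the kind of learned pose correction recited in claim 3, here is a minimal sketch using a least-squares affine fit as the machine-learning technique; the calibration data are synthetic assumptions, and the reference is not tied to this particular method.

```python
import numpy as np

# Synthetic calibration pairs (assumption for illustration): each pose is a
# 6-vector (x, y, z, roll, pitch, yaw). camera_poses[i] is the pose the
# cameras observed when the controller was commanded controller_poses[i].
rng = np.random.default_rng(0)
controller_poses = rng.uniform(-1.0, 1.0, size=(200, 6))
camera_poses = controller_poses * 1.02 + 0.01   # toy systematic error

def fit_correction(camera: np.ndarray, controller: np.ndarray) -> np.ndarray:
    """Least-squares affine map from a camera-instructed pose to the
    controller-desired pose that achieves it."""
    X = np.hstack([camera, np.ones((len(camera), 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, controller, rcond=None)
    return W

W = fit_correction(camera_poses, controller_poses)

def controller_desired(camera_instructed: np.ndarray) -> np.ndarray:
    # Pose to command so the end-effector lands on the camera-instructed pose.
    return np.append(camera_instructed, 1.0) @ W
```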
Regarding claim 6, Taamazyan further teaches: The robotic system of claim 1, wherein determining the pose further comprises generating a segmentation mask for an image of the scene based on the output of the neural network (element 402, 422; Paragraph [46, 63]).
Regarding claim 7, Taamazyan further teaches: The robotic system of claim 6, wherein the method further comprises: generating a three-dimensional (3D) point cloud of the component of interest (Paragraph [187], "According to these embodiments, the depth map of the object is converted to a point cloud") by overlaying the segmentation mask on the images of the scene with the projected codified light patterns (Paragraph [162-164], "At block 404, the pose estimator 100 engages in object-level correspondence of the objects identified in the segmentation masks… At block 406, the pose estimator 100 generates an output based on the object-level correspondence. The output may be, for example, a measure of disparity or an estimated depth (e.g., distance from the cameras 10, 30) of the object").
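To make the masked back-projection step concrete, the following is a generic pinhole-camera sketch, not Taamazyan's disclosed pipeline; the intrinsics and toy depth map are assumptions for illustration.

```python
import numpy as np

def depth_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project the depth pixels selected by a segmentation mask into
    a 3D point cloud in the camera frame (pinhole model)."""
    v, u = np.nonzero(mask)            # pixel rows and columns inside the mask
    z = depth[v, u]                    # depth recovered from the coded patterns
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack([x, y, z])  # (N, 3) points for the component

# Toy example: a 0.5 m plane with a small rectangular component mask.
depth = np.full((480, 640), 0.5)
mask = np.zeros((480, 640), dtype=bool)
mask[200:240, 300:360] = True
cloud = depth_to_point_cloud(depth, mask, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```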
Regarding claim 8, Taamazyan further teaches: The robotic system of claim 7, wherein the pose of the component of interest is determined based on the 3D point cloud and a geometric model of the component (Paragraph [187, 147]).
Regarding claim 9, Taamazyan further teaches: The robotic system of claim 6, wherein the neural network comprises a Mask Region-based Convolutional Neural Network (Mask R-CNN) (Paragraph [63, 162]).
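For reference, a Mask R-CNN segmentation stage can be exercised with the off-the-shelf torchvision model below; this illustrates only what such a network outputs and is not the network either reference trains (assumes torchvision >= 0.13).

```python
import torch
from torchvision.models.detection import (
    MaskRCNN_ResNet50_FPN_Weights,
    maskrcnn_resnet50_fpn,
)

# Pretrained Mask R-CNN; weights download on first use.
weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()

image = torch.rand(3, 480, 640)        # placeholder for a captured image
with torch.no_grad():
    output = model([image])[0]         # one prediction dict per input image

keep = output["scores"] > 0.5          # drop low-confidence detections
masks = output["masks"][keep] > 0.5    # (N, 1, H, W) boolean instance masks
```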
Regarding claim 11, Taamazyan teaches: A computer-implemented method for controlling a robotic arm, the method comprising: generating, by a robotic controller, an initial set of instructions to control the robotic arm to move an end-effector towards a component of interest in a work scene (Figure 4A; Paragraph [36]); in response to determining that the end-effector is within a vicinity of the component of interest (Paragraph [79]), configuring a plurality of single-color light sources of different colors to illuminate the work scene (element 43, 16; Paragraph [125, 127]); configuring a structured-light projector to project codified light patterns onto the work scene (Paragraph [118], "embodiments of the present disclosure are not limited thereto and may also include circumstances where one or more active light projector are included in the camera system, thereby forming an active camera system, where the active light projector may be configured to project structured light or a pattern onto the scene"); configuring one or more cameras to capture pseudo-color images of the work scene illuminated by the single-color light sources of different colors and images of the work scene with the projected codified light patterns (element 10, 14); determining a pose of the component of interest based on the pseudo-color images of the work scene and the images of the work scene with the projected codified light patterns (element 100; Figure 6B), wherein determining the pose comprises inputting the images to a neural network (Paragraph [63]) via a set of corresponding input channels (Paragraph [130], “In some embodiments, a demosaicing process is used to compute separate red, green, and blue channels from the raw data”); generating a set of refined instructions (element 9; Paragraph [53, 72]) based on the determined pose of the component and a current pose of the end-effector (Figure 5); and controlling, by the robotic controller, movement of the end-effector according to the set of refined instructions to allow the end-effector to grasp the component of interest (element 11, 61, 63; Paragraph [54, 74]).
While Taamazyan teaches the limitations as stated above, it does not expressly disclose:
black-and-white cameras
illuminated, alternately, by the single-color light sources of different colors
wherein a respective pseudo-color image is captured when the scene is illuminated by a single-color light source of a corresponding color
concatenating multiple pseudo-color images of different colors in increasing wavelength order
However, Otsuka teaches: black-and-white cameras (Paragraph [53]) … illuminated, alternately, by the single-color light sources of different colors (Figure 2) … wherein a respective pseudo-color image is captured when the scene is illuminated by a single-color light source of a corresponding color (element S100; Figure 3; Figure 4) … concatenating multiple pseudo-color images of different colors in increasing wavelength order (Paragraph [54]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the vision-guided robotic gripper of Taamazyan, which grasps an object using light passed through different filters, filter angles, and configurations of cameras, to include the monochrome cameras and the imaging and image-processing method taught by Otsuka. Such a modification would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results: a vision-guided robotic gripper that grasps an object using alternating wavelength-band imaging and processing, filters, filter angles, and configurations of monochrome cameras.
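Purely to fix ideas about the coarse-then-refined control flow recited in claim 11, here is a schematic sketch; the Robot and Vision classes are invented stand-ins, not an API from any cited reference.

```python
class Vision:
    """No-op stand-in for the illumination, projection, and pose stages."""
    def illuminate_and_project(self):
        pass  # alternate single-color lights, then project coded patterns

    def estimate_component_pose(self):
        return [0.4, 0.1, 0.2, 0.0, 0.0, 0.0]  # dummy 6-DOF pose estimate

class Robot:
    """No-op stand-in for the robotic controller."""
    def __init__(self):
        self.pose = [0.0] * 6

    def move_toward(self, target):
        self.pose = [0.9 * t for t in target]  # coarse, initial instructions

    def refine_and_grasp(self, component_pose):
        self.pose = list(component_pose)       # refined instructions, then grasp

robot, vision = Robot(), Vision()
robot.move_toward([0.4, 0.1, 0.2, 0.0, 0.0, 0.0])  # approach the vicinity
vision.illuminate_and_project()                    # set up imaging
pose = vision.estimate_component_pose()            # neural-network pose estimate
robot.refine_and_grasp(pose)                       # close the loop and grasp
```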
Regarding claim 12, Taamazyan further teaches: The method of claim 11, further comprising compensating for errors in the movement of the end-effector (Paragraph [45, 50], "In some embodiments, the grasp points may be identified via machine learning based on successes and failures of pick attempts").
Regarding claim 13, Taamazyan further teaches: The method of claim 12, wherein compensating for errors in the movement of the end-effector comprises applying a machine-learning technique to determine a controller-desired pose corresponding to a camera-instructed pose of the end-effector (Paragraph [45, 50], "In some embodiments, the grasp points may be identified via machine learning based on successes and failures of pick attempts") such that, when the robotic controller controls the movement of the end-effector based on the controller-desired pose, the end-effector achieves, as observed by the cameras, the camera-instructed pose (Paragraph [59]).
Regarding claim 16, Taamazyan further teaches: The method of claim 11, wherein determining the pose further comprises generating a segmentation mask for an image of the work scene based on output of the neural network (element 402, 422; Paragraph [45-46, 63]).
Regarding claim 17, Taamazyan further teaches: The method of claim 11, further comprising generating a three-dimensional (3D) point cloud of the component of interest (Paragraph [187], "According to these embodiments, the depth map of the object is converted to a point cloud") by overlaying the segmentation mask on the images of the scene with the projected codified light patterns (Paragraph [162-164], "At block 404, the pose estimator 100 engages in object-level correspondence of the objects identified in the segmentation masks… At block 406, the pose estimator 100 generates an output based on the object-level correspondence. The output may be, for example, a measure of disparity or an estimated depth (e.g., distance from the cameras 10, 30) of the object").
Regarding claim 18, Taamazyan further teaches: The method of claim 17, wherein the pose of the component of interest is determined based on the 3D point cloud and a geometric model of the component (Paragraph [147]).
Regarding claim 19, Taamazyan further teaches: The method of claim 16, wherein the neural network comprises a Mask Region-based Convolutional Neural Network (Mask R-CNN) (Paragraph [63, 162]).
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Taamazyan (US 2022/0405506 A1) in view of Otsuka (US 2017/0132824 A1) and further in view of Shaw (US 8125562 B2).
Regarding claim 5, Taamazyan further teaches: The robotic system of claim 1… wherein colors of the single-color light sources range between ultraviolet and infrared (Paragraph [119, 133]).
While Taamazyan and Otsuka teach the limitations as stated above, they do not expressly disclose:
the single-color light sources comprise light-emitting diodes (LEDs)
However, Shaw teaches: wherein the single-color light sources comprise light-emitting diodes (LEDs) (Figure 8).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the vision-guided robotic gripper of Taamazyan and Otsuka, which grasps an object using alternating wavelength-band imaging and processing, filters, filter angles, and configurations of monochrome cameras, to include the single-color LEDs taught by Shaw. Such a modification would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results: a vision-guided robotic gripper that grasps an object using single-color LEDs, alternating wavelength-band imaging and processing, filters, filter angles, and configurations of monochrome cameras.
Regarding claim 15, Taamazyan further teaches: The method of claim 11… wherein colors of the single-color light sources range between ultraviolet and infrared (Paragraph [119, 133]).
While Taamazyan and Otsuka teach the limitations as stated above, they do not expressly disclose:
the single-color light sources comprise light-emitting diodes (LEDs)
However, Shaw teaches: wherein the single-color light sources comprise light-emitting diodes (LEDs) (Figure 8).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Taamazyan and Otsuka for a vision-guided robotic gripper that grasps an object using alternating wavelength-band imaging and processing, filters, filter angles, and configurations of monochrome cameras, to include the single-color LEDs taught by Shaw. Such a modification would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results: a vision-guided robotic gripper that grasps an object using single-color LEDs, alternating wavelength-band imaging and processing, filters, filter angles, and configurations of monochrome cameras.
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Taamazyan (US 2022/0405506 A1) in view of Otsuka (US 2017/0132824 A1) and further in view of Song (US 2023/0298189 A1).
Regarding claim 10, while the combination of Taamazyan and Otsuka teaches the limitations of rejected base claim 1 as set forth above, including a robotic arm that grasps a component of interest using cameras, it does not expressly disclose:
the codified light patterns are encoded based on maximum min-SW gray codes
However, Song teaches: The robotic system of claim 1, wherein the codified light patterns are encoded based on maximum min-SW gray codes (Figure 4-5B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the vision-guided robotic gripper of Taamazyan and Otsuka, which grasps an object using alternating wavelength-band imaging and processing, filters, filter angles, and configurations of monochrome cameras, to include the Gray-code structured patterns taught by Song. Such a modification would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results: a vision-guided robotic gripper that grasps an object using alternating wavelength-band imaging and processing, filters, filter angles, configurations of monochrome cameras, and a light projector that projects Gray-code structured patterns onto a scene.
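For background on Gray-code structured light generally, the sketch below generates conventional reflected-Gray-code stripe patterns; maximum min-SW Gray codes, the variant actually claimed, belong to the same family but reorder the codewords to maximize the minimum stripe width, and are not reproduced here.

```python
import numpy as np

def gray_code_patterns(n_bits: int, width: int) -> np.ndarray:
    """One binary stripe pattern per bit of the reflected Gray code.
    Adjacent projector columns differ in exactly one bit, which limits
    decoding errors at stripe boundaries."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)              # reflected binary Gray code
    patterns = np.empty((n_bits, width), dtype=np.uint8)
    for b in range(n_bits):
        patterns[b] = (gray >> b) & 1      # pattern for bit b across columns
    return patterns

pats = gray_code_patterns(n_bits=8, width=256)  # 8 patterns span 256 columns
```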
Regarding claim 20, while the combination of Taamazyan and Otsuka teaches the limitations of rejected base claim 11 as set forth above, including a method for a vision-guided robotic gripper that grasps an object, it does not expressly disclose:
the codified light patterns are encoded based on maximum min-SW gray codes
However, Song teaches: The method of claim 11, wherein the codified light patterns are encoded based on maximum min-SW gray codes (Figure 4-5B).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method of Taamazyan and Otsuka for a vision-guided robotic gripper that grasps an object using alternating wavelength-band imaging and processing, filters, filter angles, and configurations of monochrome cameras, to include the Gray-code structured patterns taught by Song. Such a modification would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results: a method for a vision-guided robotic gripper that grasps an object using alternating wavelength-band imaging and processing, filters, filter angles, configurations of monochrome cameras, and a light projector that projects Gray-code structured patterns onto a scene.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALYSE TRAMANH TRAN whose telephone number is (703)756-5879. The examiner can normally be reached M-F 8:30am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran can be reached at 571-272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.T.T./Examiner, Art Unit 3656 /KHOI H TRAN/Supervisory Patent Examiner, Art Unit 3656