DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is a non-final first Office action on the merits. Claims 1-20, as originally filed on October 30, 2023, are currently pending and have been considered below.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/30/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a method for bin picking.
Step 2A – Prong 1
Independent Claims 1 and 13-14, as a whole, recite a method of organizing human activity. The limitations from exemplary Claim 1, reciting “a method for executing autonomous bin picking, comprising: capturing one or more images of a physical environment comprising a plurality of objects placed in a bin, based on a captured first image, generating a first output by an object detection module localizing one or more objects of interest in the first image, based on a captured second image, generating a second output by a grasp detection module defining a plurality of grasping alternatives that correspond to a plurality of locations in the second image, combining at least the first and second outputs by a high-level sensor fusion (HLSF) module to compute attributes for each of the grasping alternatives, the attributes including functional relationships between the grasping alternatives and detected objects, ranking the grasping alternatives based on the computed attributes by a multi-criteria decision making (MCDM) module to select one of the grasping alternatives for execution, and operating a controllable device to selectively grasp an object from the bin by generating executable instructions based on the selected grasping alternative,” describe a method of managing interactions between people, which falls into the certain methods of organizing human activity grouping. The mere recitation of generic computer components (images, sensor, module, and controllable device in claims 1 and 14; controllable device, end effector, sensors, image, computing system, processors, memory, and module in claim 13) does not take the claim out of the methods of organizing human activity grouping. Thus, the claim recites an abstract idea.
Step 2A – Prong 2: Claims 1-20 and their underlying limitations, steps, features, and terms have been further inspected by the Examiner under the current examining guidelines and found, both individually and as a whole, not to include additional elements that are sufficient to integrate the abstract idea into a practical application. The limitations correspond to limitations referenced in MPEP 2106.05 that are not enough to integrate the abstract idea into a practical application. Limitations that are not enough include, as non-limiting, non-exclusive examples: (i) adding the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions; (ii) insignificant extra-solution activity; and/or (iii) generally linking the use of the judicial exception to a particular technological environment or field of use.
This judicial exception is not integrated into a practical application because the claim recites additional elements (images, module, and controllable device in claims 1 and 14; controllable device, end effector, sensors, image, computing system, processors, memory, and module in claim 13) that are recited at a high level of generality as generic computer elements. The generically recited computer elements amount to simply implementing the abstract idea on a computer. In combination, these additional elements do no more than generally link the use of the judicial exception to a particular technological environment or field of use. Accordingly, in combination, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above, the additional elements do no more than generally link the use of the judicial exception to a particular technological environment or field of use. Thus, even when viewed as an ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea. The claims are ineligible.
Dependent claims 2-12 and 15-20 are also directed to the same grouping of methods of organizing human activity. The additional elements of the images in claims 2-7 and 15-16; the module in claims 4-11 and 15-19; the sensors in claims 5, 7, and 16; the computing system in claims 12 and 20; the neural network in claims 4-7 and 15-16; and the RGB color image in claim 2 do no more than generally link the use of the judicial exception to a particular technological environment or field of use. Accordingly, in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4 and 6-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wang et al. (GraspFusionNet: a two-stage multi-parameter grasp detection network based on RGB-XYZ fusion in dense clutter - hereinafter Wang).
Re. claims 1 and 13-14, Wang discloses:
A method for executing autonomous bin picking, comprising:
capturing one or more images of a physical environment comprising a plurality of objects placed in a bin, [Wang; Fig. 5, see “tote”].
based on a captured first image, generating a first output by an object detection module localizing one or more objects of interest in the first image, [Wang; Sections 4.1, 4.2, 4.3: gRPN; “gRPN takes only color image as input and outputs the same size grasp proposal map consisting of graspable and nongraspable regions.” together with the 2D CNN of Section 4.3, Figure 4, and also the second paragraph of Section 7: "replace gRPN with object detection or object segmentation"; Figure 6(e), known object set].
based on a captured second image, generating a second output by a grasp detection module defining a plurality of grasping alternatives that correspond to a plurality of locations in the second image, [Wang; Sections 4.1, 4.3; "gPPN is proposed to focus on grasp parameters prediction by multimodal fusion. gPPN takes RGB-D patches as input based on grasp proposal region and outputs grasp quality, grasp angle, grasp width and grasp depth simultaneously."; the additional depth information is considered as the second image; gPPN is furthermore based on the results of gRPN; considering 3D ShuffleNetV2 processing: "To learn grasp-related geometric features from XYZ heightmaps, we repeat XYZ three times to generate three-channel stereo geometry input."].
combining at least the first and second outputs by a high-level sensor fusion (HLSF) module to compute attributes for each of the grasping alternatives, the attributes including functional relationships between the grasping alternatives and detected objects, [Wang; Section 4.3: "The color features and geometric features are concatenated together and fed into grasp prediction network."].
ranking the grasping alternatives based on the computed attributes by a multi-criteria decision making (MCDM) module to select one of the grasping alternatives for execution, and [Wang; Section 4.4].
operating a controllable device to selectively grasp an object from the bin by generating executable instructions based on the selected grasping alternative. [Wang; Section 6.5].
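Examiner's Note: for illustration only, and not as a characterization of Wang's code or of Applicant's implementation, the claimed pipeline can be summarized in the following Python sketch; every name in it (capture, object_detector, grasp_detector, hlsf, mcdm, robot) is a hypothetical placeholder for the corresponding claimed module.

    from dataclasses import dataclass

    @dataclass
    class GraspAlternative:
        location: tuple        # location in the second image
        quality: float = 0.0   # computed quality-of-grasp attribute
        object_id: int = -1    # affiliation to a detected object of interest

    def bin_picking_step(capture, object_detector, grasp_detector, hlsf, mcdm, robot):
        first_image, second_image = capture()    # e.g., RGB image and depth map
        objects = object_detector(first_image)   # first output: localized objects
        grasps = grasp_detector(second_image)    # second output: grasping alternatives
        attributed = hlsf(objects, grasps)       # attributes, incl. grasp/object relations
        selected = mcdm(attributed)              # rank alternatives, select one
        robot.execute(selected)                  # executable instructions for the device
        return selected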
Re. claim 2, Wang further discloses:
wherein the first image defines a RGB color image. [Wang; Sections 4.1 and 5.1].
Re. claim 3, Wang further discloses:
wherein the second image defines a depth map of the physical environment. [Wang; Sections 4.1 and 5.1].
Re. claim 4, Wang further discloses:
wherein the object detection module comprises a first neural network, the first neural network trained to predict, in the first image, contours or bounding boxes representing identified objects and class labels for each identified object. [Wang; Sections 4.2-4.3 and Section 7].
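Examiner's Note: the claimed first-network output — contours or bounding boxes plus a class label per identified object — has the general shape sketched below; the values and labels are hypothetical, and the structure is a generic detection output rather than Wang's 2D CNN.

    # Generic shape of an object-detection output: bounding boxes with class
    # labels and confidence scores (hypothetical values, illustrative only).
    detections = [
        {"bbox": (120, 40, 210, 150), "label": "bolt",   "score": 0.94},
        {"bbox": (300, 80, 380, 170), "label": "washer", "score": 0.88},
    ]
    objects_of_interest = [d for d in detections if d["score"] > 0.5]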
Re. claims 6 and 15, Wang further discloses:
wherein the grasp detection module comprises a second neural network, the second neural network trained to produce an output vector that includes a plurality of predicted grasp scores associated with various locations in the second image, the grasp scores indicating a quality of grasp at the respective location, each location representative of a grasping alternative. [Wang; Section 4.3].
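Examiner's Note: an output of per-location predicted grasp scores, as recited, can be pictured as follows (placeholder random scores; the map shape is an assumption for illustration and is not Wang's gPPN output).

    import numpy as np

    # Per-location grasp-quality scores over the second image; each location
    # represents one grasping alternative (placeholder values, illustrative only).
    h, w = 48, 64
    grasp_scores = np.random.rand(h, w)
    alternatives = [((r, c), grasp_scores[r, c])
                    for r in range(h) for c in range(w)]
    best_location = np.unravel_index(grasp_scores.argmax(), grasp_scores.shape)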
Re. claims 7 and 16, Wang further discloses:
comprising utilizing multiple second neural networks or multiple instances of a single second neural network that are provided with different second images captured by different sensors, to generate multiple second outputs, wherein the HLSF module combines the multiple second outputs to compute the attributes for each of the grasping alternatives. [Roha; ¶60].
Re. claims 8 and 17, Wang further discloses:
comprising: aligning the first and second outputs to a common coordinate system by the HLSF module to generate a coherent representation of the physical environment, and computing, by the HLSF module, for each location in the coherent representation, a probabilistic value for the presence of an object of interest and a quality of grasp. [Wang; Sections 4.1-4.3, “XYZ” representation; Section 7 also discloses use of object detection or object segmentation. Also, Section 1 shows a comparing example for computation of a probabilistic value for presence of an object and quality of grasp].
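Examiner's Note: one conventional way to align two module outputs to a common coordinate system is sketched below for illustration only; the 3x3 homography H and map shapes are assumptions, not Wang's XYZ heightmap representation.

    import numpy as np

    # Warp a per-location grasp-score map into the object-detection image
    # frame using an assumed homography H, yielding one coherent representation
    # in which object probability and grasp quality can be read at the same
    # location (illustrative sketch only).
    def align_to_common_frame(grasp_scores, H, out_shape):
        aligned = np.zeros(out_shape)
        for (r, c), s in np.ndenumerate(grasp_scores):
            x, y, z = H @ np.array([c, r, 1.0])
            u, v = int(round(x / z)), int(round(y / z))
            if 0 <= v < out_shape[0] and 0 <= u < out_shape[1]:
                aligned[v, u] = max(aligned[v, u], s)
        return aligned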
Re. claim 9, Wang further discloses:
wherein the attributes computed by the HLSF module comprise, for each grasping alternative, a quality of grasp and an affiliation of that grasping alternative to an object of interest. [Wang; Section 6.5].
Re. claims 10 and 18, Wang further discloses:
wherein the ranking of the grasping alternatives by the MCDM module is based on multiple criteria that are mapped to the attributes and a respective weight assigned to each criterion, the weights being determined based on a specified bin picking objective and one or more specified constraints. [Wang; Sections 4.4 and 6.2: "alpha is a hyper-parameter that balances loss components and is determined by experiment"; in Section 6.2, results are evaluated and the parameter alpha is adjusted].
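Examiner's Note: a weighted-sum ranking is one common MCDM formulation consistent with the claim language; the criteria names and weights below are hypothetical stand-ins for weights that would, per the claim, be determined by the bin picking objective and constraints.

    # Weighted-sum MCDM ranking over computed attributes
    # (hypothetical criteria and weights, illustrative only).
    weights = {"grasp_quality": 0.6, "object_priority": 0.3, "reachability": 0.1}

    def rank_and_select(alternatives):
        return max(alternatives, key=lambda a: sum(weights[k] * a[k] for k in weights))

    selected = rank_and_select([
        {"grasp_quality": 0.91, "object_priority": 0.5, "reachability": 1.0},
        {"grasp_quality": 0.84, "object_priority": 1.0, "reachability": 0.7},
    ])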
Re. claims 11 and 19, Wang further discloses:
comprising assigning an initial weight to each of the criteria of the multi-criteria decision module and subsequently adjusting the weights based on feedback from simulation or real-world execution of consecutive instances of the autonomous bin picking. [Wang; Sections 4.1-4.4, 6.2, 6.5, 7 and Figs. 3-16; Section 6.2: "alpha is a hyper-parameter that balances loss components and is determined by experiment"; in Section 6.2, results are evaluated and the parameter alpha is adjusted].
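Examiner's Note: the recited weight adjustment from execution feedback can be sketched as a simple update-and-renormalize rule; the rule below is a hypothetical illustration, whereas Wang tunes the hyper-parameter alpha by experiment.

    # Adjust criterion weights from the outcome of a picking attempt and
    # renormalize (hypothetical update rule, illustrative only).
    def update_weights(weights, criterion_values, success, lr=0.05):
        signal = 1.0 if success else -1.0
        for k in weights:
            weights[k] = max(weights[k] + lr * signal * criterion_values[k], 1e-3)
        total = sum(weights.values())
        return {k: v / total for k, v in weights.items()}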
Re. claims 12 and 20, Wang further discloses:
A non-transitory computer-readable storage medium including instructions that, when processed by a computing system, configure the computing system to perform the method. [Wang; Sections 4.1-4.4, 6.5, 7 and Figs. 3-16].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Rohaninejad et al. (WIPO Publication No. WO 2021046530 A1 - hereinafter Roha).
Re. claim 5, Wang teaches the method of Claim 4.
Wang does not teach, but Roha teaches:
comprising utilizing multiple first neural networks or multiple instances of a single first neural network that are provided with different first images captured by different sensors, to generate multiple first outputs, wherein the HLSF module combines the multiple first outputs to compute the attributes for each of the grasping alternatives. [Roha; ¶60].
It would have been obvious to one of ordinary skill in the art before the effective filing date to include the limitation(s) as taught by Roha in the system of Wang, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM EL-BATHY whose telephone number is (571)272-7545. The examiner can normally be reached Monday - Friday 9am - 7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Resha Desai can be reached at 571-270-7792. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/IBRAHIM N EL-BATHY/Primary Examiner, Art Unit 3628