Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The abstract of the disclosure is objected to because it exceeds 150 words in length. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: “CONTROL METHOD TO PERFORM AN OPERATION BASED ON OBJECT DETECTION AND INSTRUCTION INFORMATION”
Claim Objections
Claim 9 is objected to because of the following informalities:
Claim 9 recites the limitation "output control information indicating a control content for the control target to the control target as the process to control the control target," which is unclear and confusing due to the repeated references to "the control target."
Appropriate correction is required.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5-6, 9-11, and 15-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ito (US 20240367314 A1).
Regarding claim 1, Ito discloses a controller that controls a control target configured to perform a specified operation, the controller comprising: a storage that stores data to control the control target; and a computer that executes a process to control the control target (Fig. 2, Fig. 5, [0107]-[0109] a computer device (i.e. controller) including a memory storage and a CPU; [0042]-[0043] where the device controls a robot to perform an operation such as grasping an object), wherein the computer is configured to:
acquire environment information indicating an observation result of an environment in which a target object to be operated by the control target is located (Fig. 2, [0046] a sensor measures environment information such as an image of the surroundings of the robot; [0055] the captured image including an object to be operated on by the robot); and
acquire instruction information indicating an instruction from a user to the control target, the instruction including an instruction related to the target object ([0028] the target operation is given to the robot based on instruction information according to user input; Fig. 2, [0049] the instruction data includes operations of the robot interacting with a target object (e.g. "pick up red object")), the computer includes:
a first inference unit that generates object information indicating the target object based on the acquired environment information by using a first inference model, the first inference model being configured to generate information indicating an arbitrary object based on information indicating an observation result of an environment in which the arbitrary object is located (Fig. 3, [0069] the object recognition unit recognizes a shape and a position of the object that is the target of the target operation of the robot from the captured image; [0067] the object recognition unit may be implemented using a machine learning method), and
a second inference unit that generates operation information specifying an operation of the control target which includes an operation with respect to the target object based on the acquired instruction information and the object information by using a second inference model, the second inference model being configured to generate information specifying an operation of the control target which includes an operation with respect to an object based on information related to the object and information indicating an instruction related to the control target which includes an instruction related to the object (Fig. 3, [0099]-[0100] the instruction learning unit can generate an instruction corresponding to the operation of the robot, a prediction command (i.e. operation information) is output based on the output of the instruction learning unit; [0097] wherein the instruction learning unit includes a network defined by a model), and
the computer is configured to execute a process to control the control target based on the operation information (Fig. 8, [0131], [0133] the robot is operated to perform the target operation based on the prediction command).
Regarding claim 2, Ito discloses the controller according to claim 1 as applied above. Ito further discloses the second inference model is configured to generate information specifying an operation of the control target which includes an operation with respect to the arbitrary object based on information indicating the arbitrary object as the information related to the object ([0100]-[0101] the instruction learning unit outputs information related to the target operation such that the input instruction information and the target operation of the robot can be linked (i.e. to perform the target operation on a real object in the image based on user input)).
Regarding claim 3, Ito discloses the controller according to claim 2 as applied above. Ito further discloses the operation information includes a natural language sentence specifying an operation of the control target ([0100] the instruction learning unit 412 can output the language instruction of “pick up red object” as the prediction instruction information 4121).
Regarding claim 5, Ito discloses the controller according to claim 1 as applied above. Ito further discloses the object information includes attribute information indicating an attribute of the target object ([0069] the object recognition unit recognizes the position and the shape of the object).
Regarding claim 6, Ito discloses the controller according to claim 5 as applied above. Ito further discloses the attribute information includes at least one of a shape, a weight, a friction coefficient, a center of gravity, an inertia moment, and a rigidity of the target object ([0069] the object recognition unit recognizes the position and the shape of the object).
Regarding claim 9, Ito discloses the controller according to claim 1 as applied above. Ito further discloses output control information indicating a control content for the control target to the control target as the process to control the control target (Fig. 3, [0099]-[0100] the instruction learning unit can generate an instruction corresponding to the operation of the robot, a prediction command (i.e. operation information) is output based on the output of the instruction learning unit; Fig. 8, [0131], [0133] the robot is operated to perform the target operation based on the prediction command);
determine whether or not an operation of the control target according to the control information is correct (Fig. 8, [0134] the inference unit determines based on sensor data whether the target operation of the robot indicated by the operation command value is completed); and
generate new operation information specifying an operation of the control target when the operation of the control target is not correct ([0135] when the target operation is not completed, a new operation (e.g. stretch the arm more) is performed in order to accomplish the goal of operating on the control target).
Regarding claim 10, Ito discloses the controller according to claim 9 as applied above. Ito further discloses the storage is configured to accumulatively store the object information, the operation information, and an operation result of the control target, and the computer is configured to generate the new operation information based on information stored in the storage ([0043] information processing device includes a memory to store data necessary for the CPU to perform processing using a program; [0052] the information processing device may be the learning device as in Fig. 2; [0135] when the target operation is not completed, a new operation (e.g. stretch the arm more) is generated).
Regarding claim 11, Ito discloses the controller according to claim 9 as applied above. Ito further discloses the computer further includes a fourth inference unit that determines whether or not an operation of the control target is correct based on the instruction information, the control information, and an operation result of the control target according to the control information by using a fourth inference model, the fourth inference model being configured to determine whether or not the operation is correct based on information indicating a control instruction to the control target, information indicating a control content for the control target, and information indicating an operation result of the control target (Fig. 2, [0134]-[0136] the inference unit 44 determines based on the data whether the target operation of the robot indicated by the command is completed; [0059] the inference unit uses a weighted model to control the robot until a target operation is completed).
Regarding claim 15, Ito discloses the controller according to claim 1 as applied above. Ito further discloses the storage is configured to store at least one of the first inference model and the second inference model ([0043] information processing device includes a memory to store data necessary for the CPU to perform processing using a program; [0052] the information processing device may be the learning device as in Fig. 2 (i.e. including the models)).
Regarding claim 16, Ito discloses everything claimed as applied above (see rejection of claim 1) in addition to a control method of controlling a control target configured to perform a specified operation, the control method comprising: as a process to be executed by a computer (Fig. 2, Fig. 5, [0107]-[0109] a computer device (i.e. controller) including a memory storage and a CPU; [0042]-[0043] where the device controls a robot to perform an operation such as grasping an object).
Regarding claim 17, Ito discloses everything claimed as applied above (see rejection of claim 1) in addition to a control system that controls a control target configured to perform a specified operation, the control system comprising: a controller that controls the control target; and a server communicably connected to the controller, wherein the server includes a computer that executes a process to cause the controller to control the control target, the computer is configured to (Fig. 2, Fig. 5, [0107]-[0109] a computer device (i.e. controller) including a memory storage and a CPU which controls operations of the robot via the system bus; [0042]-[0043] where the device controls a robot to perform an operation such as grasping an object).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 7-8 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Ito (US 20240367314 A1) in view of Liu (CN 111145257 B).
Regarding claim 7, Ito discloses the controller according to claim 1 as applied above. Ito further discloses wherein the environment information includes an environment image obtained by photographing an environment in which the target object is located (Fig. 2, [0046] a sensor measures environment information such as an image of the surroundings of the robot; [0055] the captured image including an object to be operated on by the robot),
the computer is configured to execute a process to control the control target based on the operation information and the position information (Fig. 8, [0131], [0133] the robot is operated to perform the target operation based on the prediction command).
Ito fails to disclose the computer further includes a third inference unit that acquires position information of the target object in the environment image based on the environment image by using a third inference model, the third inference model being configured to segment an object included in an image based on the image including the object.
Liu, in a related system from the same field of endeavor of controlling robots to perform object-related tasks ([n0001], [n0002]), discloses a third inference unit that acquires position information of the target object in the environment image based on the environment image by using a third inference model, the third inference model being configured to segment an object included in an image based on the image including the object ([n0034] the image segmentation unit is used to segment the image corresponding to the set of items to be grabbed into regions to obtain the image corresponding to each item; [n0083] segmenting from an image taken by a camera of a field of view around the robot; [n0095]-[n0096] obtain location information of target object based on the image region segmentation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Liu with Ito and acquire position information of the target object in the environment image based on the environment image by using a third inference model, the third inference model being configured to segment an object included in an image based on the image including the object, as disclosed by Liu, as part of a controller that controls a control target configured to perform a specified operation, as disclosed by Ito, for the purpose of improving accuracy of item grasping and completing intelligent and personalized grasping functions (See Liu: [n0103], [n0053], [n0072]).
Regarding claim 8, Ito in view of Liu discloses the controller according to claim 7 as applied above. Ito fails to disclose the third inference model is configured to segment the arbitrary object included in an image based on the image including the arbitrary object.
Liu, in a related system from the same field of endeavor of controlling robots to perform object-related tasks ([n0001], [n0002]), discloses the third inference model is configured to segment the arbitrary object included in an image based on the image including the arbitrary object ([n0034] the image segmentation unit is used to segment the image corresponding to the set of items to be grabbed into regions to obtain the image corresponding to each item; [n0083] segmenting from an image taken by a camera of a field of view around the robot).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Liu with Ito and segment the arbitrary object included in an image based on the image including the arbitrary object, as disclosed by Liu, as part of a controller that controls a control target configured to perform a specified operation, as disclosed by Ito, for the purpose of improving accuracy of item grasping and completing intelligent and personalized grasping functions (See Liu: [n0103], [n0053], [n0072]).
Regarding claim 14, Ito in view of Liu discloses the controller according to claim 7 as applied above. Ito fails to disclose the computer is configured to generate, by using the first inference model, the object information based on a segmentation result of the third inference model.
Liu, in a related system from the same field of endeavor of controlling robots to perform object-related tasks ([n0001], [n0002]), discloses the computer is configured to generate, by using the first inference model, the object information based on a segmentation result of the third inference model ([n0083]-[n0084], [n0092] Feature extraction is performed on the image corresponding to each item to obtain the feature information of each item).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Liu with Ito and generate object information based on a segmentation result, as disclosed by Liu, as part of a controller that controls a control target configured to perform a specified operation, as disclosed by Ito, for the purpose of improving accuracy of item grasping and completing intelligent and personalized grasping functions (See Liu: [n0103], [n0053], [n0072]).
Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Ito (US 20240367314 A1) in view of Itou (US 20240208047 A1).
Regarding claim 12, Ito discloses the controller according to claim 11 as applied above. Ito fails to disclose wherein the fourth inference model is configured to generate a natural language sentence indicating a determination result regarding whether or not the operation of the control target is correct.
Itou, in a related system from the same field of endeavor of object recognition relating to tasks to be performed by a robot system (Abstract), discloses generating a natural language sentence indicating a determination result regarding whether or not the operation of the control target is correct (Fig. 10, [0097] the display control unit 15 displays text information 78A notifying the operator of successful acceptance of the correction, the establishment of the operation plan, and the starting of the robot control).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Itou with Ito and generate a natural language sentence indicating a determination result regarding whether or not the operation of the control target is correct, as disclosed by Itou, as part of a controller that controls a control target configured to perform a specified operation, as disclosed by Ito, for the purpose of acquiring an accurate recognition result of an object relating to a task and to establish an accurate operation plan of a robot (See Itou: [0086], [0005]).
Regarding claim 13, Ito discloses the controller according to claim 9 as applied above. Ito fails to disclose to notify the user that the operation of the control target is not correct when the operation of the control target is not correct; and generate the new operation information according to an instruction from the user.
Itou, in a related system from the same field of endeavor of object recognition relating to tasks to be performed by a robot system (Abstract), discloses to notify the user that the operation of the control target is not correct when the operation of the control target is not correct; and generate the new operation information according to an instruction from the user (Fig. 10, [0162]-[0163] when a correction to the robot operation is necessary, receive correction from an operator using a user input device (i.e. notify the operator such that they will input correction information) and implement the correction according to the input).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Itou with Ito and notify the user that the operation of the control target is not correct when the operation of the control target is not correct; and generate the new operation information according to an instruction from the user, as disclosed by Itou, as part of a controller that controls a control target configured to perform a specified operation, as disclosed by Ito, for the purpose of acquiring an accurate recognition result of an object relating to a task and to establish an accurate operation plan of a robot (See Itou: [0086], [0005]).
Allowable Subject Matter
Claim 4 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 4, Ito discloses the controller according to claim 1 as applied above.
However, Ito fails to disclose wherein the object information includes a natural language sentence indicating the target object.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hegde (US 20230321694 A1) discloses controlling a robot to perform an operation such as grasping an object, detecting objects based on imaging, and notifying a human operator of a need for intervention.
Li (Li Z, Mu Y, Sun Z, Song S, Su J, Zhang J. Intention Understanding in Human-Robot Interaction Based on Visual-NLP Semantics. Front Neurorobot. 2021 Feb 2;14:610139.) discloses a robot understanding natural language user input instruction information for the purposes of object identification and performing an operation including grasping an object.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLINE DEPALMA whose telephone number is (571) 270-0769. The examiner can normally be reached Mon-Thurs 9:00 am-4:00 pm Eastern Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer can be reached at 571-272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAROLINE E. DEPALMA/Examiner, Art Unit 2675
/SJ Park/Primary Examiner, Art Unit 2675