DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This is in response to Applicant’s communication filed on 11/12/24, wherein
Claims 1-20 are currently pending.
Claim Rejections - 35 USC § 112
Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites in the preamble “a task execution method, applicable to an intelligent robot”. However, the body of the claim is silent with respect to the intelligent robot. Therefore, it is not clear whether the feature “a holding apparatus” recited in the second step has any relationship with the intelligent robot recited in the preamble. For purposes of examination, “a holding apparatus” is interpreted as being part of the intelligent robot.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by OTA (US 2012/0072023).
As for claim 1, OTA discloses a task execution method, applicable to an intelligent robot, the method comprising: displaying acquired environment image information {see at least figures 1, 3, pars. 0004, 0022, 0024, 0033, which disclose that the environment image information is captured by camera 165 of robot 160, transmitted to the human robot interface apparatus (HRI 110, 210), and displayed on the display interface 112, 212}; and controlling a holding apparatus to execute a task corresponding to an operation of a user on the environment image information, in response to the operation of the user on the environment image information {see at least figures 3, 4, 5 and at least pars. 0033, 0037-0038, 0049, which disclose controlling the robot to execute a task (e.g., retrieve the soda can target object 154 from the counter 194 and deliver it to a table 150 located near the user) in response to the operation of the user on the environment image information (e.g., par. 0033 discloses the user may select an object on which he or she would like the robot 160 to perform a task; the user may use a touch screen or mouse to draw a box 122 over an object 124 to select a specified portion of the two-dimensional image containing the object for manipulation)}.
As for claim 2, OTA discloses wherein the method further comprises: performing instance segmentation on the environment image information to obtain at least one first instance corresponding to at least one object comprised in the environment image information {see at least figures 1, 3 and at least par. 0033 e.g. the user may use a mouse or the touch-screen to draw a box 122 over an object 124 to select a specified portion of the two dimensional image containing the object for manipulation (which is interpreted to be instance segmentation). The processor may then receive selected object data that corresponds with the selected object 124 and then attempt to automatically recognize a registered object shape pattern that is associated with the selected object 124. For example, if the selected object 124 is a soda can as illustrated in FIG. 1, and a soda can was previously registered as a registered object shape pattern, the HRI 110 may automatically recognize the selected object 124 as a soda can and retrieve the appropriate cylindrical registered object shape pattern from the object recognition support tool library}; displaying the at least one first instance {see at least figures 1, 3}; in response to an operation of the user on any first instance in the at least one first instance, determining a target instance that is operated on and a task to be executed for the target instance {see at least figures 1, 3, pars. 0033, 0049}; and the controlling the holding apparatus to execute the task corresponding to the operation comprises: controlling, based on the operation, the holding apparatus to execute the task to be executed for the target instance {see at least figures 1, 3, and pars. 0033, 0049}.
As for claim 3, OTA discloses wherein the method further comprises: for any object in the at least one object, obtaining an instruction from the user for performing instance segmentation on the object, and performing instance segmentation on the object to obtain at least one second instance corresponding to the object; and using the at least one second instance as a first instance corresponding to the object {see at least figure 1, par. 0033}.
As for claim 4, OTA discloses wherein the operation of the user on the target instance comprises any one or more of the following operations: an operation of the user to move the target instance; an operation of the user to rotate the target instance; and an operation of the user on the target instance itself {see at least figure 3 and par. 0033}.
As for claim 5, OTA discloses wherein the operation of the user on the target instance is implemented in any one or more of the following manners: clicking the target instance, and sliding the target instance {see at least figure 3, pars. 0015, 0031-0033}.
As for claim 6, OTA discloses determining, for each first instance in the at least one first instance, a three-dimensional model corresponding to the first instance {see at least pars. 0024, 0033} ; and the displaying the at least one first instance comprises: displaying at least one three-dimensional model corresponding to the at least one first instance {see at least figure 3}.
As for claim 7, OTA discloses wherein the controlling, based on the operation, the holding apparatus to execute the task to be executed for the target instance comprises: determining, from the environment image information, an object to be held corresponding to the target instance {see at least figure 3, 5 and par. 0033}; determining, based on the operation, movement parameter information of the object to be held; and controlling the holding apparatus to execute the task to be executed for the target instance based on the movement parameter information {see at least figure 5, pars. 0049-0050}.
As for claim 8, OTA discloses wherein the determining, based on the operation, the movement parameter information of the object to be held comprises: when the operation is an operation of the user to move the target instance, obtaining an initial position of the target instance; obtaining a target position to which the target instance is moved; and determining the movement parameter information of the object to be held based on the initial position and the target position {see at least figures 3, 5, pars. 0033, 0049-0050}.
As for claim 9, OTA discloses wherein the determining the movement parameter information of the object to be held based on the initial position and the target position comprises: determining a movement trajectory of the holding apparatus based on the initial position and the target position; and determining the movement parameter information of the object to be held based on the movement trajectory {see at least figure 5, pars. 0049-0050}.
As for claims 10-20, the limitations of these claims have been noted in the rejections above. They are therefore rejected for the same reasons set forth with respect to the rejected claims above. Further, OTA discloses an electronic device comprising a processor and a memory {see at least figures 1-2}.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
Lection et al. (US 2018/0043532): assigning tasks to a robot device by a processor. A target region may be selected from a displayed image of an image capturing device. One or more tasks may be defined according to a plurality of objects displayed within the target region such that the defined one or more tasks are arranged according to a task workflow.
Takata (US 2023/0028730): An artificial intelligence system includes: a storage configured to previously store a data model; a generator configured to extract the data model from the storage and generate a human object capable of reproducing a motion and a thought of a human;
Nakanishi et al. (US 2024/0269826): an information processing apparatus for robot teaching, a robot system for robot teaching, a method for generating a trajectory of a specific part of a robot, a method for controlling a robot in robot teaching, a recording medium, and a movable object.
Banerjee et al. (US 2023/0162494): a system and method for ontology-guided indoor scene understanding for cognitive robotic tasks.
Song (US 11,559,902): a controller configured to output an operation signal that enables the robot to operate when an input is generated through a touch screen.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kira Nguyen whose telephone number is (571)270-1614. The examiner can normally be reached on Monday to Friday 9:00-5:00 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran, can be reached at 571-272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KIRA NGUYEN/Primary Examiner, Art Unit 3656