DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,931,894 B1 (hereinafter referred to as “the ‘894 patent”). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the ‘894 patent anticipate the claims of the instant application.
Regarding claim 1, the ‘894 patent recites a method of operating a robot system including a robot body, the method comprising: capturing, by at least one sensor of the robot system, sensor data representing information about an environment of the robot body (column 25, lines 33-35); generating, by at least one processor of the robot system, a natural language (NL) statement comprising (column 25, lines 36 and 41-43): a NL description of at least one action performable by the robot body (column 25, lines 42-43); and a NL description of a work objective (column 25, lines 41-42); providing a query to a robot control module, the robot control module including a large language model (LLM) module, wherein the query includes at least a portion of the sensor data representing information about an environment of the robot body, the NL statement, and a request for a selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective (column 25, lines 33-35, 39-44, 47-50); receiving the selection of at least one action from the robot control module (column 25, lines 45-46 and 49-50); and executing the at least one action by the robot system (column 25, line 47).
Regarding claim 2, the ‘894 patent recites the method of claim 1, further comprising: generating, by the at least one processor, at least one robot-language action based on the selection of at least one action, the robot-language action comprising robot control instructions which when executed by the at least one processor cause the robot system to perform the at least one action (column 25, lines 48-57).
Regarding claim 3, the ‘894 patent recites the method of claim 1, further comprising: generating, by at least one processor of the robot system, a NL description of at least one aspect of the environment based on the sensor data, wherein the at least a portion of the sensor data representing information about an environment of the robot body included in the query includes the NL description of at least one aspect of the environment based on the sensor data (column 25, lines 36-41).
Regarding claim 4, the ‘894 patent recites the method of claim 1 wherein capturing, by at least one sensor of the robot system, sensor data representing information about an environment of the robot body includes capturing, by at least one image sensor of the robot system, image data of the environment of the robot body (column 25, lines 33-35).
Regarding claim 5, the ‘894 patent recites the method of claim 1 wherein the request for the selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective includes a request for a task plan that will complete the work objective (column 25, line 44).
Regarding claim 6, the ‘894 patent recites the method of claim 1 wherein receiving the selection of at least one action from the robot control module includes receiving the selection of at least one action expressed in NL from the robot control module (column 25, lines 49-50 and 53).
Regarding claim 7, the ‘894 patent recites the method of claim 6, further comprising converting, by the at least one processor, the selection of at least one action expressed in NL to at least one robot-language action, each robot-language action comprising robot control instructions which when executed by the at least one processor cause the robot system to perform a respective action (column 25, lines 58-63).
Regarding claim 8, the ‘894 patent recites the method of claim 7, wherein converting, by the at least one processor, the selection of at least one action expressed in NL to at least one robot-language action comprises executing a robot-language conversion module which converts the at least one action performable by the robot system as expressed in NL to at least one reusable work primitive in an instruction set executable by the robot system (column 25, lines 58-63).
Regarding claim 9, the ‘894 patent recites the method of claim 1, further comprising, prior to executing the at least one action, comparing the at least one action to a set of rules specified in at least part of a reasoning engine to validate that the at least one action will at least partially complete the work objective (column 26, lines 26-29).
Regarding claim 10, the ‘894 patent recites a robot control module comprising at least one non-transitory processor-readable storage medium storing a large language model (LLM) module and processor-executable instructions or data that, when executed by at least one processor of a robot system (column 26, lines 30-34), cause the robot system to: capture sensor data representing information about an environment of a robot body (column 26, lines 35-37); generate a natural language (NL) statement (column 26, lines 38, 43-46 and 50-53) comprising: a NL description of at least one action performable by the robot body (column 26, lines 44-46 and 50-53); and a NL description of a work objective (column 26, lines 43-44); generate a query, wherein the query includes at least a portion of the sensor data representing information about an environment of the robot body, the NL statement, and a request for a selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective (column 26, lines 35-37, 41-46 and 49-53); determine the selection of at least one action from the robot control module (column 26, lines 47-48 and 51-53); and execute the at least one action by the robot system (column 26, line 49).
Regarding claim 11, the ‘894 patent recites the robot control module of claim 10 wherein the processor-executable instructions or data, when executed by at least one processor of the robot system, further cause the robot system to: generate at least one robot-language action based on the selection of at least one action, the robot-language action comprising robot control instructions which when executed by the at least one processor cause the robot system to perform the at least one action (column 26, lines 50-61).
Regarding claim 12, the ‘894 patent recites the robot control module of claim 10 wherein the processor-executable instructions or data, when executed by at least one processor of the robot system, further cause the robot system to: generate a NL description of at least one aspect of the environment based on the sensor data, wherein the at least a portion of the sensor data representing information about an environment of the robot body included in the query includes the NL description of at least one aspect of the environment based on the sensor data (column 26, lines 38-46).
Regarding claim 13, the ‘894 patent recites the robot control module of claim 10 wherein the processor-executable instructions or data that, when executed by at least one processor of the robot system, cause the robot system to capture sensor data representing information about an environment of the robot body, cause at least one image sensor of the robot system to capture image data of the environment of the robot body (column 26, lines 35-37).
Regarding claim 14, the ‘894 patent recites the robot control module of claim 10 wherein the request for the selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective includes a request for a task plan that will complete the work objective (column 26, line 46).
Regarding claim 15, the ‘894 patent recites the robot control module of claim 10 wherein the processor-executable instructions or data, when executed by at least one processor of the robot system, further cause the robot system to compare the at least one action to a set of rules specified in at least part of a reasoning engine to validate that the at least one action will at least partially complete the work objective (column 27, lines 42-46).
Regarding claim 16, the ‘894 patent recites a robot system comprising: a robot body (column 27, line 48); at least one sensor (column 27, line 49); a robot controller including at least one processor and a robot control module comprising at least one non-transitory processor-readable storage medium, the at least one non-transitory processor-readable storage medium storing processor-executable instructions which when executed by the at least one processor cause the robot system to (column 27, line 50 through column 28, line 3): capture sensor data representing information about an environment of a robot body (column 28, lines 4-6); generate a natural language (NL) statement (column 28, lines 7-9 and 12-14) comprising: a NL description of at least one action performable by the robot body (column 28, lines 13-14); and a NL description of a work objective (column 28, lines 12-13); generate a query, wherein the query includes at least a portion of the sensor data representing information about an environment of the robot body, the NL statement, and a request for a selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective (column 28, lines 4-6, 10-15 and 18); determine the selection of at least one action from the robot control module (column 28, lines 16-17 and 42-43); and execute the at least one action by the robot system (column 28, line 18).
Regarding claim 17, the ‘894 patent recites the robot system of claim 16 wherein the processor-executable instructions or data, when executed by at least one processor of the robot system, further cause the robot system to: generate at least one robot-language action based on the selection of at least one action, the robot-language action comprising robot control instructions which when executed by the at least one processor cause the robot system to perform the at least one action (column 27, lines 41-51).
Regarding claim 18, the ‘894 patent recites the robot system of claim 16 wherein the processor-executable instructions or data, when executed by at least one processor of the robot system, further cause the robot system to: generate a NL description of at least one aspect of the environment based on the sensor data, wherein the at least a portion of the sensor data representing information about an environment of the robot body included in the query includes the NL description of at least one aspect of the environment based on the sensor data (column 27, lines 4-6 and 10-15).
Regarding claim 19, the ‘894 patent recites the robot system of claim 16 wherein the processor-executable instructions or data that, when executed by at least one processor of the robot system, cause the robot system to capture sensor data representing information about an environment of the robot body, cause at least one image sensor of the robot system to capture image data of the environment of the robot body (column 27, lines 4-6).
Regarding claim 20, the ‘894 patent recites the robot system of claim 16 wherein the request for the selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective includes a request for a task plan that will complete the work objective (column 28, line 15).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 9 and 15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 9, the limitation referring to “comparing the at least one action” on line 2 of claim 9 renders the claim indefinite. That is, the claim is indefinite because it is unclear whether the words “the at least one action” refer to the “at least one action performable by the robot body” recited on line 6 of claim 1 or whether the words refer to the “at least one action” recited on line 13 of claim 1. Accordingly, the claim is indefinite because the metes and bounds of the claim are unclear.
Regarding claim 15, the limitation reciting “compare the at least one action” on line 3 of claim 15 renders the claim indefinite. That is, the claim is indefinite because it is unclear whether the words “the at least one action” refer to the “at least one action performable by the robot body” recited on line 7 of claim 10 or whether the words refer to the “at least one action” recited on line 13 of claim 10. Accordingly, the claim is indefinite because the metes and bounds of the claim are unclear.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hausman et al. (US 2023/0311335 A1, hereinafter referred to as “Hausman”).
Regarding claim 1, Hausman discloses a method of operating a robot system (Figs. 1 and 5, elements 110 and/or 520) including a robot body (Figs. 1 and 5, elements 113-115 and/or 540; paragraphs 0051-0053, 0083, 0094, 0101-0103), the method comprising: capturing, by at least one sensor (via Figs. 1 and 5, elements 111 and 542) of the robot system, sensor data representing information about an environment (Fig. 1, element 180) of the robot body (paragraphs 0053-0055, 0058, 0101-0102); generating, by at least one processor of the robot system (paragraph 0103), a natural language (NL) statement comprising: a NL description (Fig. 2, element 201B, 204A, 204B) of at least one action (“find the table” and/or “go to the table”) performable by the robot body (paragraphs 0061, 0063, 0074-0075); and a NL description (Fig. 2, element 105) of a work objective (paragraph 0051); providing (Fig. 3, step 354 via Fig. 2, element 130) a query (Fig. 2, element 205A, 205B) to a robot control module (Fig. 5, element 560), the robot control module including a large language model (LLM) module (Fig. 2, element 130 and/or 150), wherein the query includes at least a portion (Fig. 2, element 202A, 202B) of the sensor data representing information about an environment (scene descriptions) of the robot body, the NL statement, and a request (Fig. 2, element 206A, 206B via Fig. 2, element 150) for a selection (Fig. 3, step 360 via Fig. 2, element 136, selection engine 130/136) of at least one action (robotic skill A and/or robotic skill F) performable by the robot body that, when performed by the robot body, will at least partially complete the work objective (paragraphs 0026, 0059-0066, 0071-0072, 0074-0077, 0081-0082, 0085, 0091, 0101 and 0103-0104); receiving (Fig. 2, element 213A, 213B) the selection of at least one action from the robot control module (paragraphs 0071 and 0082); and executing (Fig. 3, step 366 via Fig. 2, element 136; “implementation engine 136”) the at least one action by the robot system (paragraphs 0059, 0071-0072, 0081 and 0093).
Regarding claim 2, Hausman discloses the method of claim 1, further comprising: generating, by the at least one processor, at least one robot-language action based on the selection of at least one action, the robot-language action comprising robot control instructions which when executed by the at least one processor cause the robot system to perform the at least one action (paragraphs 0102-0104).
Regarding claim 3, Hausman discloses the method of claim 1, further comprising: generating, by at least one processor of the robot system, a NL description (Fig. 2, element 202A, 202B; “pear” or “keys” or “human” or “table” or “sink” or “countertops”) of at least one aspect of the environment (Fig. 1, elements 101, 184A, 184B, 191, 192, 193) based on the sensor data, wherein the at least a portion of the sensor data representing information about an environment of the robot body included in the query includes the NL description of at least one aspect of the environment based on the sensor data (paragraphs 0018, 0051, 0057-0058, 0061-0062, 0075).
Regarding claim 4, Hausman discloses the method of claim 1 wherein capturing, by at least one sensor of the robot system, sensor data representing information about an environment (Fig. 1, element 180) of the robot body includes capturing, by at least one image sensor of the robot system, image data of the environment of the robot body (paragraphs 0018, 0054-0055, 0058, 0062).
Regarding claim 5, Hausman discloses the method of claim 1 wherein the request for the selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective includes a request (Fig. 2, element 206A, 206B) for a task plan (Fig. 2, element 213A, 213B) that will complete the work objective (paragraphs 0071 and 0081). It is noted that request 206 is repeated until work objective 105 has been completed.
Regarding claim 6, Hausman discloses the method of claim 1 wherein receiving the selection of at least one action from the robot control module includes receiving the selection of at least one action expressed in NL from the robot control module (Fig. 2, element 213A and 213B; “go to the table” and “pick up a pear”; paragraphs 0071, 0081).
Regarding claim 7, Hausman discloses the method of claim 6, further comprising converting, by the at least one processor, the selection of at least one action expressed in NL to at least one robot-language action, each robot-language action comprising robot control instructions which when executed by the at least one processor cause the robot system to perform a respective action (paragraphs 0102-0104).
Regarding claim 8, Hausman discloses the method of claim 7, wherein converting, by the at least one processor, the selection of at least one action expressed in NL to at least one robot-language action comprises executing a robot-language conversion module (Fig. 2, element 132, 136) which converts the at least one action performable by the robot system as expressed in NL (i.e., Fig. 2A, element 204A, “find the table”; Fig. 2B, element 201B, “go to the table”; Fig. 2B, element 204B, “go to the table”) to at least one reusable work primitive (“skill” see Fig. 2, element 213A, 213B; robotic skill A, robotic skill B) in an instruction set executable (Fig. 3, step 366) by the robot system (paragraphs 0059, 0063, 0065, 0070-0077, 0081-0082, 0093).
Regarding claim 9, Hausman discloses the method of claim 1, further comprising, prior to executing the at least one action, comparing the at least one action (Fig. 4, step 454) to a set of rules (Fig. 4, steps 456-464) specified in at least part of a reasoning engine (Fig. 4, element 400) to validate that the at least one action will at least partially complete the work objective (paragraphs 0026, 0094-0100).
Regarding claim 10, Hausman discloses a robot control module (Fig. 5, element 560) comprising at least one non-transitory processor-readable storage medium storing a large language model (LLM) module (Fig. 2, element 130, 150) and processor-executable instructions or data that, when executed by at least one processor of a robot system (Figs. 1 and 5, elements 110 and/or 520; paragraphs 0051, 0056, 0059-0064, 0101-0104, 0126), cause the robot system to: capture sensor data (via Figs. 1 and 5, elements 111 and 542) representing information about an environment (Fig. 1, element 180) of a robot body (Figs. 1 and 5, element 113-115, 540; paragraphs 0053-0055, 0058, 0101-0102); generate a natural language (NL) statement comprising: a NL description (Fig. 2, element 201B, 204A, 204B) of at least one action (“find the table” and/or “go to the table”) performable by the robot body (paragraphs 0061, 0063, 0074-0075); and a NL description (Fig. 2, element 105) of a work objective (paragraph 0051); generate (Fig. 3, step 354 via Fig. 2, element 130) a query (Fig. 2, element 205A, 205B), wherein the query includes at least a portion of the sensor data (Fig. 2, element 202A, 202B) representing information about an environment of the robot body, the NL statement, and a request (Fig. 2, element 206A, 206B via Fig. 2, element 150) for a selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective (paragraphs 0026, 0060-0066, 0074-0077 and 0085); determine (Fig. 3, step 360 via Fig. 2, element 136; selection engine 130/136) the selection of at least one action (Fig. 2, element 213A, 213B; robotic skill A, robotic skill F) from the robot control module (paragraphs 0059, 0071-0072, 0081-0082 and 0091); and execute (Fig. 3, step 366 via Fig. 2, element 136; “implementation engine 136”) the at least one action by the robot system (paragraphs 0059, 0071-0072, 0081 and 0093).
Regarding claim 11, Hausman discloses the robot control module of claim 10 wherein the processor-executable instructions or data, when executed by at least one processor of the robot system, further cause the robot system to: generate at least one robot-language action based on the selection of at least one action, the robot-language action comprising robot control instructions which when executed by the at least one processor cause the robot system to perform the at least one action (paragraphs 0102-0104).
Regarding claim 12, Hausman discloses the robot control module of claim 10 wherein the processor-executable instructions or data, when executed by at least one processor of the robot system, further cause the robot system to: generate a NL description (Fig. 2, element 202A, 202B; “pear” or “keys” or “human” or “table” or “sink” or “countertops”) of at least one aspect of the environment (Fig. 1, elements 101, 184A, 184B, 191, 192, 193) based on the sensor data, wherein the at least a portion of the sensor data representing information about an environment of the robot body included in the query includes the NL description of at least one aspect of the environment based on the sensor data (paragraphs 0018, 0051, 0057-0058, 0061-0062, 0075).
Regarding claim 13, Hausman discloses the robot control module of claim 10 wherein the processor-executable instructions or data that, when executed by at least one processor of the robot system, cause the robot system to capture sensor data representing information about an environment (Fig. 1, element 180) of the robot body, cause at least one image sensor of the robot system to capture image data of the environment of the robot body (paragraphs 0018, 0054-0055, 0058 and 0062).
Regarding claim 14, Hausman discloses the robot control module of claim 10 wherein the request for the selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective includes a request (Fig. 2, element 206A, 206B) for a task plan (Fig. 2, element 213A, 213B) that will complete the work objective (paragraphs 0071 and 0081). It is noted that request 206 is repeated until work objective 105 has been completed.
Regarding claim 15, Hausman discloses the robot control module of claim 10 wherein the processor-executable instructions or data, when executed by at least one processor of the robot system, further cause the robot system to compare the at least one action (Fig. 4, step 454) to a set of rules (Fig. 4, steps 456-464) specified in at least part of a reasoning engine (Fig. 4, element 400) to validate that the at least one action will at least partially complete the work objective (paragraphs 0026, 0094-0100).
Regarding claim 16, Hausman discloses a robot system comprising: a robot body (Figs. 1 and 5, elements 110, 113-115, 520 and/or 540; paragraphs 0051-0053, 0101-0104); at least one sensor (Figs. 1 and 5, elements 111 and/or 542; paragraphs 0054-0055, 0058 and 0101); a robot controller (Fig. 5, element 560) including at least one processor and a robot control module comprising at least one non-transitory processor-readable storage medium, the at least one non-transitory processor-readable storage medium storing processor-executable instructions which when executed by the at least one processor (paragraphs 0101, 0103-0104 and 0126) cause the robot system to: capture sensor data representing information about an environment (Fig. 1, element 180) of a robot body (Figs. 1 and 5, element 113-115, 540; paragraphs 0053-0055, 0058, 0101-0102); generate a natural language (NL) statement comprising: a NL description (Fig. 2, element 201B, 204A, 204B) of at least one action (“find the table” and/or “go to the table”) performable by the robot body (paragraphs 0061, 0063, 0074-0075); and a NL description (Fig. 2, element 105) of a work objective (paragraph 0051); generate (Fig. 3, step 354 via Fig. 2, element 130) a query (Fig. 2, element 205A, 205B), wherein the query includes at least a portion of the sensor data (Fig. 2, element 202A, 202B) representing information about an environment of the robot body, the NL statement, and a request (Fig. 2, element 206A, 206B via Fig. 2, element 150) for a selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective (paragraphs 0026, 0060-0066, 0074-0077 and 0085); determine (Fig. 3, step 360 via Fig. 2, element 136; selection engine 130/136) the selection of at least one action (Fig. 2, element 213A, 213B; robotic skill A, robotic skill F) from the robot control module (paragraphs 0059, 0071-0072, 0081-0082 and 0091); and execute (Fig. 3, step 366 via Fig. 2, element 136; “implementation engine 136”) the at least one action by the robot system (paragraphs 0059, 0071-0072, 0081 and 0093).
Regarding claim 17, Hausman discloses the robot system of claim 16 wherein the processor-executable instructions or data, when executed by at least one processor of the robot system, further cause the robot system to: generate at least one robot-language action based on the selection of at least one action, the robot-language action comprising robot control instructions which when executed by the at least one processor cause the robot system to perform the at least one action (paragraphs 0102-0104).
Regarding claim 18, Hausman discloses the robot system of claim 16 wherein the processor-executable instructions or data, when executed by at least one processor of the robot system, further cause the robot system to: generate a NL description (Fig. 2, element 202A, 202B; “pear” or “keys” or “human” or “table” or “sink” or “countertops”) of at least one aspect of the environment (Fig. 1, elements 101, 184A, 184B, 191, 192, 193) based on the sensor data, wherein the at least a portion of the sensor data representing information about an environment of the robot body included in the query includes the NL description of at least one aspect of the environment based on the sensor data (paragraphs 0018, 0051, 0057-0058, 0061-0062, 0075).
Regarding claim 19, Hausman discloses the robot system of claim 16 wherein the processor-executable instructions or data that, when executed by at least one processor of the robot system, cause the robot system to capture sensor data representing information about an environment (Fig. 1, element 180) of the robot body, cause at least one image sensor of the robot system to capture image data of the environment of the robot body (paragraphs 0018, 0054-0055, 0058 and 0062).
Regarding claim 20, Hausman discloses the robot system of claim 16 wherein the request for the selection of at least one action performable by the robot body that, when performed by the robot body, will at least partially complete the work objective includes a request (Fig. 2, element 206A, 206B) for a task plan (Fig. 2, element 213A, 213B) that will complete the work objective (paragraphs 0071 and 0081). It is noted that request 206 is repeated until work objective 105 has been completed.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DALE MOYER whose telephone number is (571)270-7821. The examiner can normally be reached Monday-Friday 8am-5pm PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi H Tran can be reached at 571-272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Dale Moyer/Primary Examiner, Art Unit 3656