DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Pursuant to communications filed on 06 January 2025, this is a First Action Non-Final Rejection on the Merits. Claims 1-18 are currently pending in the instant application.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 06 January 2025 and 03 February 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the Examiner.
Claim Objections
Claim 10 is objected to because of the following informalities: in claim 10, line 14, Applicant provides the limitation, “generating, at a second time, second vision using the second vision component”; this appears to contain a typographical error and should read, “generating, at a second time, second vision data using the second vision component”. Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-18 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3, 5-9 and 11 of U.S. Patent No. 12,190,221 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because they are coextensive in scope with the issued claims and would be fully encompassed and/or anticipated by the issued U.S. Patent.
Specifically:
Regarding claim 1, Applicant provides similar limitations as in claim 1 of the issued U.S. Patent, wherein both of the respective claims include the following (similar limitations provided in bold):
A method implemented by one or more processors of a robot, the method comprising:
generating, at a first time, first vision data using one or more first vision components that are connected to the robot,
wherein the first vision data characterizes a portion of an area that the robot is approaching;
determining, based on the first vision data, that the portion of the area includes a surface that is traversable by the robot;
causing the robot to operate according to the determination that the portion of the area includes the surface that is traversable by the robot;
generating, at a second time, second vision data that characterizes the portion of the area,
wherein the second time is subsequent to the first time and is during the robot operating according to the determination that the portion of the area includes the surface that is traversable by the robot, and
wherein the second vision data is generated using one or more second vision components that are:
separate from the one or more first vision components, and also connected to the robot;
determining, based on the second vision data, that the portion of the area is not traversable by the robot; and
causing the robot to adjust its operation to operate according to the determination that the portion of the area is not traversable by the robot.
Although the conflicting claims are not identical, they are not patentably distinct from each other because removing inherent and/or unnecessary limitation(s)/step(s) or adding an element and its function would be within the level of one of ordinary skill in the art. It is well settled that the adding or deleting of an element and its function(s) in the claim of the present application is an obvious expedient if the remaining elements perform the same function as before. In re Karlson, 136 USPQ 184 (CCPA 1963). Also note Ex parte Rainu, 168 USPQ 375 (Bd. App. 1969). Omission of a referenced element or step whose function is not needed would be obvious to one of ordinary skill in the art. Examiner further notes that, although the claims are not identical (being slightly broader), they are commensurate in scope with the claim limitations provided in the issued U.S. Patent, which likewise would anticipate the currently presented claim limitations.
Regarding claims 2-9, Applicant provides similar limitations as provided throughout claims 1-3, 5 & 6 of the issued U.S. Patent. Although the conflicting claims are not identical, they are not patentably distinct from each other because removing inherent and/or unnecessary limitation(s)/step(s) or adding an element and its function would be within the level of one of ordinary skill in the art. It is well settled that the adding or deleting of an element and its function(s) as in the claims of the present application is an obvious expedient if the remaining elements perform the same function as before. In re Karlson, 136 USPQ 184 (CCPA 1963). Also note Ex parte Rainu, 168 USPQ 375 (Bd. App. 1969). Omission of a referenced element or step whose function is not needed would be obvious to one of ordinary skill in the art. Examiner further notes that, although the claims are not identical (being slightly broader), they are commensurate in scope with the claim limitations provided in the issued U.S. Patent, which likewise would anticipate the currently presented claim limitations.
Regarding claim 10, Applicant provides similar limitations as in claim 7 of the issued U.S. Patent, wherein both of the respective claims include the following (similar limitations provided in bold):
A robot, comprising:
a first vision component;
a second vision component that is separate from the first vision component;
one or more processors, and
memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations that include:
generating, at a first time, first vision data using the first vision component,
wherein the first vision data characterizes a portion of an area that the robot is approaching (viewable region);
determining, based on the first vision data, that the portion of the area includes a surface that is traversable by the robot;
causing the robot to operate according to the determination that the portion of the area includes the surface that is traversable by the robot;
generating, at a second time, second vision using the second vision component,
wherein the second vision data characterizes the portion of the area, and
wherein the second time is subsequent to the first time and is during the robot operating according to the determination that the portion of the area includes the surface that is traversable by the robot, and
determining, based on the second vision data, that the portion of the area is not traversable by the robot; and
causing the robot to adjust its operation to operate according to the determination that the portion of the area is not traversable by the robot.
Although the conflicting claims are not identical, they are not patentably distinct from each other because removing inherent and/or unnecessary limitation(s)/step(s) or adding an element and its function would be within the level of one of ordinary skill in the art. It is well settled that the adding or deleting of an element and its function(s) in the claim of the present application is an obvious expedient if the remaining elements perform the same function as before. In re Karlson, 136 USPQ 184 (CCPA 1963). Also note Ex parte Rainu, 168 USPQ 375 (Bd. App. 1969). Omission of a referenced element or step whose function is not needed would be obvious to one of ordinary skill in the art. Examiner further notes that, although the claims are not identical (being slightly broader), they are commensurate in scope with the claim limitations provided in the issued U.S. Patent, which likewise would anticipate the currently presented claim limitations.
Regarding claims 11-18, Applicant provides similar limitations as provided throughout claims 1, 5, 7-9 & 11 of the issued U.S. Patent. Although the conflicting claims are not identical, they are not patentably distinct from each other because removing inherent and/or unnecessary limitation(s)/step(s) or adding an element and its function would be within the level of one of ordinary skill in the art. It is well settled that the adding or deleting of an element and its function(s) as in the claims of the present application is an obvious expedient if the remaining elements perform the same function as before. In re Karlson, 136 USPQ 184 (CCPA 1963). Also note Ex parte Rainu, 168 USPQ 375 (Bd. App. 1969). Omission of a referenced element or step whose function is not needed would be obvious to one of ordinary skill in the art. Examiner further notes that, although the claims are not identical (being slightly broader), they are commensurate in scope with the claim limitations provided in the issued U.S. Patent, which likewise would anticipate the currently presented claim limitations.
Examiner notes that the nonstatutory double patenting rejection(s) provided herein may be overcome with a timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d), provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 6-12 and 15-18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Whitman et al. (US 2021/0041887 A1, hereinafter Whitman).
Regarding claim 1, Whitman teaches a method implemented by one or more processors (Figures 1A-1B, processing hardware 142; at least as in paragraph 0043) of a robot (Figures 1A-1B, robot 100), the method comprising:
generating, at a first time, first vision data using one or more first vision components (Figures 1A-1B, sensors 132; at least as in paragraph 0041, wherein “the robot 100 includes a sensor system 130 with one or more sensors 132, 132a-n (e.g., shown as a first sensor 132, 132a and a second sensor 132, 132b)”) that are connected to the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraph 0044, wherein “perception system 200 is configured to receive the sensor data 134 from the sensor system 130 and to process the sensor data 134 into maps 210, 220, 230, 240”, and additionally as in the referenced sections wherein the “original sensor data 134” is construed as the “first vision data”),
wherein the first vision data characterizes a portion of an area that the robot is approaching (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraphs 0052-0053, wherein “the perception system 200 is configured to classify the voxel map 210 (e.g., classify segments 214) to identify portions that correspond to the ground (i.e., a geometric area that the perception system 200 interprets that the robot 100 can step on), obstacles (i.e., a geometric area that the perception system 200 interprets that may interfere with movement of the robot 100), or neither the ground nor an obstacle (e.g., something above the robot 100 that that can be ignored)”);
determining, based on the first vision data, that the portion of the area includes a surface that is traversable by the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, specifically wherein “ground segment(s) 214G” is/are identified/classified; Examiner notes that the “ground segment(s) 214G” correlate to “a surface that is traversable by the robot”);
causing the robot to operate according to the determination that the portion of the area includes the surface that is traversable by the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraph 0044, wherein “With the maps 210, 220, 230, 240 generated by the perception system 200, the perception system 200 may communicate the maps 210, 220, 230, 240 to the control system 170 in order perform controlled actions for the robot 100, such as moving the robot 100 about the environment 10” and further at least as in Figures 6-9 and related text, wherein the control system of the robot is configured to move the robot about the environment according to the respective map(s) identifying ground segments (i.e. traversable surface(s)) and obstacle segments (i.e. non-traversable area(s)));
generating, at a second time, second vision data that characterizes the portion of the area (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraph 0044, wherein “perception system 200 is configured to receive the sensor data 134 from the sensor system 130 and to process the sensor data 134 into maps 210, 220, 230, 240”, and additionally as in the referenced sections wherein the “current sensor data 134” is construed as the “second vision data”),
wherein the second time is subsequent to the first time and is during the robot operating according to the determination that the portion of the area includes the surface that is traversable by the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053, 0069 and 0107-0109, at least as in the referenced sections wherein the “current sensor data 134” is subsequent to the “previous/original sensor data 134”), and
wherein the second vision data is generated using one or more second vision components that are: separate from the one or more first vision components, and also connected to the robot (Figures 1A-1B, sensors 132; at least as in paragraph 0041, wherein “the robot 100 includes a sensor system 130 with one or more sensors 132, 132a-n (e.g., shown as a first sensor 132, 132a and a second sensor 132, 132b)”);
determining, based on the second vision data, that the portion of the area is not traversable by the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, specifically wherein “obstacle segment(s) 214OB” is/are identified/classified; Examiner notes that the “obstacle segment(s) 214OB” correlate to a “portion of the area that is not traversable by the robot”); and
causing the robot to adjust its operation to operate according to the determination that the portion of the area is not traversable by the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraph 0044, wherein “With the maps 210, 220, 230, 240 generated by the perception system 200, the perception system 200 may communicate the maps 210, 220, 230, 240 to the control system 170 in order perform controlled actions for the robot 100, such as moving the robot 100 about the environment 10” and further at least as in Figures 6-9 and related text, wherein the control system of the robot is configured to move the robot about the environment according to the respective map(s) identifying ground segments (i.e. traversable surface(s)) and obstacle segments (i.e. non-traversable area(s))).
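For illustrative purposes only, the two-stage traversability determination recited in claim 1 and mapped above may be summarized in the simplified sketch below. The sketch is not part of Whitman’s disclosure or of the claims as filed; all names, thresholds, and values (e.g., VisionComponent, is_traversable, max_step_height) are hypothetical stand-ins chosen only to illustrate the claimed sequence of operations.

```python
# Illustrative sketch only; hypothetical names and values, not drawn from Whitman
# or from the pending claims beyond the sequence of operations they recite.
from dataclasses import dataclass
from typing import List


@dataclass
class VisionComponent:
    """Stand-in for a vision sensor (e.g., a camera or a LIDAR unit) connected to the robot."""
    name: str
    readings: List[float]  # simulated observed heights (meters) for the approached portion of the area
    index: int = 0

    def capture(self) -> float:
        """Generate vision data characterizing the portion of the area at the current time."""
        reading = self.readings[min(self.index, len(self.readings) - 1)]
        self.index += 1
        return reading


def is_traversable(observed_height: float, max_step_height: float = 0.10) -> bool:
    """Classify the observed portion of the area as traversable ground or a non-traversable obstacle."""
    return observed_height <= max_step_height


def run_two_stage_check(first: VisionComponent, second: VisionComponent) -> List[str]:
    """Plan from the first vision data, then re-evaluate with the separate second
    component while the robot is already operating on the initial determination."""
    actions = []
    first_data = first.capture()    # first time: first vision data from the first vision component
    if is_traversable(first_data):
        actions.append("operate: approach area determined traversable from first vision data")
    second_data = second.capture()  # second, subsequent time, during operation
    if not is_traversable(second_data):
        actions.append("adjust operation: area determined not traversable from second vision data")
    return actions


if __name__ == "__main__":
    camera = VisionComponent("camera", readings=[0.05])  # distant view suggests flat ground
    lidar = VisionComponent("lidar", readings=[0.35])    # closer scan reveals an obstacle
    for action in run_two_stage_check(camera, lidar):
        print(action)
```

Under the hypothetical inputs shown, the robot first operates on the initial determination that the area is traversable and then adjusts its operation once the second, separate component indicates the area is not traversable.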
Regarding claim 2, Whitman further teaches wherein determining, based on the second vision data, that the portion of the area includes the surface that is traversable by the robot includes:
identifying, based on the second vision data, one or more particular objects that are present in the area (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110), and
determining a height of the one or more particular objects relative to a ground surface that is supporting the robot when the one or more second vision components captured the second vision data (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110),
wherein determining that the portion of the area includes the surface that is traversable by the robot is at least partially based on the height of the one or more particular objects (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110).
Regarding claim 3, Whitman further teaches wherein determining, based on the second vision data, that the portion of the area includes the surface that is traversable by the robot includes:
processing portions of the second vision data that do not correspond to one or more particular objects identified, via the second vision data, as present in the area (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110), and
determining that the portion of the area includes the surface that is traversable by the robot at least partially based on processing the portions of the second vision data that do not correspond to one or more particular objects identified (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110).
Regarding claim 6, Whitman further teaches wherein the one or more second vision components include a LIDAR device (at least as in paragraphs 0041 and 0059).
Regarding claim 7, Whitman further teaches wherein the one or more first vision components include a camera (at least as in paragraphs 0041 and 0059).
Regarding claim 8, Whitman further teaches wherein the one or more second vision components consist of the LIDAR device and wherein the one or more first vision components consist of the camera (at least as in paragraphs 0041 and 0059).
Regarding claim 9, Whitman further teaches wherein determining that the portion of the area includes the surface that is traversable by the robot includes determining whether the robot can autonomously drive over the surface (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110).
Regarding claim 10, Whitman teaches a robot (Figures 1A-1B, robot 100), comprising:
a first vision component (Figures 1A-1B, sensors 132; at least as in paragraph 0041, wherein “the robot 100 includes a sensor system 130 with one or more sensors 132, 132a-n (e.g., shown as a first sensor 132, 132a and a second sensor 132, 132b)”);
a second vision component that is separate from the first vision component (Figures 1A-1B, sensors 132; at least as in paragraph 0041, wherein “the robot 100 includes a sensor system 130 with one or more sensors 132, 132a-n (e.g., shown as a first sensor 132, 132a and a second sensor 132, 132b)”);
one or more processors (Figures 1A-1B, processing hardware 142; at least as in paragraph 0043), and
memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations (Figures 1A-1B, memory hardware 144; at least as in paragraph 0043) that include:
generating, at a first time, first vision data using the first vision component (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraph 0044, wherein “perception system 200 is configured to receive the sensor data 134 from the sensor system 130 and to process the sensor data 134 into maps 210, 220, 230, 240”, and additionally as in the referenced sections wherein the “original sensor data 134” is construed as the “first vision data”),
wherein the first vision data characterizes a portion of an area that the robot is approaching (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraphs 0052-0053, wherein “the perception system 200 is configured to classify the voxel map 210 (e.g., classify segments 214) to identify portions that correspond to the ground (i.e., a geometric area that the perception system 200 interprets that the robot 100 can step on), obstacles (i.e., a geometric area that the perception system 200 interprets that may interfere with movement of the robot 100), or neither the ground nor an obstacle (e.g., something above the robot 100 that that can be ignored)”);
determining, based on the first vision data, that the portion of the area includes a surface that is traversable by the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, specifically wherein “ground segment(s) 214G” is/are identified/classified; Examiner notes that the “ground segment(s) 214G” correlate to “a surface that is traversable by the robot”);
causing the robot to operate according to the determination that the portion of the area includes the surface that is traversable by the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraph 0044, wherein “With the maps 210, 220, 230, 240 generated by the perception system 200, the perception system 200 may communicate the maps 210, 220, 230, 240 to the control system 170 in order perform controlled actions for the robot 100, such as moving the robot 100 about the environment 10” and further at least as in Figures 6-9 and related text, wherein the control system of the robot is configured to move the robot about the environment according to the respective map(s) identifying ground segments (i.e. traversable surface(s)) and obstacle segments (i.e. non-traversable area(s)));
generating, at a second time, second vision using the second vision component (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraph 0044, wherein “perception system 200 is configured to receive the sensor data 134 from the sensor system 130 and to process the sensor data 134 into maps 210, 220, 230, 240”, and additionally as in the referenced sections wherein the “current sensor data 134” is construed as the “second vision data”),
wherein the second vision data characterizes the portion of the area (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraphs 0052-0053, wherein “the perception system 200 is configured to classify the voxel map 210 (e.g., classify segments 214) to identify portions that correspond to the ground (i.e., a geometric area that the perception system 200 interprets that the robot 100 can step on), obstacles (i.e., a geometric area that the perception system 200 interprets that may interfere with movement of the robot 100), or neither the ground nor an obstacle (e.g., something above the robot 100 that that can be ignored)”), and
wherein the second time is subsequent to the first time and is during the robot operating according to the determination that the portion of the area includes the surface that is traversable by the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053, 0069 and 0107-0109, at least as in the referenced sections wherein the “current sensor data 134” is subsequent to the “previous/original sensor data 134”), and
determining, based on the second vision data, that the portion of the area is not traversable by the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, specifically wherein “obstacle segment(s) 214OB” is/are identified/classified; Examiner notes that the “obstacle segment(s) 214OB” correlate to a “portion of the area that is not traversable by the robot”); and
causing the robot to adjust its operation to operate according to the determination that the portion of the area is not traversable by the robot (Figures 1A-1B, 2C & 6-9; at least as in paragraphs 0044-0045, 0049, 0052-0053 and 0107-0109, at least as in paragraph 0044, wherein “With the maps 210, 220, 230, 240 generated by the perception system 200, the perception system 200 may communicate the maps 210, 220, 230, 240 to the control system 170 in order perform controlled actions for the robot 100, such as moving the robot 100 about the environment 10” and further at least as in Figures 6-9 and related text, wherein the control system of the robot is configured to move the robot about the environment according to the respective map(s) identifying ground segments (i.e. traversable surface(s)) and obstacle segments (i.e. non-traversable area(s))).
Regarding claim 11, Whitman further teaches wherein determining, based on the second vision data, that the portion of the area includes the surface that is traversable by the robot includes:
identifying, based on the second vision data, one or more particular objects that are present in the area (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110), and
determining a height of the one or more particular objects relative to a ground surface that is supporting the robot when the one or more second vision components captured the second vision data (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110),
wherein determining that the portion of the area includes the surface that is traversable by the robot is at least partially based on the height of the one or more particular objects (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110).
Regarding claim 12, Whitman further teaches wherein determining, based on the second vision data, that the portion of the area includes the surface that is traversable by the robot includes:
processing portions of the second vision data that do not correspond to one or more particular objects identified, via the second vision data, as present in the area (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110), and
determining that the portion of the area includes the surface that is traversable by the robot at least partially based on processing the portions of the second vision data that do not correspond to one or more particular objects identified (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110).
Regarding claim 15, Whitman further teaches wherein the second vision component is a LIDAR device (at least as in paragraphs 0041 and 0059).
Regarding claim 16, Whitman further teaches wherein the first vision component is a stereographic camera (at least as in paragraphs 0041 and 0059).
Regarding claim 17, Whitman further teaches wherein the first vision component is a stereographic camera (at least as in paragraphs 0041 and 0059).
Regarding claim 18, Whitman further teaches wherein determining that the portion of the area includes the surface that is traversable by the robot includes determining whether the robot can autonomously drive over the surface (Figures 2C, 4A-4B & 6-9; at least as in paragraphs 0052-0053, 0086-0089 and 0107-0110).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4-5 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Whitman et al. (US 2021/0041887 A1, hereinafter Whitman) in view of Rankawat et al. (US 2019/0286153 A1, hereinafter Rankawat).
The teachings of Whitman have been discussed above.
Regarding claim 4, Whitman is silent regarding wherein determining, based on the first vision data, that the portion of the area is not traversable by the robot comprises processing the first vision data using a machine learning model. Rankawat, in the same field of endeavor, teaches that an autonomous/robotic vehicle acquires image data of the environment around said autonomous/robotic vehicle and utilizes said image data with one or more machine learning models to determine drivable free-space and non-drivable space proximate to said autonomous/robotic vehicle (Figures 1A & 2; at least as in paragraphs 0039-0041, 0046 and 0114-0119). Therefore, it would have been obvious to one of ordinary skill in the art, at the effective filing date of the instant invention, to modify the teachings of Whitman to include Rankawat’s teachings of employing machine learning models with received/acquired imaging data from the autonomous/robotic vehicle to determine drivable free-space(s) and/or non-drivable space(s), since Rankawat teaches that such systems/methods are computationally less expensive, more contextually informative, more efficient to run in real time, and provide improved accuracy in safely navigating an autonomous/robotic vehicle through a real-world physical environment.
Regarding claim 5, in view of the above combination of Whitman and Rankawat, Rankawat further teaches wherein the machine learning model is trained using one or more instances of training data that include vision data characterizing one or more particular surfaces and label data characterizing drivability of the one or more particular surfaces (Figures 1A & 2; at least as in paragraphs 0039-0041, 0046 and 0114-0119).
Regarding claim 13, Whitman is silent regarding wherein determining, based on the first vision data, that the portion of the area is not traversable by the robot comprises processing the first vision data using a machine learning model. Rankawat, in the same field of endeavor, teaches that an autonomous/robotic vehicle acquires image data of the environment around said autonomous/robotic vehicle and utilizes said image data with one or more machine learning models to determine drivable free-space and non-drivable space proximate to said autonomous/robotic vehicle (Figures 1A & 2; at least as in paragraphs 0039-0041, 0046 and 0114-0119). Therefore, it would have been obvious to one of ordinary skill in the art, at the effective filing date of the instant invention, to modify the teachings of Whitman to include Rankawat’s teachings of employing machine learning models with received/acquired imaging data from the autonomous/robotic vehicle to determine drivable free-space(s) and/or non-drivable space(s), since Rankawat teaches that such systems/methods are computationally less expensive, more contextually informative, more efficient to run in real time, and provide improved accuracy in safely navigating an autonomous/robotic vehicle through a real-world physical environment.
Regarding claim 14, in view of the above combination of Whitman and Rankawat, Rankawat further teaches wherein the machine learning model is trained using one or more instances of training data that include vision data characterizing one or more particular surfaces and label data characterizing drivability of the one or more particular surfaces (Figures 1A & 2; at least as in paragraphs 0039-0041, 0046 and 0114-0119).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See attached PTO-892 – Notice of References Cited form. Examiner additionally notes the following prior art references, which are in the same field of endeavor as the instant invention and which also read on several of the currently provided claim limitations above:
US 10,583,562 B2, issued to Stout et al, which is directed towards a method and system for complete coverage of a surface by an autonomous robot.
US 10,369,696 B1, issued to Russell et al, which is directed towards a spatiotemporal robot reservation system and method for a robot to traverse through a given spatial region.
US 2008/0059015 A1, to Whittaker et al, which is directed towards systems, methods and apparatuses for high-speed navigation of terrain by an unmanned robot.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN L SAMPLE whose telephone number is (571)270-5925. The examiner can normally be reached Monday-Friday 7:00am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Mott can be reached at (571)270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN L SAMPLE/
Primary Examiner, Art Unit 3657