Prosecution Insights
Last updated: April 17, 2026
Application No. 18/366,761

LIGHT WEIGHT AND REAL TIME SLAM FOR ROBOTS

Non-Final OA: §102, §103, Double Patenting

Filed: Aug 08, 2023
Examiner: TRAN, DALENA
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: unknown
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 88%, above average (943 granted / 1076 resolved; +35.6% vs TC avg)
Interview Lift: +9.7% (a moderate, roughly +10% lift among resolved cases with interview)
Typical Timeline: 2y 10m average prosecution; 17 applications currently pending
Career History: 1093 total applications across all art units
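
The headline figures above reduce to simple ratios. Here is a minimal sketch of the arithmetic (Python; reading "+35.6% vs TC avg" as a percentage-point gap is our assumption, since the page does not define its formula):

```python
# Career allow rate from the examiner's resolved docket.
granted, resolved = 943, 1076

allow_rate = granted / resolved   # 0.8764... -> displayed as 88%
print(f"Career allow rate: {allow_rate:.1%}")

# Assumption: "+35.6% vs TC avg" is a percentage-point gap, which
# would put the Tech Center average allow rate near 52%.
implied_tc_avg = allow_rate - 0.356
print(f"Implied TC average: {implied_tc_avg:.1%}")
```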

Statute-Specific Performance

§101: 10.3% (-29.7% vs TC avg)
§102: 29.4% (-10.6% vs TC avg)
§103: 35.1% (-4.9% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Tech Center average is an estimate. Based on career data from 1076 resolved cases.
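
One consistency check worth noting: if each "vs TC avg" delta is read as a percentage-point difference, every statute row implies the same Tech Center average of 40.0%, which fits the note that the average is a single estimate rather than per-statute data. A small sketch under that assumption (the interpretation of the deltas is ours, not stated on the page):

```python
# Examiner rate and delta vs Tech Center average, per statute,
# copied from the list above (percentage points).
rows = {
    "§101": (10.3, -29.7),
    "§102": (29.4, -10.6),
    "§103": (35.1, -4.9),
    "§112": (15.5, -24.5),
}

for statute, (rate, delta) in rows.items():
    implied_tc_avg = rate - delta   # 40.0 in every row
    print(f"{statute}: examiner {rate:.1f}%, implied TC avg {implied_tc_avg:.1f}%")
```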

Office Action

§102 §103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant's election of invention II (claims 42-114) in the reply filed on 11/4/25 is acknowledged. Because applicant did not distinctly and specifically point out the supposed errors in the restriction requirement, the election has been treated as an election without traverse (MPEP § 818.01(a)). Claims 42-114 are pending. Non-elected claims 1-41 are withdrawn from consideration and should be cancelled in the reply to this Office action.

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claim 42 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 29 of U.S. Patent No. 11,768,504 (referred to as '504). Although the claims at issue are not identical, they are not patentably distinct from each other because the subject matter of claim 42 of Application No. 18/366,761 would have been anticipated by the invention defined in claim 29 of '504. Claim 29 of '504 has all the limitations of claim 42 of the current application; the difference is that claim 29 of '504 additionally includes the same limitations recited in claim 43 of the present application.

Claim 43 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 29 of U.S. Patent No. 11,768,504 ('504). Although the claims at issue are not identical, they are not patentably distinct from each other because the subject matter of claim 43 of Application No. 18/366,761 would have been anticipated by the invention defined in claim 29 of '504. Claim 29 of '504 has all the limitations of claim 43 of the current application.

Claims 44-64, which all depend from claim 42, are rejected as above.

Claim 65 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 30 of U.S. Patent No. 11,768,504 ('504). Although the claims at issue are not identical, they are not patentably distinct from each other because the subject matter of claim 65 of Application No. 18/366,761 would have been anticipated by the invention defined in claim 30 of '504. Claim 30 of '504 has all the limitations of claim 65 of the current application; the difference is that claim 30 of '504 additionally includes the same limitations recited in claim 66 of the present application.

Claim 66 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 30 of U.S. Patent No. 11,768,504 ('504). Although the claims at issue are not identical, they are not patentably distinct from each other because the subject matter of claim 66 of Application No. 18/366,761 would have been anticipated by the invention defined in claim 30 of '504. Claim 30 of '504 has all the limitations of claim 66 of the current application.

Claims 67-114, which depend from claim 65, are rejected as above.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 42-43, 45-50, 52-54, 57, 61, 64-66, 68-82, 84-85, 87-94, 97-100, and 110-114 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Fong et al. (US 2019/0212752 A1).

As per claim 42, Fong et al.
disclose a tangible, non-transitory, machine readable medium storing instructions that when executed by a processor of a cleaning robot effectuates operations comprising: capturing, by a LIDAR of the cleaning robot, LIDAR data as the cleaning robot performs work within an environment of the cleaning robot, wherein the LIDAR data is indicative of distance from a perspective of the LIDAR to obstacles immediately surrounding the cleaning robot and within reach of a maximum range of the LIDAR (see at least [0141-0143]: the cleaning robot performs work within an environment of the cleaning robot, and the obstacle detection sensors detect distance to obstacles); generating, by the processor, a first iteration of a map of the environment in real time at a first position of the cleaning robot based on at least a portion of any of the LIDAR data and sensor data captured by sensors of the cleaning robot, wherein the map is a bird's-eye view of the environment (see at least [0143-0145]: a first iteration of a map of the environment is generated in real time; the mapping sensors include, for example, SLAM sensors, dead reckoning sensors, and obstacle detection and avoidance sensors, and the controller 706 constructs a two-dimensional map of the floor surface of the home 300, including positions of portions of the home 300 that the mobile cleaning robot 102 can traverse); capturing, by at least some of the sensors of the cleaning robot, sensor data from different positions within the environment as the cleaning robot performs work in the environment, wherein: newly captured sensor data partly overlaps with previously captured sensor data; at least a portion of the newly captured sensor data comprises distances to obstacles that were not visible by the sensors from a previous position of the robot from which the previously captured sensor data was obtained; and the newly captured sensor data is integrated into a previous iteration of the map to generate a larger map of the environment (see at least [0020-0025]: the at least one sensor is controlled to sense the environment to identify a first set of features in the environment, and the map is updated to add features sensed by the at least one sensor that are not already on the map; and para. [0077-0078]: a map merge module 124 analyzes two or more versions of the persistent map (e.g., one version is the current map, and another version is an updated map provided by one of the robots) to generate a merged version of the persistent map); capturing, by at least one of an IMU sensor, a gyroscope, and a wheel encoder of the cleaning robot, movement data indicative of movement of the cleaning robot (see at least [0144]: an IMU sensor and gyroscope capture movement data indicative of movement of the mobile robot); aligning and integrating, with the processor, newly captured LIDAR data captured from consecutive positions of the cleaning robot with previously captured LIDAR data captured from previous positions of the cleaning robot at overlapping points between the newly captured LIDAR data and the previously captured LIDAR data (see at least [0020-0025] and [0077-0078], discussed above); generating, by the processor, additional iterations of the map based on at least a portion of any of the newly captured LIDAR data and the newly captured sensor data captured as the cleaning robot traverses into new and undiscovered areas of the environment, wherein successive iterations of the map are larger in size due to the addition of newly discovered areas (see at least [0003-0010]: the sensor is controlled to sense the environment to identify a set of features, and the persistent map is updated to add representations of features sensed by the at least one sensor that are not already on the persistent map; and para. [0077-0081]); identifying, by the processor, a room in the map based on at least a portion of any of the LIDAR data, the sensor data, and the movement data (see at least [0033-0037] and [0079-0083]: each of the plurality of rooms in the map is labeled); localizing, by the processor, the cleaning robot within the map of the environment in real time and simultaneously to generating the map based on at least a portion of any of the LIDAR data, at least some of the sensor data, and the movement data (see at least [0124-0128]: the cleaning robot is localized within the map of the environment in real time and simultaneously with generating the map); planning, by the processor, a path of the cleaning robot; and actuating, by the processor, the cleaning robot to drive along a trajectory that follows along the planned path (see at least [0116-0121] and [0141-0143]: the mobile robot navigates along a trajectory that follows along the planned path to perform a cleaning mission), wherein: a coverage tracker executed by the processor deems an operational session complete and transitions the cleaning robot to a state that actuates the cleaning robot to find a charging station (see at least [0150-0151]: the sensing system 708 also includes condition sensors indicative of conditions of the mobile cleaning robot 102, including, for instance, battery charge state sensors to detect an amount of charge or capacity for charge on a power source); the map is stored in a memory accessible to the processor during a subsequent operational session of the cleaning robot (see at least [0077-0081]: the map merge module 124 implements a "transfer function" for updating the map 120; the robot starts its mission with a user-defined set of rooms, executes the mission, and then executes the room segmentation algorithm at the end of the mission; also para. [0090-0093]); the map is transmitted to an application of a smart phone device previously paired with the processor of the robot using a wireless card coupled with the single microcontroller via the internet or a local network; and the application is configured to display the map on a screen of the smart phone (see at least [0088-0093]: the user of the robots 102a, 102b has approved the robots to transmit map information about the home to the remote computing device 116).

As per claim 43, Fong et al. disclose determining, by the processor, all areas of the environment are discovered and included in the map based on at least all the newly captured LIDAR data overlapping with the previously captured LIDAR data (see at least [0020-0025] and [0077-0078], discussed above); the cleaning robot is actuated to drive along the trajectory that follows along the planned path by providing pulses to one or more electric motors of wheels of the cleaning robot (see at least [0116-0121] and [0141-0143], discussed above); the processor is a processor of a single microcontroller; the processor of the robot executes a simultaneous localization and mapping task in concurrence with a path planning task, an obstacle avoidance task, a coverage tracker task, a control task, and a cleaning operation task by time-sharing computational resources of the single microcontroller (see at least [0143-0145], discussed above); a scheduler assigns a time slice of the single microcontroller to each of the simultaneous localization and mapping task, the path planning task, the obstacle avoidance task, the coverage tracker task, the control task, and the cleaning operation task according to an importance value assigned to each task (see at least [0071-0073]: a schedule for robot tasks is maintained); and the scheduler preempts lower priority tasks with higher priority tasks, preempts all tasks by an interrupt service request when invoked, and runs a routine associated with the interrupt service request (see at least [0112-0115] and [0127-0129]: the scheduler assigns the priority of tasks for the robot).

As per claim 45, Fong et al. disclose the application is further configured to display the map in 2D and 3D; and the 3D map includes furniture and appliances within the environment (see at least [0192-0193] and [0135-0140]: the map includes the dining room, table, and furniture).

As per claim 46, Fong et al. disclose the application is configured to: receive at least one input designating an adjustment to the map; a new subarea within the map; a schedule for cleaning; an instruction to start cleaning; a no-go zone; a label for a subarea within the map; and a suction power; and display a robot status; a quantity of total area cleaned; a cleaning duration; a cleaning history; and a battery level (see at least [0003-0009]: input designating an update of the map; para. [0033-0041]: instructions including a schedule for cleaning, a label for a subarea within the map, and a keep-out zone or no-touch zone; also para. [0071]: the conditions of the debris bins and batteries of the robots).

As per claim 47, Fong et al. disclose the application is further configured to: receive at least one input designating an addition, deletion, rotation, or movement of a boundary within the map; a quiet mode; a deep clean; a number of cleaning passes; a privacy setting; an instruction for the robot to clean an area in close proximity to a particularly labelled object; a deletion or an addition of a robot paired with the application; an instruction for the cleaning robot to empty a bin of the cleaning robot into a bin of a charging station; and an instruction for the cleaning robot to dock at the charging station; display a debris map; issues encountered; an estimated cleaning duration required to clean the environment or a subarea of the environment; an object and object type of the object; and firmware information; and the application is configured to execute an over the air firmware update (see at least [0033-0041]: instructions including a schedule for cleaning, a label for a subarea within the map, and a keep-out zone or no-touch zone; and para. [0071] and [0150]: the conditions of the debris bins and batteries of the robots).

As per claim 48, Fong et al. disclose the application is further configured to receive at least one input designating an instruction for a second robot to execute a second task after the cleaning robot completes a first task (see at least [0011-0018], [0033-0037], and [0084-0085]: an instruction for a second robot to execute a second task after the cleaning robot completes a first task).

As per claim 49, Fong et al. disclose the application is further configured to receive at least one input designating a do not disturb status for the cleaning robot (see at least [0033-0041]: input designating the keep-out zone or a no-touch zone).

As per claim 50, Fong et al. disclose the application is further configured to: receive at least one input designating an order of coverage of rooms by the cleaning robot; and display a path of the cleaning robot (see at least [0079-0083]: the user designates an order of coverage of rooms for the robot cleaning).

As per claim 52, Fong et al. disclose the cleaning robot further comprises a microphone; and the operations further comprise: detecting, by the processor, a direction from which a verbal command is received from a user based on at least the acoustic data captured by the microphone (see at least [0074-0076]: the robot comprises a microphone and receives a verbal command from a user).

As per claim 53, Fong et al. disclose the cleaning robot is paired with a home assistant configured to receive a verbal instruction for the cleaning robot to clean an area in close proximity to a particularly labelled object or a subarea of the environment; and the operations further comprise: executing, with the cleaning robot, the instruction (see at least [0119-0121]: the cleaning robot is paired with a home assistant configured to receive a verbal instruction from a user for the robot to clean).

As per claim 54, Fong et al. disclose actuating, by the processor, the robot to perform work based on a detected presence or absence of the user by the processor or the application, wherein the cleaning robot is actuated to operate within the environment when the user is absent from the environment (see at least [0120-0121]: the robot performs work based on a detected presence or absence of the user).

As per claim 57, Fong et al. disclose the operations further comprise: inferring, by the processor, locations with debris accumulation based on second sensor data captured with a second sensor of the sensors; and adjusting, by the processor, the path of the cleaning robot based on the locations with debris accumulation (see at least [0163]: the controller can control the vacuum cleaning robot to perform the spot cleaning operation in response to detection of the debris by the debris sensor).

As per claim 61, Fong et al. disclose an ultrasonic oscillator (see at least [0156]).

As per claim 64, Fong et al. disclose the cleaning robot provides a notification to a user by at least one of generating a noise, a visual light indicator, or transmitting the notification to the application (see at least [0147]: the cleaning robot provides a notification to a user by causing emission of an audible alert).

As per claim 65, Fong et al. disclose a cleaning robot, comprising: a chassis; a set of wheels; a LIDAR; sensors; a processor (see at least [0074-0076]: a cleaning robot comprising a chassis, a set of wheels, a camera, sensors, and a processor); and a tangible, non-transitory, machine readable medium storing instructions that when executed by the processor effectuates operations comprising: capturing, by the LIDAR, LIDAR data as the cleaning robot performs work within an environment of the cleaning robot, wherein the LIDAR data is indicative of distance from a perspective of the LIDAR to obstacles immediately surrounding the cleaning robot and within reach of a maximum range of the LIDAR (see at least [0141-0143], discussed above); generating, by the processor, a first iteration of a map of the environment in real time at a first position of the cleaning robot based on the LIDAR data and at least some sensor data captured by sensors, wherein the map is a bird's-eye view of the environment (see at least [0143-0145], discussed above); capturing, by at least some of the sensors, sensor data from different positions within the environment as the cleaning robot performs work in the environment, wherein: newly captured sensor data partly overlaps with previously captured sensor data; at least a portion of the newly captured sensor data comprises distances to obstacles that were not visible by the sensors from a previous position of the robot from which the previously captured sensor data was obtained; and the newly captured sensor data is integrated into a previous iteration of the map to generate a larger map of the environment (see at least [0020-0025] and [0077-0078], discussed above); capturing, by at least one of an IMU sensor, a gyroscope, and a wheel encoder of the cleaning robot, movement data indicative of movement of the cleaning robot (see at least [0144], discussed above); identifying, by the processor, a room in the map based on at least a portion of any of the LIDAR data, the sensor data, and the movement data (see at least [0033-0037] and [0079-0083], discussed above); planning, by the processor, a path of the cleaning robot; and actuating, by the processor, the cleaning robot to drive along a trajectory that follows along the planned path (see at least [0116-0121] and [0141-0143], discussed above); the map is stored in a memory accessible to the processor during a subsequent operational session of the cleaning robot (see at least [0077-0081] and [0090-0093], discussed above); and the application is configured to display the map on a screen of the smart phone (see at least [0088-0093], discussed above).

As per claim 66, Fong et al.
disclose the operations further comprise: aligning and integrating, with the processor, newly captured LIDAR data captured from consecutive positions of the cleaning robot with previously captured LIDAR data captured from previous positions of the cleaning robot at overlapping points between the newly captured LIDAR data and the previously captured LIDAR data (see at least [0020-0025] and [0077-0078], discussed above); generating, by the processor, additional iterations of the map based on the newly captured LIDAR data and at least some of the newly captured sensor data captured as the cleaning robot traverses into new and undiscovered areas of the environment, wherein successive iterations of the map are larger in size due to the addition of newly discovered areas (see at least [0003-0010] and [0077-0081], discussed above); determining, by the processor, all areas of the environment are discovered and included in the map based on at least all the newly captured LIDAR data overlapping with the previously captured LIDAR data (see at least [0020-0025] and [0077-0078], discussed above); and localizing, by the processor, the cleaning robot within the map of the environment in real time and simultaneously to generating the map based on the LIDAR data, at least some of the sensor data, and the movement data (see at least [0124-0128], discussed above); wherein: the cleaning robot is actuated to drive along the trajectory that follows along the planned path by providing pulses to one or more electric motors of wheels of the cleaning robot (see at least [0116-0121] and [0141-0143], discussed above); the processor is a processor of a single microcontroller; the processor of the robot executes a simultaneous localization and mapping task in concurrence with a path planning task, an obstacle avoidance task, a coverage tracker task, a control task, and a cleaning operation task by time-sharing computational resources of the single microcontroller (see at least [0143-0145], discussed above); a scheduler assigns a time slice of the single microcontroller to each of the simultaneous localization and mapping task, the path planning task, the obstacle avoidance task, the coverage tracker task, the control task, and the cleaning operation task according to an importance value assigned to each task; the scheduler preempts lower priority tasks with higher priority tasks, preempts all tasks by an interrupt service request when invoked, and runs a routine associated with the interrupt service request (see at least [0112-0115] and [0127-0129]: the scheduler assigns the priority of tasks for the robot); a coverage tracker executed by the processor deems an operational session complete and transitions the cleaning robot to a state that actuates the cleaning robot to find a charging station (see at least [0098]: at the end of the cleaning sessions, the robots return to their respective docking stations to recharge their batteries); and the map is transmitted to an application of a smart phone device previously paired with the processor of the robot using a wireless card coupled with the single microcontroller via the internet or a local network (see at least [0088-0093]: the robots transmit map information to the remote computing device).

As per claim 68, Fong et al. disclose the operations further comprise: capturing, with an image sensor coupled with a camera controller disposed on the cleaning robot, images of the environment as the cleaning robot moves within the environment; extracting, with the processor, features of at least one object captured in the images; determining, with the processor, an object type of the at least one object based on the features extracted and features of different object types in an object library, wherein the possible object types comprise at least a sock, a shoe, feces, and a cord (see at least [0094-0098]: images of the environment are captured with a camera as the cleaning robot moves within the environment, and objects are recognized, such as whether the object is a chair, a table, or a bed).

As per claim 69, Fong et al. disclose at least some information in the object library is based on past object type classifications of objects encountered by other robots (see at least [0094-0098], discussed above).

As per claim 70, Fong et al. disclose the cleaning robot further comprises a structured light emitter; the structured light emitter emits the structured light onto the at least one object; the image sensor captures the structured light emitted onto the at least one object; the processor determines the object type of the at least one object (see at least [0094-0098], discussed above); and the processor determines a distance to objects based on a distortion of or pixels corresponding with the reflection of the structured light emitted onto the at least one object captured in the image (see at least [0141-0143]: the distance to an object is detected).

As per claim 71, Fong et al. disclose at least some information relating to the at least one object is added to the object library for use in improving future classifications of object types of objects encountered by the cleaning robot; and the application is further configured to receive at least one input designating consent to add the at least some information relating to the at least one object to the object library (see at least [0094-0098]: the robot can update the map to show that at time t1 there is no chair at the first location, and at time t1 there is a chair at the second location).

As per claim 72, Fong et al. disclose the application is further configured to display the object type of the at least one object within the map at a location at which the at least one object was observed and a charging station of the cleaning robot within the map (see at least [0095-0098]: the map displays whether the object type is a chair, a table, or a bed, and a charging station of the cleaning robot within the map).

As per claim 73, Fong et al. disclose determining, by the processor, a size of the at least one object (see at least [0095]: the object is a chair, a table, or a bed; it is apparent that a chair, table, or bed each have a size).

As per claim 74, Fong et al. disclose to display at least a portion of the images captured by the image sensor (see at least [0119-0121]: a portion of the images captured by the camera).

As per claim 75, Fong et al. disclose receive at least one input designating a preference associated with the at least one object; and the preference comprises at least a preference for the cleaning robot to avoid the at least one object (see at least [0127-0128]: the robot schedules the cleaning task according to a time preference).

As per claim 76, Fong et al. disclose display and suggest a no-go zone surrounding the at least one object, the no-go zone being an area the cleaning robot is not permitted to enter (see at least [0040-0041]: the keep-out zone or a no-touch zone).

As per claim 77, Fong et al. disclose display the map in 2D and 3D; and the 3D map includes furniture and appliances within the environment (see at least [0192-0193] and [0135-0140]: the map includes the dining room, table, and furniture).

As per claim 78, Fong et al. disclose the application is configured to: receive at least one input designating an adjustment to the map; a new subarea within the map; a schedule for cleaning; an instruction to start cleaning; a no-go zone; a label for a subarea within the map; and a suction power; and display a robot status; a quantity of total area cleaned; a cleaning duration; a cleaning history; and a battery level (see at least [0003-0009], [0033-0041], and [0071], discussed above with respect to claim 46).

As per claim 79, Fong et al. disclose the application is further configured to: receive at least one input designating an addition, deletion, rotation, or movement of a boundary within the map; a quiet mode; a deep clean; a number of cleaning passes; a privacy setting; an instruction for the robot to clean an area in close proximity to a particularly labelled object; a deletion or an addition of a robot paired with the application; an instruction for the cleaning robot to empty a bin of the cleaning robot into a bin of a charging station; and an instruction for the cleaning robot to dock at the charging station; and display a debris map; issues encountered; an estimated cleaning duration required to clean the environment or a subarea of the environment; an object and object type of the object; and firmware information (see at least [0033-0041], [0071], and [0150], discussed above with respect to claim 47).

As per claim 80, Fong et al. disclose the application is further configured to receive at least one input designating an instruction for a second robot to execute a second task after the cleaning robot completes a first task (see at least [0011-0018], [0033-0037], and [0084-0085], discussed above with respect to claim 48).

As per claim 81, Fong et al. disclose the application is further configured to receive at least one input designating a do not disturb status for the cleaning robot (see at least [0033-0041], discussed above with respect to claim 49).

As per claim 82, Fong et al. disclose the application is further configured to: receive at least one input designating an order of coverage of rooms by the cleaning robot; and display a path of the cleaning robot (see at least [0079-0083], discussed above with respect to claim 50).

As per claim 84, Fong et al. disclose determining, by the processor, a division of the map of the environment into rooms; and the application is configured to display the map divided into the rooms (see at least [0033-0037]: the borders of the rooms in the map are identified).

As per claim 85, Fong et al. disclose labeling, by the processor, rooms within the map; and the application is configured to label the rooms within the map (see at least [0117]: the rooms within the map are labeled).

As per claim 87, Fong et al. disclose the application is configured to execute an over the air firmware update (see at least [0092-0093]: the map update version and computing system).

As per claim 88, Fong et al. disclose detecting, by the processor, a direction from which a verbal command is received from a user based on at least the acoustic data captured by the microphone (see at least [0076]: a user verbal command is captured by the microphone).

As per claim 89, Fong et al. disclose the cleaning robot is paired with a home assistant configured to receive a verbal instruction for the cleaning robot to clean an area in close proximity to a particularly labelled object or a subarea of the environment; and the operations further comprise: executing, with the cleaning robot, the instruction (see at least [0074-0078]).

As per claim 90, Fong et al. disclose the processor generates a map for each level of the environment; and the operations further comprise: determining, by the processor, the level of the environment on which the cleaning robot is located based on at least a portion of at least one of the LIDAR data and the sensor data captured by the sensors and the map generated for each level of the environment (see at least [0033-0037]: borders of the plurality of rooms in the map are identified; and para. [0116-0118]).

As per claim 91, Fong et al. disclose the cleaning robot further comprises a camera; the camera captures video as the cleaning robot moves within the environment; and the application is further configured to display the video captured by the camera (see at least [0074-0078]).

As per claim 92, Fong et al. disclose the cleaning robot further comprises a speaker for video conferencing (see at least [0076]: the robot comprises a microphone).

As per claim 93, Fong et al. disclose the application is further configured to receive at least one input designating a location within the map to which the cleaning robot is to drive; and the operations further comprise: actuating, with the processor, the cleaning robot to drive to the location (see at least [0178-0179]: the robot moves around the house to perform the cleaning tasks).

As per claim 94, Fong et al. disclose actuating, by the processor, the robot to perform work based on a detected presence or absence of the user by the processor or the application, wherein the cleaning robot is actuated to operate within the environment when the user is absent from the environment (see at least [0120-0121], discussed above with respect to claim 54).

As per claim 97, Fong et al. disclose determining, by the processor, a floor type of a floor on which the cleaning robot is driving based on first sensor data captured with a first sensor of the sensors; and actuating, by the processor, a vacuum or a mop of the cleaning robot to activate or deactivate based on the floor type of the floor (see at least [0108-0111]: the robot scrubs or mops depending on the floor area).

As per claim 98, Fong et al. disclose inferring, by the processor, locations with debris accumulation based on second sensor data captured with a second sensor of the sensors; and adjusting, by the processor, the path of the cleaning robot based on the locations with debris accumulation (see at least [0163], discussed above with respect to claim 57).

As per claim 99, Fong et al. disclose inferring, by the processor, an activity level within the environment based on third sensor data; and determining, by the processor, an operational schedule of the cleaning robot based on the activity level within the environment (see at least [0096]: the level of foot traffic in the room).

As per claim 100, Fong et al. disclose the operations further comprise: inferring, by the processor, an environmental characteristic of the environment based on sensor data captured by at least one sensor of the sensors; and associating, by the processor, the environmental characteristic with a location within the map corresponding with a location at which the sensor data was captured (see at least [0045] and [0135-0140]).

As per claim 110, Fong et al. disclose propose a suggested schedule for operating the cleaning robot comprising at least one date and time; and receive at least one input designating approval of the suggested schedule; and the operations further comprise: actuating, by the processor, the cleaning robot to clean according to the suggested schedule, wherein the processor only actuates the cleaning robot to clean according to the suggested schedule after approval of the suggested schedule (see at least [0071-0073]: a suggested schedule for operating the cleaning robot comprising days, weeks, months, or years; and para. [0112-0113]).

As per claims 111-112, Fong et al. disclose the suggested schedule is inferred using a machine learning algorithm; and the machine learning algorithm uses at least a plurality of user inputs historically provided to the application to infer the suggested schedule (see at least [0127-0128]: the user inputs the suggested schedule for the cleaning robot).

As per claim 113, Fong et al. disclose determining the suggested schedule based on a plurality of user inputs designating at least a plurality of schedules previously executed by the cleaning robot at a particular past date and time specified in each of the plurality of schedules (see at least [0071-0072] and [0127-0128]: user inputs designate at least a plurality of schedules previously executed by the cleaning robot).

As per claim 114, Fong et al. disclose the cleaning robot provides a notification to a user by at least one of generating a noise, a visual light indicator, or transmitting the notification to the application (see at least [0147], discussed above with respect to claim 64).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 44 and 67 are rejected under 35 U.S.C. 103 as being unpatentable over Fong et al. (US 2019/0212752 A1) in view of Munich et al. (US 2016/0147230 A1).

As per claims 44 and 67, Fong et al. do not explicitly disclose actuating the robot to traverse the linear segments. However, Munich et al.
disclose actuating the robot to traverse through various combinations of movements (see at least [0082-0083]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining actuating, by the processor, the cleaning robot to traverse a first linear segment; actuating, by the processor, the cleaning robot to rotate 180 degrees in a first rotation comprising traversing a first distance in a direction perpendicular to the first linear segment after starting the first rotation and before finishing the first rotation; actuating, by the processor, the cleaning robot to traverse a second linear segment parallel to the first linear segment; and actuating, by the processor, the cleaning robot to rotate 180 degrees in a second rotation comprising traversing a second distance in a direction perpendicular to the second linear segment after starting the second rotation and before finishing the second rotation, in order for the robot to use simultaneous localization and mapping to perform cleaning tasks effectively.

Claims 51, 58-60, 62-63, 83, and 101-109 are rejected under 35 U.S.C. 103 as being unpatentable over Fong et al. (US 2019/0212752 A1) in view of Romanov et al. (US 2011/0202175 A1).

As per claims 51 and 83, Fong et al. do not explicitly disclose how often the cleaning robot is to empty a bin. However, Romanov et al. disclose the robot returns to one or more stations that allow it to refresh its cleaning mechanism based on area coverage and time of operations (see at least [0425-0427]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining receiving at least one input designating how often the cleaning robot is to empty a bin of the cleaning robot into a bin of a charging station, in order for the robot to refresh its cleaning mechanism.

As per claim 58, Fong et al. do not disclose a fluid reservoir for storing a cleaning fluid. However, Romanov et al. disclose a vacuum; a fluid reservoir for storing a cleaning fluid; and a cloth for receiving the cleaning fluid, wherein the cloth is oriented toward a floor surface (see at least [0178-0180] and [0377-0383]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining a fluid reservoir for storing fluid, in order for the robot to perform the cleaning operation on the floor area effectively.

As per claims 59-60, Fong et al. do not explicitly disclose engaging and disengaging the cloth. However, Romanov et al. disclose the cleaning robot further comprises a means for engaging and disengaging at least the cloth by moving the at least the cloth towards a driving surface of the cleaning robot and away from the driving surface, respectively; the cloth contacts the driving surface when the at least the mopping cloth is engaged and the cloth cannot contact the driving surface when the at least the mopping cloth is disengaged; the at least the cloth is disengaged when a type of the driving surface is carpet; and the operations further comprise: actuating, with the processor, the at least the cloth to engage or disengage based on sensor data captured by at least one sensor of the sensors; and a means to move at least the cloth back and forth in a plane parallel to the floor surface (see at least [0377-0383]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining the engaging and disengaging of the cloth, in order for the robot to perform the cleaning operation on the floor area and ensure good overall coverage of the room or areas to be cleaned.

As per claims 62 and 108, Fong et al. do not disclose vibrating the cloth. However, Romanov et al. disclose a means for vibrating at least the cloth during operation (see at least [0377-0383]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining vibrating the cloth, in order to clean the area effectively.

As per claims 63 and 109, Fong et al. do not explicitly disclose a predetermined quantity of the cleaning fluid is delivered to the cloth at predetermined intervals. However, Romanov et al. disclose a predetermined quantity of the cleaning fluid is delivered to the cloth at predetermined intervals (see at least [0425-0427]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining delivering a predetermined quantity of the cleaning fluid to the cloth at predetermined intervals, in order to ensure the cleaning area is cleaned completely and effectively.

As per claim 101, Fong et al. disclose the cleaning robot comprises a bin for collecting dust; the bin of the cleaning robot comprises a first mechanism for emptying the bin of the cleaning robot manually and at least a portion of a second mechanism for emptying the bin of the cleaning robot automatically to a second bin via an air path from the first bin to the second bin (see at least [0154-0156]). Fong et al. do not explicitly disclose the charging station houses the second bin, the robot charges its battery after emptying the bin, or separating the bin from all electrical components of the cleaning robot to wash the bin of the cleaning robot. However, Romanov et al. disclose a charging station of the cleaning robot houses the second bin; the cleaning robot charges its battery after emptying the bin of the cleaning robot or concurrently while emptying the bin of the cleaning robot; and the first mechanism is used to separate the bin from all electrical components of the cleaning robot to wash the bin of the cleaning robot (see at least [0205] and [0448-0451]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining the charging station housing the second bin, the robot charging its battery after emptying the bin, and separating the bin from all electrical components of the cleaning robot to wash the bin, in order to perform the cleaning operation successfully.

As per claims 102-103, Fong et al. do not disclose the charging station refills a fluid reservoir, or collects and stores waste liquid in a liquid container. However, Romanov et al. disclose the charging station further comprises a first liquid container for storing cleaning fluid; the charging station is configured to refill a fluid reservoir of the cleaning robot; the charging station further comprises a second liquid container for storing waste liquid; and the charging station is further configured to collect and store the waste liquid in the second liquid container (see at least [0360-0363]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining the charging station configured to refill a fluid reservoir of the cleaning robot and the charging station comprising a second liquid container configured to collect and store waste liquid, in order for the robot to perform the cleaning operation in the area.

As per claim 104, Romanov et al. also disclose a vacuum; a fluid reservoir for storing a cleaning fluid; and a cloth for receiving the cleaning fluid, wherein the cloth is oriented toward a floor surface (see at least [0178-0180] and [0323-0333]).

As per claims 105-106, Fong et al. do not explicitly disclose engaging and disengaging the cloth. However, Romanov et al. disclose these limitations as discussed above with respect to claims 59-60 (see at least [0377-0383]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining the engaging and disengaging of the cloth, in order for the robot to perform the cleaning operation on the floor area and ensure good overall coverage of the room or areas to be cleaned.

As per claim 107, Fong et al. disclose at least one ultrasonic oscillator (see at least [0156]).

Claims 55-56 and 95-96 are rejected under 35 U.S.C. 103 as being unpatentable over Fong et al. (US 2019/0212752 A1) in view of Reindle et al. (US 2006/0085095 A1).

As per claims 55 and 95, Fong et al. disclose determining, by the processor, a floor type of a floor on which the cleaning robot is driving based on first sensor data captured with a first sensor of the sensors (see at least [0084-0087]). Fong et al. do not explicitly disclose actuating, by the processor, an adjustment to a vacuum suction power of the cleaning robot based on the floor type of the floor. However, Reindle et al. disclose actuating, by the processor, an adjustment to a vacuum suction power of the cleaning robot based on the floor type of the floor (see at least [0063], [0085-0086], and [0144]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining an adjustment to a vacuum suction power of the cleaning robot based on the floor type, in order to control the amount of suction being drawn by the vacuum depending on the type of floor.

As per claims 56 and 96, Fong et al. disclose determining, by the processor, a floor type of a floor on which the cleaning robot is driving based on first sensor data captured with a first sensor of the sensors (see at least [0084-0087]). Fong et al. do not explicitly disclose an adjustment to a height of a brush of the cleaning robot relative to the floor based on the floor type of the floor. However, Reindle et al. disclose actuating, by the processor, an adjustment to a height of a brush of the cleaning robot relative to the floor based on the floor type of the floor (see at least [0050], [0100], [0102], and [0122-0124]). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by combining an adjustment to a height of a brush of the cleaning robot relative to the floor based on the floor type, in order to provide optimum cleaning and avoid damage to the vacuum cleaner.

Claim 86 is rejected under 35 U.S.C. 103 as being unpatentable over Fong et al. (US 2019/0212752 A1).

As per claim 86, Fong et al. disclose display rooms in the map (see at least [0079-0082]). Fong et al. do not disclose displaying rooms with different colors. However, this is merely an engineering design choice made in order to differentiate different types of rooms in the map of the areas to be cleaned. It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify the teachings of Fong et al. by displaying rooms within the map in different colors, with each room a different color than every other room within the map, in order to divide different areas in the map effectively for easy cleaning operation.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Eickenberg et al. (US 2021/0373564 A1); Zhao et al. (US 2021/0190513 A1); Hu et al. (US 11,034,028); Liu et al. (US 2021/0063577 A1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DALENA TRAN, whose telephone number is (571) 272-6968. The examiner can normally be reached M-F 7AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ADAM MOTT, can be reached at 571-270-5376. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DALENA TRAN/
Primary Examiner, Art Unit 3657

Prosecution Timeline

Aug 08, 2023: Application Filed
Jan 06, 2026: Non-Final Rejection — §102, §103, Double Patenting (current)

Precedent Cases

Applications granted by this same examiner with similar technology

TACTILE ROBOTIC TRAINING PLATFORM (Patent 12600031)
Granted Apr 14, 2026 (2y 5m to grant)

TERMINAL DEVICE (Patent 12594664)
Granted Apr 07, 2026 (2y 5m to grant)

ROBOT SYSTEM COMPRISING ROBOT EQUIPPED WITH DISPLAY UNIT (Patent 12569980)
Granted Mar 10, 2026 (2y 5m to grant)

AUGMENTED REALITY-BASED TASK MANAGEMENT (Patent 12552032)
Granted Feb 17, 2026 (2y 5m to grant)

SYSTEMS AND METHODS FOR ACQUIRING AND MOVING OBJECTS HAVING COMPLEX OUTER SURFACES (Patent 12544915)
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 97% (+9.7%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 1076 resolved cases by this examiner. Grant probability derived from career allow rate.
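
How the 88% base probability and the +9.7% interview lift combine into the 97% "with interview" figure is not documented on this page; both an additive (percentage-point) and a relative reading land near 97%. A minimal sketch of the two candidate formulas (an illustration, not the tool's actual computation):

```python
base = 0.88   # grant probability (career allow rate)
lift = 0.097  # interview lift among resolved cases with interview

additive = base + lift         # 0.977 -> ~97% if truncated
relative = base * (1 + lift)   # 0.965 -> ~97% if rounded
print(f"additive: {additive:.1%}, relative: {relative:.1%}")
```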
