Prosecution Insights
Last updated: April 19, 2026
Application No. 18/963,229

ROBOT CONTROL METHOD, ROBOT AND STORAGE MEDIUM

Non-Final OA: §102, §103, §112, double patenting
Filed: Nov 27, 2024
Examiner: AZHAR, ARSLAN
Art Unit: 3656
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Ecovacs Robotics Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allow Rate: 77% (144 granted / 187 resolved), +25.0% vs TC avg (above average)
Interview Lift: +20.8% on resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 30 applications currently pending
Career History: 217 total applications across all art units
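The headline examiner statistics above are simple ratios over the reported career counts. As a quick sanity check (a minimal sketch; the counts come from this report, and the variable names are illustrative):

```python
# Examiner career counts as reported above.
granted = 144      # career grants
resolved = 187     # resolved cases
pending = 30       # currently pending
total_apps = 217   # total applications across all art units

# Career allow rate: grants as a share of resolved cases.
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # 77.0%

# Resolved plus pending should account for every application on file.
assert resolved + pending == total_apps
```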

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 19.6% (-20.4% vs TC avg)
§112: 16.3% (-23.7% vs TC avg)
Tech Center averages are estimates; figures are based on career data from 187 resolved cases.
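Each statute-level figure pairs the examiner's rate with its delta against the Tech Center average, so the implied TC baseline can be recovered by subtraction. A small sketch using the numbers above (figures are from this report; the baseline is an estimate, as noted):

```python
# (examiner rate %, delta vs TC average %) per statute, from the table above.
stats = {
    "101": (16.7, -23.3),
    "103": (42.3, +2.3),
    "102": (19.6, -20.4),
    "112": (16.3, -23.7),
}

# Implied Tech Center average = examiner rate minus the reported delta.
for statute, (rate, delta) in stats.items():
    print(f"S{statute}: implied TC average {rate - delta:.1f}%")
```

Every implied baseline works out to 40.0%, which suggests the dashboard measures each statute against a single TC-wide estimate rather than per-statute averages.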

Office Action

Rejections: §102, §103, §112, double patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/27/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-5 and 19 are considered indefinite because they contain the conditional clause "if," which creates uncertainty as to whether an event occurs. The use of this clause renders the claims indefinite, and the scope of the claims is unascertainable, because the conditional term "if" is linked to two options, yes or no, so the claim must provide the result of both possibilities. "Only after X happens will Y happen": the condition is a singular event that triggers the indicated response, and the speaker does not know whether the activity in the "if" phrase will occur. One suggested option to correct this issue is to use the word "when" instead of "if." That is, "Anytime X occurs, Y results": that X will occur is expected, and the speaker knows that the activity in the "when" phrase is likely to occur. "When" expresses more certainty than "if." In other words, "if" introduces a possible or unreal situation or condition, while "when" refers to the time of a future situation or condition of which we are certain.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 9-11, 14 and 16-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Artes (US 20200150655, disclosed in IDS submitted on 11/27/2024).

For claim 1, Artes teaches: A robot control method (abstract, disclosing a method for controlling a robot), comprising: determining whether a position of a robot when the robot is hijacked and a position when the robot is released from being hijacked belong to different rooms ([0118], disclosing starting from the position at which the user puts down the robot, the robot begins a global self-localization.
Based on the travelled path recorded while carrying out the global self-localization, the starting position of the robot can be determined and it can thus be established whether the user had started the robot 100 in a virtual exclusion region S. Afterwards, the robot can carry out a task in the virtual exclusion region S as specified by the user. As the robot itself does not enter the virtual exclusion region, when it detects that it has been hijacked from a non-exclusion region and placed in an exclusion region, it determines that it needs to execute a task in this region; as it cannot autonomously enter the exclusion region, it determines that the user has placed it there to work in the region. [0105], disclosing exclusion region being a room); if the position of the robot when the robot is hijacked and the position when the robot is released from being hijacked belong to different rooms, executing a task within a room including the position when the robot is released from being hijacked ([0004-0118], disclosing when the robot determines it has been released in a different room, it executes a task in the different room).

For claim 2, Artes teaches: The method according to claim 1, wherein the task executed in the room including the position when the robot is released from being hijacked is same with a task executed in a room including the position of the robot when the robot is hijacked ([0118], disclosing a cleaning robot, hence it performs a cleaning task before and after release).

For claim 3, Artes teaches: The method according to claim 1, wherein the task executed in the room including the position when the robot is released from being hijacked is different with a task executed in a room including the position of the robot when the robot is hijacked ([0028], disclosing robot performs one or more tasks such as, for example, the cleaning or monitoring of the area of robot deployment or the transport of objects within the area of robot deployment. [0030], disclosing robot can charge at base station. Charging its batteries is also executing a task, and when the robot is hijacked from the base station and released in an exclusion region, a different task, i.e., cleaning/vacuuming, is performed).

For claim 4, Artes teaches: The method according to claim 1, wherein the robot is a sweeping robot ([0042], disclosing a sweeping robot), and the executing a task within a room including the position when the robot is released from being hijacked, comprises: adopting a sweeping mode same with a sweeping mode when the robot being hijacked to execute the task at the position when the robot is released from being hijacked ([0028], disclosing robot performs one or more tasks such as, for example, the cleaning or monitoring of the area of robot deployment or the transport of objects within the area of robot deployment).

For claim 9, Artes teaches: The method according to claim 1, wherein, prior to determining whether a position of a robot when the robot is hijacked and a position when the robot is released from being hijacked belong to different rooms, further comprises: determining the position of the robot when the robot is hijacked ([0036], disclosing robot performs localization, and [0115], disclosing robot loses its position when manually moved; hence the robot's position is known, i.e., determined, when it is hijacked); and determining the position when the robot is released from being hijacked ([0115], disclosing deployment scenario of an autonomous mobile robot 100 can include it being manually moved, which normally results in the robot losing the information about its own position on the electronic map. After being moved, the robot can once again determine its position by means of global self-localization).
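Claims 1-3, as characterized in the rejection above, reduce to a room-comparison rule: when the hijack position and the release position fall in different rooms, the robot executes its task in the release room. A minimal sketch of that decision logic (hypothetical function and room identifiers; not code from the application or from Artes):

```python
def room_to_work_in(hijack_room: str, release_room: str) -> str:
    """Choose the room for the post-release task.

    Per the claim language paraphrased above: if the two positions
    belong to different rooms, the task is executed within the room
    containing the release position.
    """
    if hijack_room != release_room:
        return release_room
    # Same room: the robot continues in the room it was taken from.
    return hijack_room

print(room_to_work_in("kitchen", "bedroom"))  # bedroom
```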
For claim 10, Artes teaches: The method according to claim 9, wherein, the determining the position when the robot is released from being hijacked, comprises: determining the position when the robot is released from being hijacked based on a relocalization operation ([0115], disclosing after being moved, the robot can once again determine its position by means of global self-localization).

For claim 11, Artes teaches: The method according to claim 10, wherein the determining the position when the robot is released from being hijacked based on a relocalization operation, comprises: moving to a position different from a position when the robot is released from being hijacked; executing the relocalization operation during the moving process, to determine the position when the robot is released from being hijacked ([0115], disclosing after being moved, the robot can once again determine its position by means of global self-localization; to do so the robot 100 moves about the surrounding area to collect information about it with the aid of its sensors and then compares the data with the existing map data).

For claim 14, Artes teaches: A robot, comprising: a mechanical body (figure 1A and [0029], disclosing a mobile robot), wherein the mechanical body is provided with one or more sensors ([0035], disclosing autonomous mobile robot 100 includes a sensor unit 120 that has various sensors), one or more processors and one or more memories for storing computer instructions ([0036], disclosing control unit 150 can be configured to provide all functionalities needed for the autonomous mobile robot 100 to be able to move autonomously through its area of deployment and carry out its tasks; for this purpose, the control unit 150 includes, for example, a processor 155 and a memory 156 for executing a control software); and the one or more processors for executing the computer instructions for achieving the method according to claim 1 ([0036], disclosing control unit 150 can be configured to provide all functionalities needed for the autonomous mobile robot 100 to be able to move autonomously through its area of deployment and carry out its tasks; for this purpose, the control unit 150 includes, for example, a processor 155 and a memory 156 for executing a control software).

For claim 16, Artes teaches: A robot control method, comprising: determining, by a robot, a position when the robot is released from being hijacked based on relocalization operation ([0115], disclosing deployment scenario of an autonomous mobile robot 100 can include it being manually moved, which normally results in the robot losing the information about its own position on the electronic map. After being moved, the robot can once again determine its position by means of global self-localization); determining, by the robot, a task execution area according to environmental information around the position when the robot is released from being hijacked ([0115], disclosing after being moved, the robot can once again determine its position by means of global self-localization; to do so the robot 100 moves about the surrounding area to collect information about it with the aid of its sensors and then compares the data with the existing map data); and executing, by the robot, a task within the task execution area ([0118], disclosing based on the travelled path recorded while carrying out the global self-localization, the starting position of the robot can be determined and it can thus be established whether the user had started the robot 100 in a virtual exclusion region S.
Afterwards, the robot can carry out a task in the virtual exclusion region S as specified by the user), wherein, prior to the determining, by the robot, the task execution area according to the environmental information around the position when the robot is released from being hijacked, the robot control method further comprises: determining that the robot needs to execute the task at the position when the robot is released from being hijacked according to a difference between the position when the robot is released from being hijacked and a position when the robot is hijacked ([0090], disclosing robot independently defines an exclusion region and only cleans it when explicitly commanded by the user to do so. Furthermore, [0118], disclosing starting from the position at which the user puts down the robot, the robot begins a global self-localization; based on the travelled path recorded while carrying out the global self-localization, the starting position of the robot can be determined and it can thus be established whether the user had started the robot 100 in a virtual exclusion region S; afterwards, the robot can carry out a task in the virtual exclusion region S as specified by the user. As the robot itself does not enter the virtual exclusion region, when it detects that it has been hijacked from a non-exclusion region and placed in an exclusion region, it determines that it needs to execute a task in this region).

For claim 17, Artes teaches: The robot control method according to claim 16, wherein the determining, by the robot, the position when the robot is released from being hijacked based on the relocalization operation, comprises: obtaining the environmental information around a current position when the robot recognizes that the robot is released from being hijacked ([0115], disclosing after being moved, the robot can once again determine its position by means of global self-localization. To do so the robot 100 moves about the surrounding area to collect information about it with the aid of its sensors and then compares the data with the existing map data. [0035], disclosing robot has one or more sensors for gathering information about the environment of the robot such as, e.g., the location of obstacles in the area of robot deployment. Therefore the robot obtains environmental information around the current position); and locating a pose of the robot in a stored environmental map according to the environmental information around the current position, and taking a position in the pose as the position when the robot is released from being hijacked ([0115], disclosing after being moved, the robot can once again determine its position by means of global self-localization; the robot moves about the surrounding area to collect information about it with the aid of its sensors and then compares the data with the existing map data. [0036], disclosing self-localization of the robot on the map is performed through a SLAM algorithm, therefore the pose of the robot is located in the stored map; [0081], disclosing determining the pose of the robot).

For claim 18, Artes teaches: The robot control method according to claim 16, wherein the determining, by the robot, the position when the robot is released from being hijacked based on the relocalization operation, comprises: moving from the current position to a second position when the robot recognizes that the robot is released from being hijacked, and locating the pose of the robot in the stored environmental map in a moving process ([0115], disclosing after the robot has been moved, for self-localization it moves about the surrounding area to collect information about it with the aid of its sensors and then compares the data with the existing map data.
As it moves for self-localization, it moves from the current position to a second position); and determining a position where the robot starts moving as the position when the robot is released from being hijacked, according to a position in the pose and data acquired in the moving process ([0118], disclosing starting from the position at which the user puts down the robot, the robot begins a global self-localization; based on the travelled path recorded while carrying out the global self-localization, the starting position of the robot can be determined and it can thus be established whether the user had started the robot 100 in a virtual exclusion region S; afterwards, the robot can carry out a task in the virtual exclusion region S as specified by the user).

For claim 19, Artes teaches: The robot control method according to claim 16, wherein the determining that the robot needs to execute the task at the position when the robot is released from being hijacked according to the difference between the position when the robot is released from being hijacked and the position when the robot is hijacked comprises at least one of situations as follows: if the position when the robot is released from being hijacked and the position when the robot is hijacked belong to different environmental areas, determining that the robot needs to execute the task at the position when the robot is released from being hijacked ([0118], disclosing starting from the position at which the user puts down the robot, the robot begins a global self-localization; based on the travelled path recorded while carrying out the global self-localization, the starting position of the robot can be determined and it can thus be established whether the user had started the robot 100 in a virtual exclusion region S; afterwards, the robot can carry out a task in the virtual exclusion region S as specified by the user. As the robot itself does not enter the virtual exclusion region, when it detects that it has been hijacked from a non-exclusion region and placed in an exclusion region, it determines that it needs to execute a task in this region; as it cannot autonomously enter the exclusion region, it determines that the user has placed it there to work in the region); if the position when the robot is hijacked is located in a robot running difficulty area and the position when the robot is released from being hijacked is located outside the robot running difficulty area, determining that the robot needs to execute the task at the position when the robot is released from being hijacked ([0127], disclosing if the robot unintentionally enters a virtual exclusion region S and none of the possible ways of exiting the exclusion region S described above are given, it stands still (emergency stop) and awaits user intervention in order to avoid any undesired behavior; the same applies to situations in which the robot cannot liberate itself on its own or in which the robot can no longer move. The virtual exclusion region is a robot difficulty area, as the robot is not allowed to autonomously navigate in it. Thus it will not exit the region itself; the user will pick it up and place it outside the exclusion region, and the robot will determine that it has to perform the task at its present location); and if the position when the robot is hijacked is a charging station position and the position when the robot is released from being hijacked is not a charging station position, determining that the robot needs to execute the task at the position when the robot is released from being hijacked ([0030], disclosing robot automatically returns to base station to charge its batteries. [0118], disclosing starting from the position at which the user puts down the robot, the robot begins a global self-localization; based on the travelled path recorded, the starting position of the robot can be determined and it can thus be established whether the user had started the robot 100 in a virtual exclusion region S; afterwards, the robot can carry out a task in the virtual exclusion region S as specified by the user. Therefore the user may hijack it from a charging station position and put it in an exclusion region to work).

For claim 20, Artes teaches: The robot control method according to claim 16, wherein the robot is a sweeping robot ([0002], disclosing robot is a vacuum and/or sweeping robot), and the determining, by the robot, the task execution area according to the environmental information around the position when the robot is released from being hijacked, comprises: determining, by the sweeping robot, the to-be-swept area according to the environmental information around the position when the sweeping robot is released from being hijacked ([0010-0013], disclosing robot determines if an exclusion region is active or inactive, and when the exclusion region is active, the robot does not move into it. [0115], disclosing robot can be manually moved and after being moved it determines its position; the robot is hijacked when it is manually moved, and it performs self-localization to determine its position and confirm whether it is in an exclusion region or not. [0118], disclosing starting from the position at which the user puts down the robot, the robot begins a global self-localization; based on the travelled path recorded, the starting position of the robot can be determined and it can thus be established whether the user had started the robot 100 in a virtual exclusion region S; afterwards, the robot can carry out a task in the virtual exclusion region S as specified by the user).

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 13 and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gutmann (US 20130138247, disclosed in IDS submitted on 11/27/2024).

For claim 13, Gutmann teaches: A robot control method (abstract, disclosing a robot control method), comprising: if a position of a robot when a robot is hijacked and a position when a robot is released from being hijacked belong to same environmental area, returning to the position of the robot when the robot is hijacked ([0316], disclosing a user may pick up an autonomous floor cleaner to resupply the cleaner with cleaning fluid, to change a cloth or wiper, to empty out a receptacle, etc. It would be desirable to have the autonomous floor cleaner restart where it had left off (resume) rather than start all over again); and executing a task within the position of the robot when the robot is hijacked ([0316], disclosing resuming its task at the position where it was hijacked).

For claim 15, Gutmann teaches: A robot, comprising: a mechanical body (figure 1 and [0070], disclosing a robot with a mechanical body), wherein the mechanical body is provided with one or more sensors ([0073], disclosing mobile body has sensor 170), one or more processors and one or more memories for storing computer instructions ([0085], disclosing mobile device 100 has a processor executing software; software for implementing aspects of what is disclosed would typically be stored in ROM, flash memory, or some other form of persistent storage, although volatile storage may be used as well); and the one or more processors for executing the computer instructions for achieving the method according to claim 13 ([0085], disclosing processor executing software to control the mobile device).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 5 is rejected under 35 U.S.C.
103 as being unpatentable over Artes in view of Santini (US 20160235270).

For claim 5, Artes teaches: The method according to claim 1, wherein the robot is a sweeping robot ([0042], disclosing a sweeping robot), and the executing a task within a room including the position when the robot is released from being hijacked, comprises:

Artes does not explicitly teach: adopting a sweeping mode different from a sweeping mode when the robot being hijacked to execute the task at the position when the robot is released from being hijacked.

Santini teaches adopting a sweeping mode different from a sweeping mode when the robot being hijacked to execute the task at the position when the robot is released from being hijacked ([0033], disclosing when the floor cleaning robot detects a change from a hard floor surface to a soft floor surface, it automatically increases its vacuum suction to maintain consistent cleaning effectiveness. In the opposite case—a detected change from a soft floor surface to a hard floor surface—the floor cleaning robot may automatically decrease its vacuum suction to optimize mission duration and improve user experience on sound reflective surfaces. By selectively increasing/decreasing vacuum power, the robot can extend battery life and therefore perform longer cleaning missions. Changing vacuum suction is adopting a different sweeping mode).

Santini and Artes are analogous arts, as they are in the same field of endeavor, i.e., autonomous robots. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Artes to adopt a sweeping mode different from a sweeping mode when the robot is hijacked to execute the task at the position when the robot is released from being hijacked, as taught by Santini, to optimize mission duration and improve user experience.

Claims 6, 7, 8 are rejected under 35 U.S.C. 103 as being unpatentable over Artes in view of Gutmann.
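The Santini passage cited for claim 5 describes raising suction on a detected hard-to-soft surface change and lowering it on the reverse change, which the rejection equates with adopting a different sweeping mode. A minimal sketch of that behavior (hypothetical names and settings; illustrative only, not Santini's implementation):

```python
# Illustrative suction settings; the cited passage only says suction is
# increased on soft floors and decreased on hard floors.
SUCTION_FOR_SURFACE = {"soft": "high", "hard": "low"}

def suction_after_change(old_surface: str, new_surface: str) -> str:
    """Return the suction setting after a detected surface change."""
    if new_surface != old_surface:
        # Surface changed: adopt the mode suited to the new surface.
        return SUCTION_FOR_SURFACE[new_surface]
    return SUCTION_FOR_SURFACE[old_surface]

print(suction_after_change("hard", "soft"))  # high
```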
For claim 6, Artes teaches: The method according to claim 1. Although resuming a task after the robot is released from being hijacked is necessary for any autonomous cleaner/vacuum, Artes does not explicitly disclose: wherein if the position of the robot when the robot is hijacked and the position when the robot is released from being hijacked belong to same room, executing a task within a room including the position of the robot when the robot is hijacked.

Gutmann teaches: if the position of the robot when the robot is hijacked and the position when the robot is released from being hijacked belong to same room, executing a task within a room including the position of the robot when the robot is hijacked (abstract, disclosing robot resumes operation after it has been kidnapped or paused; embodiments of the invention can comprise commercial low-cost products including robots for the autonomous cleaning of floors. [0258], disclosing for a cleaning robot it is desirable for the user to have an option for pausing and resuming the robot, e.g. for emptying the dust bin or changing the cleaning pad; once resumed, the robot should quickly find its position in the area already explored and continue with its navigation. [0316], disclosing a user may pick up an autonomous floor cleaner to resupply the cleaner with cleaning fluid, to change a cloth or wiper, to empty out a receptacle, etc.; it would be desirable to have the autonomous floor cleaner restart where it had left off (resume) rather than start all over again).

Gutmann and Artes are analogous arts, as they are in the same field of endeavor, i.e., autonomous cleaning robots. It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Artes such that, if the position of the robot when the robot is hijacked and the position when the robot is released from being hijacked belong to the same room, a task is executed within a room including the position of the robot when the robot is hijacked, as taught by Gutmann, to allow the robot to restart where it left off rather than start all over again.

For claim 7, modified Artes teaches: The method according to claim 6, wherein the executing a task within a room including the position of the robot when the robot is hijacked, comprises: returning to the position of the robot when the robot is hijacked; and continuing a task not completed before the robot being hijacked at the position of the robot when the robot is hijacked (the modification through Gutmann teaches returning to the position when the robot is hijacked and continuing the task not completed; see claim 6 for the modification).

For claim 8, modified Artes teaches: The method according to claim 6, wherein the executing a task within a room including the position of the robot when the robot is hijacked, comprises: continuing a task not completed before the robot being hijacked at the position when the robot is released from being hijacked (the modification through Gutmann teaches the robot resumes its operation when the user releases it; resuming the task after the robot is released is continuing a task not completed before the robot being hijacked at the position when the robot is released).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 16-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-15 of U.S. Patent No. 11534916. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the following claim chart.

Claim 16 of the instant application:
A robot control method, comprising: determining, by a robot, a position when the robot is released from being hijacked based on relocalization operation; determining, by the robot, a task execution area according to environmental information around the position when the robot is released from being hijacked; and executing, by the robot, a task within the task execution area, wherein, prior to the determining, by the robot, the task execution area according to the environmental information around the position when the robot is released from being hijacked, the robot control method further comprises: determining that the robot needs to execute the task at the position when the robot is released from being hijacked according to a difference between the position when the robot is released from being hijacked and a position when the robot is hijacked.

Claim 1 of U.S. Patent No. 11534916:
A robot control method, comprising: determining, by a robot, a position when the robot is released from being hijacked based on relocalization operation; determining, by the robot, according to environmental information around the position when the robot is released from being hijacked, the position when the robot is released from being hijacked as a position to continue a task executed before the robot being hijacked and a task area to be executed; and executing, by the robot, the task within the task area to be executed; wherein the environmental information is obtained by sensor provided on the robot; wherein the robot is a sweeping robot, and the determining, by the robot, the task area to be executed according to the environmental information around the position when the robot is released from being hijacked, comprises: determining, by the sweeping robot, a to-be-swept area according to the environmental information around the position when the sweeping robot is released from being hijacked; wherein the determining, by the sweeping robot, the to-be-swept area according to the environmental information around the position when the sweeping robot is released from being hijacked, further comprises: recognizing whether the environmental information around the position when the sweeping robot is released from being hijacked contains a corner; and if the corner is contained, determining an associated area of the corner as the to-be- swept area; and wherein the determining the associated area of the corner as the to-be-swept area comprises: determining a sector area by taking a vertex of the corner as a circle center and a distance from the position when the sweeping robot is released from being hijacked to the vertex of the corner as a radius, and taking the sector area as the to-be-swept area; or determining a rectangular area by taking a distance from the position when the sweeping robot is released from being hijacked to any side of the corner as a half side 
length, and taking the rectangular area as the to-be-swept area.

Claim 17 of the instant application:
The robot control method according to claim 16, wherein the determining, by the robot, the position when the robot is released from being hijacked based on the relocalization operation, comprises: obtaining the environmental information around a current position when the robot recognizes that the robot is released from being hijacked; and locating a pose of the robot in a stored environmental map according to the environmental information around the current position, and taking a position in the pose as the position when the robot is released from being hijacked.

Claim 2 of U.S. Patent No. 11534916:
The robot control method according to claim 1, wherein the determining, by the robot, the position when the robot is released from being hijacked based on the relocalization operation, comprises: obtaining the environmental information around a current position when the robot recognizes that the robot is released from being hijacked; and locating a pose of the robot in a stored environmental map according to the environmental information around the current position, and taking a position in the pose as the position when the robot is released from being hijacked.

Claim 18 of the instant application:
The robot control method according to claim 16, wherein the determining, by the robot, the position when the robot is released from being hijacked based on the relocalization operation, comprises: moving from the current position to a second position when the robot recognizes that the robot is released from being hijacked, and locating the pose of the robot in the stored environmental map in a moving process; and determining a position where the robot starts moving as the position when the robot is released from being hijacked, according to a position in the pose and data acquired in the moving process.
Claim 3 of U.S. Patent No. 11534916:
The robot control method according to claim 1, wherein the determining, by the robot, the position when the robot is released from being hijacked based on the relocalization operation, comprises: moving from the current position to a second position when the robot recognizes that the robot is released from being hijacked, and locating the pose of the robot in a stored environmental map in a moving process; and determining a position where the robot starts moving as the position when the robot is released from being hijacked, according to a position in the pose and data acquired in the moving process.

Claim 19 of the instant application:
The robot control method according to claim 16, wherein the determining that the robot needs to execute the task at the position when the robot is released from being hijacked according to the difference between the position when the robot is released from being hijacked and the position when the robot is hijacked comprises at least one of situations as follows: if the position when the robot is released from being hijacked and the position when the robot is hijacked belong to different environmental areas, determining that the robot needs to execute the task at the position when the robot is released from being hijacked; if the position when the robot is hijacked is located in a robot running difficulty area and the position when the robot is released from being hijacked is located outside the robot running difficulty area, determining that the robot needs to execute the task at the position when the robot is released from being hijacked; and if the position when the robot is hijacked is a charging station position and the position when the robot is released from being hijacked is not a charging station position, determining that the robot needs to execute the task at the position when the robot is released from being hijacked.

Claim 5 of U.S. Patent No. 11534916:
The robot control method according to claim 4, wherein the determining that the robot needs to execute the task at the position when the robot is released from being hijacked according to the difference between the position when the robot is released from being hijacked and the position when the robot is hijacked comprises at least one of situations as follows: if the position when the robot is released from being hijacked and the position when the robot is hijacked belong to different environmental areas, determining that the robot needs to execute the task at the position when the robot is released from being hijacked; if the position when the robot is hijacked is located in a robot running difficulty area and the position when the robot is released from being hijacked is located outside the robot running difficulty area, determining that the robot needs to execute the task at the position when the robot is released from being hijacked; and if the position when the robot is hijacked is a charging station position and the position when the robot is released from being hijacked is not a charging station position, determining that the robot needs to execute the task at the position when the robot is released from being hijacked.

Claim 20 of the instant application:
The robot control method according to claim 16, wherein the robot is a sweeping robot, and the determining, by the robot, the task execution area according to the environmental information around the position when the robot is released from being hijacked, comprises: determining, by the sweeping robot, the to-be-swept area according to the environmental information around the position when the sweeping robot is released from being hijacked.
Claim 1 of U.S. Patent No. 11534916 (excerpt):
wherein the robot is a sweeping robot, and the determining, by the robot, the task area to be executed according to the environmental information around the position when the robot is released from being hijacked, comprises: determining, by the sweeping robot, a to-be-swept area according to the environmental information around the position when the sweeping robot is released from being hijacked.

Allowable Subject Matter

Claim 12 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), 2nd paragraph, set forth in this Office action, and to include all of the limitations of the base claim and any intervening claims. The examiner interprets the claim as follows: "the robot is released at position A; it moves away from A to determine its pose in the stored environmental map; it acquires the path it has moved along; and it returns to position A after the pose has been determined."

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ARSLAN AZHAR, whose telephone number is (571) 270-1703. The examiner can normally be reached Mon-Fri, 7:30-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wade Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ARSLAN AZHAR/
Examiner, Art Unit 3656
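The corner-based sweep-area geometry recited in the claim chart above (a sector taking the corner vertex as circle center and the release-position-to-vertex distance as radius, or a rectangle taking the release-position-to-corner-side distance as a half side length) reduces to standard area formulas. Below is a minimal Python sketch of that logic only; the function names and the `angle_rad` parameter are illustrative assumptions, not anything recited in the claims.

```python
import math

def sector_sweep_area(corner_vertex, release_pos, angle_rad):
    """Sector to-be-swept area: circle centered at the corner vertex,
    radius = distance from the release position to the vertex."""
    radius = math.dist(corner_vertex, release_pos)
    return 0.5 * angle_rad * radius ** 2  # standard sector area: (1/2) * theta * r^2

def rectangular_sweep_area(dist_to_corner_side):
    """Rectangular to-be-swept area: the release-position-to-side
    distance is taken as half the side length, giving a square here."""
    side = 2.0 * dist_to_corner_side
    return side * side

# Robot released 3 m from the vertex of a right-angle corner.
print(round(sector_sweep_area((0.0, 0.0), (3.0, 0.0), math.pi / 2), 2))  # quarter circle, r = 3
print(rectangular_sweep_area(1.5))  # square with 3 m sides
```

This is only a sketch of the claimed geometry under those labeled assumptions, not the patent's implementation.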

Prosecution Timeline

Nov 27, 2024
Application Filed
Feb 15, 2026
Non-Final Rejection — §102, §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589495
METHOD FOR DETERMINING POSE OF ROBOT, ROBOT AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant · Granted Mar 31, 2026
Patent 12589500
METHOD AND DEVICE FOR ANNOTATING IMAGES OF AN OBJECT CAPTURED USING A CAMERA
2y 5m to grant · Granted Mar 31, 2026
Patent 12589497
Monitoring System and Method for Operating the System
2y 5m to grant · Granted Mar 31, 2026
Patent 12583111
Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation
2y 5m to grant · Granted Mar 24, 2026
Patent 12576514
ROBOT CONTROL METHOD, ROBOT, AND CONTROL TERMINAL
2y 5m to grant · Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
98%
With Interview (+20.8%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 187 resolved cases by this examiner. Grant probability derived from career allow rate.
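The 98% figure above appears to combine the examiner's 77% career allow rate (144 granted of 187 resolved) with the +20.8-point interview lift. A minimal sketch of that arithmetic, assuming the lift is simply additive in percentage points (the page does not state its actual model, so this additive formula is an assumption):

```python
def projected_grant_probability(base_rate_pct, interview_lift_pct):
    """Naive additive model: add the interview lift (in percentage
    points) to the career allow rate, capping at 100%."""
    return min(base_rate_pct + interview_lift_pct, 100.0)

base = 144 / 187 * 100   # career allow rate: 144 granted of 187 resolved, about 77.0%
lift = 20.8              # observed lift in resolved cases with an interview
print(round(projected_grant_probability(base, lift)))  # 98
```

A multiplicative or logistic adjustment would give slightly different numbers; the additive version is shown only because it reproduces the figures on this page.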
