DETAILED ACTION
Claims 1-8, 10-19, and 21 are pending in this application and have been examined with the priority date of 10/07/2022 in accordance with the applicant’s claim for foreign priority. Claims 1-8 and 10-19 are amended, claims 9 and 20 are canceled, and claim 21 is newly added.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 08/03/2023, 01/09/2024, and 12/27/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Response to Arguments
Claim interpretation under 35 U.S.C. 112(f)
Applicant’s arguments (see Remarks filed 11/14/2025) have been fully considered by the examiner. The arguments with respect to “sensor unit” and “electronic device” are persuasive, and accordingly the 112(f) interpretations of these limitations have been withdrawn; however, the arguments made for the recited limitation of “external device” are not persuasive. As claimed, “external device” constitutes a generic placeholder, meeting prong (A) of the analysis for invoking 112(f). Further, as recited by claims 3 and 13, the external device is in communication with the robot cleaner and stores a second spatial map; these limitations constitute functional language modifying the placeholder without sufficient structure, material, or acts for performing the claimed function, which satisfies prongs (B) and (C) of the analysis for invoking 112(f) (see MPEP § 2181, subsection I). Therefore, for at least the reasons above, the examiner maintains the claim interpretations made to claims 3 and 13 under 35 U.S.C. 112(f).
Claim rejections under 35 U.S.C. 102(a)(1)
Applicant’s arguments (see Remarks filed 11/14/2025) with respect to the rejection under 35 U.S.C. 102(a)(1) have been fully considered by the examiner and are persuasive. In view of the newly added amendments to the claims, a new ground of rejection is presented over Sinyavskiy in view of Choi and is fully discussed below.
Claim rejections under 35 U.S.C. 103
Applicant’s arguments (see Remarks filed 11/14/2025) with respect to the rejections under 35 U.S.C. 103 have been fully considered by the examiner and are not persuasive. The applicant argues that a prima facie case of obviousness for the combination of Sinyavskiy and Choi has not been made. The examiner disagrees. To establish a prima facie case of obviousness, the examiner must make the case from the standpoint of one of ordinary skill in the art, independent of the applicant’s specification (MPEP 2142), and the proposed modification must not render the prior art unsatisfactory for its intended purpose nor change its principle of operation (MPEP 2143.01).
Sinyavskiy teaches a robot cleaner which can map its surroundings to autonomously navigate obstacles, as well as determine attributes of the obstacle itself (Sinyavskiy, [0045] and [0059] for mapping and navigation, and [0011] and [0081] for obstacle attributes); however, Sinyavskiy does not teach a function by which attributes about the obstacle’s ability to move or be moved can be determined. The motivation for determining this information lies in the fact that obstacles can prevent cleaning robots from completing a task, and having information about the obstacle type or object type can help expedite the process of removing or moving the obstacle, or determining how to proceed (Sinyavskiy, [0051]-[0052]). Choi teaches a similar system of robot cleaner navigation; however, Choi teaches that the robot is capable of identifying an obstacle and determining whether it can be moved, such that the process of moving the obstacle and returning the robot to its task can be expedited (Choi, columns 14, 18, and 20-21). Given that Sinyavskiy teaches identifying and determining obstacle attributes, and determining that an obstacle has been moved, as being advantageous, and Choi further teaches the robot cleaner’s ability to determine whether or not an obstacle can be moved, and to move it, as being advantageous, one of ordinary skill in the art would have reasonably concluded that the combination of these two known features would be advantageous for developing a robot cleaning system. This satisfies the requirement that there be a suggestion or motivation to modify the reference or to combine the teachings per MPEP 2142. Further, given that both Sinyavskiy and Choi teach methods of performing the claimed functions in the paragraphs cited above, one of ordinary skill in the art could reasonably expect the combination to succeed, given that both methods are explicitly taught and both devices are created to clean and autonomously navigate an environment.
This satisfies the requirements that the principle of operation not be modified and that the modification not render the prior art unsatisfactory for its intended purpose, given that both prior art systems have the same intended purpose but slightly different methods of achieving the same result (MPEP 2143.01). Therefore, for at least the reasons above, the examiner maintains the rejections made under 35 U.S.C. 103.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“External device” in claims 3 and 13
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-8, 10-19 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Sinyavskiy (WO 2018144846 A1) in view of Choi (US 10852729 B2).
Regarding claim 1 Sinyavskiy discloses: A method performed by a robot cleaner (Sinyavskiy, [0012] the robot is a floor cleaning robot),
based on spatial information about a space including at least one object and a task that the robot cleaner is set to perform (Sinyavskiy, [0045] the robot can generate a map to autonomously navigate an environment, [0059] the robot can navigate around the object based on its known position),
selecting an object that obstructs the task from among objects located in a space corresponding to the task (Sinyavskiy, [0059] the robot can navigate around the object based on its known position, [0124] the sensor can detect an object obstructing the robot’s path);
[providing a user as a guide for moving the object, the object movement guide information corresponding to attribute information about the selected object, the attribute information including a mobility level of the object;]
(Figure 6a)
determining a movement path used to perform the task based on a user's response corresponding to the object movement guide information (Sinyavskiy, [0100] the path can be updated based upon whether the assist request (obstacle detected in path and corresponding request for it to be removed) is resolved);
and driving the robot cleaner according to the determined movement path (Sinyavskiy, [0100] after the obstacle has been resolved and the user has pushed a button, the robot can proceed with operation based on a determined path).
Sinyavskiy does not teach: providing a user as a guide for moving the object, the object movement guide information corresponding to attribute information about the selected object, the attribute information including a mobility level of the object;
However, in the same field of endeavor, Choi teaches: providing a user as a guide for moving the object, the object movement guide information corresponding to attribute information about the selected object, the attribute information including a mobility level of the object (Choi, Column 14, lines 7-15, an obstacle in front of the robot cleaner is detected, column 18, lines 10-16, the attributes of an obstacle can be determined based on learned data, Column 20 line 63 - column 21 line 51, the obstacle’s attributes may be selected, and then the robot may determine a set of potential new locations to move the obstacle/object and present these to a user, where the user may select their preferred location to move the object; the robot is able to determine if an obstacle is able to be moved or able to move (mobility level));
The combination of Sinyavskiy and Choi would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Sinyavskiy teaches a robot cleaner with the capacity to determine a map of the area it is cleaning and determine an obstacle in its path. Choi teaches a robot cleaner with the capacity to store obstacle details, determine whether or not the obstacle can be moved, and determine where to move the obstacle. The addition of this feature of Choi to the system of Sinyavskiy would be advantageous in situations where an autonomous cleaner encounters an obstacle and is either trapped or unable to move along its path due to being blocked. In these situations, being able to request instructions from a user on where to move an object, after determining whether it can be moved, would allow the robot to move the obstacle and continue its tasks. (Choi, columns 14, 18 and 20-21)
Regarding claim 2 the combination of Sinyavskiy and Choi teaches: The method of claim 1, wherein the selecting of the object obstructing the task comprises:
obtaining a spatial map as the spatial information (Sinyavskiy, [0045] the robot can generate a map to autonomously navigate an environment);
analyzing a prediction about processing of the task by using the spatial map (Sinyavskiy, [0011] the processor is configured to learn and identify obstacles and monitor the robot through processes such as task navigation, [0007]-[0008] the robot is performing a task in an environment and the system collects data about the robot performing the task);
and determining at least one object obstructing the task based on a result of the analyzing (Sinyavskiy, [0011] the system uses machine learning to determine an obstacle or object obstructing the task and identify it).
Regarding claim 3 the combination of Sinyavskiy and Choi teaches: The method of claim 2, wherein the obtaining of the spatial map comprises obtaining the spatial map based on at least one of a first spatial map stored in the robot cleaner (Sinyavskiy, [0012] the robot is a floor cleaning robot, [0045] the robot can generate a map to autonomously navigate an environment, the map is generated using sensors or sensor data, [0065] the data can be stored on a memory) or a second spatial map received from an external device in communication with the robot cleaner.
Regarding claim 4 the combination of Sinyavskiy and Choi teaches: The method of claim 2, wherein the analyzing of the prediction comprises comparing and analyzing results of predictions about the processing of the task for a plurality of movement paths that are distinguished from each other based on a position of at least one object and at least one branch point on a virtual movement path of the robot cleaner for performing the task (Sinyavskiy, [0012] the robot is a floor cleaning robot, [0120] the robot’s paths can be predicted through model training as well as based upon the desired task the robot will be performing, [0078]-[0079] the robot can determine if the path is blocked by an obstacle (position of an object), [0104] the robot can determine where to move to in order to clear the obstacle (branch point, where the robot’s path deviates from the planned path)).
Regarding claim 5 the combination of Sinyavskiy and Choi teaches: The method of claim 1, wherein the providing of the object movement guide information comprises:
identifying the attribute information about the selected object (Sinyavskiy, [0011] the object/obstacle is identified, [0081] the robot can recognize people, animals or objects detected using the sensor; the ability to identify and differentiate obstacles indicates the ability to detect distinguishing features or attributes);
and providing the object movement guide information to the user by executing a movement request process corresponding to the attribute information (Sinyavskiy, [0090] an object is detected in the path of the robot, and the robot then sends an assist request to the user so the user can assist the robot by moving the object, [0100] the assist request/event can display an instruction to the user to move the obstacle).
Regarding claim 6 the combination of Sinyavskiy and Choi teaches: The method of claim 5, wherein the providing of the object movement guide information comprises transmitting, to a user terminal of the user, an analysis result obtained by analyzing a prediction about processing of the task (Sinyavskiy, [0090] an object is detected in the path of the robot, and the robot then sends an assist request to the user so the user can assist the robot by moving the object (sending a result/information to the user terminal), [0100] the assist request/event can display an instruction to the user to move the obstacle; the obstacle must be detected as being in the path of the robot while it is completing a task such that it interferes with the task (analogous to information about the processing of the task), following which the user receives a notification telling them to move the object (movement guide)).
Regarding claim 7 the combination of Sinyavskiy and Choi teaches: The method of claim 5, wherein the providing of the object movement guide information comprises:
selecting, on a three-dimensional (3D) spatial map of a region where the task is performed, candidate positions to which the selected object can be moved (Choi, Column 19, lines 40-50, the robot detects the location of obstacles which can be moved, Column 20, lines 28-41, the robot can provide the user a map in which positions where the obstacles can be moved are displayed for the user to select);
obtaining, for each candidate position, an image showing a state in which the object has been moved to the candidate position (Choi, Figure 28 shows candidate positions for the obstacle/object to be moved on a map, Column 20 lines 28-49, the robot will move the object to the selected candidate position and the map displays this information on the interface for the user);
(Choi, Figure 28)
evaluating the image by inputting the image into the image evaluation model (Choi, Column 21, lines 52-63, the robot stores and obtains object/obstacle position and movement information based upon images, which is analogous to evaluating an image using an image recognition model);
and transmitting, to a user terminal of the user, the object movement guide information according to a result of the evaluating of the image (Choi, Figure 28 shows candidate positions for the obstacle/object to be moved on a map, Column 20 lines 28-49, the robot will move the object to the selected candidate position and the map displays this information on the interface for the user; the positions are determined based upon images).
The combination of Sinyavskiy and Choi would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The system of Sinyavskiy teaches a robot cleaner which uses machine learning to detect obstacles and signal the user to assist with moving an obstacle so the device can clean. The system of Choi teaches a robot cleaner which is able to detect obstacles, determine new locations for them and move the obstacles or have the user move them. The combination of the system of Sinyavskiy with the system of Choi would allow a robot cleaner to prompt the relocation of obstacles and provide a set of candidate positions for more effective cleaning of a space because the system can determine a more ideal location for an object such that it is out of the way. (Choi, Columns 19 and 20)
Regarding claim 8 the combination of Sinyavskiy and Choi teaches: The method of claim 1, wherein the determining of the movement path comprises:
identifying a moved object among the objects (Sinyavskiy, [0100] the user presses a button in response to the “assist event” request to move an obstacle once the obstacle has been moved or the robot has been moved; after this the robot can re-localize itself to its surroundings to verify that the event (obstacle/object detected) has been resolved);
and obtaining a movement path reflecting the moved object in the space corresponding to the task (Sinyavskiy, [0100] when the assist request (request to remove an obstacle) has been resolved, the robot can determine a movement path to complete the task).
Regarding claim 10 the combination of Sinyavskiy and Choi teaches: A non-transitory computer-readable recording medium having recorded thereon a program that, when executed by at least one processor of a robot cleaner (Sinyavskiy, [0012] the robot is a floor cleaning robot), causes the robot cleaner to perform operations, the operations comprising (Sinyavskiy, [0017] the system uses a non-transitory computer-readable medium to execute code by a processor):
based on spatial information about a space including at least one object and a task that the robot cleaner is set to perform (Sinyavskiy, [0045] the robot can generate a map to autonomously navigate an environment, [0059] the robot can navigate around the object based on its known position),
selecting an object that obstructs the task from among objects located in a space corresponding to the task (Sinyavskiy, [0059] the robot can navigate around the object based on its known position, [0124] the sensor can detect an object obstructing the robot’s path);
providing a user as a guide for moving the object, the object movement guide information corresponding to attribute information about the selected object, the attribute information including a mobility level of the object (Choi, Column 14, lines 7-15, an obstacle in front of the robot cleaner is detected, column 18, lines 10-16, the attributes of an obstacle can be determined based on learned data, Column 20 lines 63 - column 21 line 51 the obstacle’s attributes may be selected, and then the robot may determine a set of potential new locations to move the obstacle/object, present these to a user, where the user may select their preferred location to move the object, the robot is able to determine if an obstacle is able to be moved or able to move (mobility level));
determining a movement path used to perform the task based on a user's response corresponding to the object movement guide information (Sinyavskiy, [0100] the path can be updated based upon whether the assist request (obstacle detected in path and corresponding request for it to be removed) is resolved);
and driving the robot cleaner according to the determined movement path (Sinyavskiy, [0100] after the obstacle has been resolved and the user has pushed a button, the robot can proceed with operation based on a determined path).
The combination of Sinyavskiy and Choi would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Sinyavskiy teaches a robot cleaner with the capacity to determine a map of the area it is cleaning and determine an obstacle in its path. Choi teaches a robot cleaner with the capacity to store obstacle details, determine whether or not the obstacle can be moved, and determine where to move the obstacle. The addition of this feature of Choi to the system of Sinyavskiy would be advantageous in situations where an autonomous cleaner encounters an obstacle and is either trapped or unable to move along its path due to being blocked. In these situations, being able to request instructions from a user on where to move an object, after determining whether it can be moved, would allow the robot to move the obstacle and continue its tasks. (Choi, columns 14, 18 and 20-21)
Regarding claim 11 the combination of Sinyavskiy and Choi teaches: A robot cleaner using spatial information, the robot cleaner comprising (Sinyavskiy, [0012] the robot is a floor cleaning robot):
a memory storing instructions (Sinyavskiy, [0061] the system has a memory);
a sensor (Sinyavskiy, [0007] the system has at least one sensor the robot uses to generate data about the environment); and
one or more processors coupled to the memory and the sensor (Sinyavskiy, [0017] the system uses a non-transitory computer-readable medium to execute code by a processor);
wherein the instructions, when executed by the one or more processors individually or collectively, cause the robot cleaner to (Sinyavskiy, [0017] the system uses a non-transitory computer-readable medium to execute code by a processor):
based on spatial information, obtained via the sensor, about a space including at least one object and a task that the robot cleaner is set to perform (Sinyavskiy, [0045] the robot can generate a map to autonomously navigate an environment, [0059] the robot can navigate around the object based on its known position),
select an object that obstructs the task from among objects located in a space corresponding to the task (Sinyavskiy, [0059] the robot can navigate around the object based on its known position, [0124] the sensor can detect an object obstructing the robot’s path);
provide a user as a guide for moving the object, the object movement guide information corresponding to attribute information about the selected object, the attribute information including a mobility level of the object (Choi, Column 14, lines 7-15, an obstacle in front of the robot cleaner is detected, column 18, lines 10-16, the attributes of an obstacle can be determined based on learned data, Column 20 line 63 - column 21 line 51, the obstacle’s attributes may be selected, and then the robot may determine a set of potential new locations to move the obstacle/object and present these to a user, where the user may select their preferred location to move the object; the robot is able to determine if an obstacle is able to be moved or able to move (mobility level));
determine a movement path used to perform the task based on a user's response corresponding to the object movement guide information (Sinyavskiy, [0100] the path can be updated based upon whether the assist request (obstacle detected in path and corresponding request for it to be removed) is resolved);
and drive the robot cleaner according to the determined movement path (Sinyavskiy, [0100] after the obstacle has been resolved and the user has pushed a button, the robot can proceed with operation based on a determined path).
The combination of Sinyavskiy and Choi would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Sinyavskiy teaches a robot cleaner with the capacity to determine a map of the area it is cleaning and determine an obstacle in its path. Choi teaches a robot cleaner with the capacity to store obstacle details, determine whether or not the obstacle can be moved, and determine where to move the obstacle. The addition of this feature of Choi to the system of Sinyavskiy would be advantageous in situations where an autonomous cleaner encounters an obstacle and is either trapped or unable to move along its path due to being blocked. In these situations, being able to request instructions from a user on where to move an object, after determining whether it can be moved, would allow the robot to move the obstacle and continue its tasks. (Choi, columns 14, 18 and 20-21)
Regarding claim 12 the combination of Sinyavskiy and Choi teaches: The robot cleaner of claim 11, wherein the instructions, when executed by the one or more processors individually or collectively, cause the robot cleaner to (Sinyavskiy, [0012] the robot is a floor cleaning robot, [0017] the system uses a non-transitory computer-readable medium to execute code by a processor):
obtain a spatial map as the spatial information (Sinyavskiy, [0045] the robot can generate a map to autonomously navigate an environment);
analyze a prediction about processing of the task by using the spatial map (Sinyavskiy, [0011] the processor is configured to learn and identify obstacles and monitor the robot through processes such as task navigation, [0007]-[0008] the robot is performing a task in an environment and the system collects data about the robot performing the task);
and determine at least one object obstructing the task, based on a result of the analyzing (Sinyavskiy, [0011] the system uses machine learning to determine an obstacle or object obstructing the task and identify it).
Regarding claim 13 the combination of Sinyavskiy and Choi teaches: The robot cleaner of claim 11, wherein the instructions, when executed by the one or more processors individually or collectively, cause the robot cleaner to (Sinyavskiy, [0012] the robot is a floor cleaning robot, [0017] the system uses a non-transitory computer-readable medium to execute code by a processor):
obtain the spatial map based on at least one of a first spatial map stored in the memory (Sinyavskiy, [0045] the robot can generate a map to autonomously navigate an environment, the map is generated using sensors or sensor data, [0065] the data can be stored on a memory) or a second spatial map received from an external device in communication with the robot cleaner.
Regarding claim 14 the combination of Sinyavskiy and Choi teaches: The robot cleaner of claim 11, wherein the instructions, when executed by the one or more processors individually or collectively, cause the robot cleaner to (Sinyavskiy, [0012] the robot is a floor cleaning robot, [0017] the system uses a non-transitory computer-readable medium to execute code by a processor):
compare and analyze results of predictions about the processing of the task for a plurality of movement paths that are distinguished from each other based on a position of at least one object and at least one branch point on a virtual movement path of the robot cleaner for performing the task (Sinyavskiy, [0120] the robot's paths can be predicted through model training as well as based upon the desired task the robot will be performing; [0078]-[0079] the robot can determine if the path is blocked by an obstacle (position of an object); [0104] the robot can determine where to move in order to clear the obstacle (branch point, where the robot's path deviates from the planned path)).
Regarding claim 15, the combination of Sinyavskiy and Choi teaches: The robot cleaner of claim 12 (Sinyavskiy, [0012] the robot is a floor cleaning robot), wherein the spatial map comprises a plurality of layers based on the attribute information about the object ([0057]-[0058] of applicant's specification define the map layers as having a base layer, a semantic layer, and a real-time layer, which include position information about objects, semantic classification, and structures of the room/building; figures 6A-6C show a map having object positions, alerts, and environmental structure, which is analogous to the layers of the applicant's specification).
Regarding claim 16, the combination of Sinyavskiy and Choi teaches: The robot cleaner of claim 11, wherein the instructions, when executed by the one or more processors individually or collectively, cause the robot cleaner to (Sinyavskiy, [0012] the robot is a floor cleaning robot; [0017] the system uses a non-transitory computer-readable medium to execute code by a processor):
identify the attribute information about the object (Sinyavskiy, [0011] the object/obstacle is identified; [0081] the robot can recognize people, animals, or objects detected using the sensor; the ability to identify and differentiate obstacles indicates the ability to detect distinguishing features or attributes);
and provide the object movement guide information to the user by executing a movement request process corresponding to the (Sinyavskiy, [0090] an object is detected in the path of the robot; the robot then sends an assist request to the user so the user can assist the robot by moving the object; [0100] the assist request/event can display an instruction to the user to move the obstacle).
Regarding claim 17, the combination of Sinyavskiy and Choi teaches: The robot cleaner of claim 16, further comprising:
a communication interface, wherein the instructions, when executed by the one or more processors individually or collectively, cause the robot cleaner to transmit, via the communication interface, to a user terminal of the user (Sinyavskiy, [0012] the robot is a floor cleaning robot; [0017] the system uses a non-transitory computer-readable medium to execute code by a processor; [0090] an object is detected in the path of the robot, and the robot then sends an assist request to the user so the user can assist the robot by moving the object (sending a result/information to the user terminal); [0100] the assist request/event can display an instruction to the user to move the obstacle; the obstacle must be detected as being in the path of the robot while it is completing a task such that it interferes with the task (analogous to information about the processing of the task), following which the user receives a notification telling them to move the object (movement guide)).
Regarding claim 18, the combination of Sinyavskiy and Choi teaches: The robot cleaner of claim 16, further comprising:
a communication interface, wherein the instructions, when executed by the one or more processors individually or collectively, cause the robot cleaner to (Sinyavskiy, [0012] the robot is a floor cleaning robot; [0017] the system uses a non-transitory computer-readable medium to execute code by a processor):
select, on a three-dimensional (3D) spatial map of a region where the (Choi, Column 19, lines 40-50, the robot detects the location of obstacles which can be moved; Column 20, lines 28-41, the robot can provide the user with a map in which positions where the obstacles can be moved are displayed for the user to select);
obtain, for each candidate position, an image showing a state in which the (Choi, Figure 28 shows candidate positions on a map to which the obstacle/object can be moved; Column 20, lines 28-49, the robot will move the object to the selected candidate position and the map displays this information on the interface for the user);
(Choi, Figure 28)
evaluate the image by inputting the image into the image evaluation model (Choi, Column 21, lines 52-63, the robot stores and obtains object/obstacle position and movement information based upon images, which is analogous to evaluating an image using an image recognition model);
and transmit, to a user terminal of the user, the object movement guide information according to a result of the evaluating of the image (Choi, Figure 28 shows candidate positions on a map to which the obstacle/object can be moved; Column 20, lines 28-49, the robot will move the object to the selected candidate position and the map displays this information on the interface for the user; the positions are determined based upon images).
The combination of Sinyavskiy and Choi would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The system of Sinyavskiy teaches a robot cleaner which uses machine learning to detect obstacles and signal the user to assist with moving an obstacle so the device can clean. The system of Choi teaches a robot cleaner which is able to detect obstacles, determine new locations for them, and move the obstacles or have the user move them. The combination of the system of Sinyavskiy with the system of Choi would allow a robot cleaner to prompt the relocation of obstacles and provide a set of candidate positions for more effective cleaning of a space, because the system can determine a more ideal location for an object such that it is out of the way (Choi, Columns 19 and 20).
Regarding claim 19, the combination of Sinyavskiy and Choi teaches: The robot cleaner of claim 11, wherein the instructions, when executed by the one or more processors individually or collectively, cause the robot cleaner to:
identify a moved object among (Sinyavskiy, [0100] the user presses a button in response to the "assist event" request once the obstacle has been moved or the robot has been moved; after this, the robot can re-localize itself to its surroundings to verify that the event (obstacle/object detected) has been resolved);
and obtain a movement path reflecting the moved object in the space corresponding to the task (Sinyavskiy, [0100] when the assist request (request to remove an obstacle) has been resolved, the robot can determine a movement path to complete the task).
Regarding claim 21, the combination of Sinyavskiy and Choi teaches: The method of claim 1,
wherein the mobility level of the object is determined by applying objective characteristics of the object to a predetermined classification criterion for evaluating mobility (Choi, column 21, lines 10-15, the object can be determined to be able to be moved or not able to be moved; further, claim 1 of Choi, the obstacle's attributes are determined, where the obstacle is classified as movable or not),
and wherein the mobility level is determined among a plurality of mobility levels in which a lowest mobility level among the plurality of mobility levels corresponds to an immovable object and a highest mobility level among the plurality of mobility levels corresponds to a frequently moved movable object (Choi, column 21, lines 10-15, the object can be determined to be able to be moved or not able to be moved; further, claim 1 of Choi, the obstacle's attributes are determined, where the obstacle is classified as movable or not; figure 35 shows that the obstacle is determined to be one of two levels, where it is either movable or not).
The combination of Sinyavskiy and Choi would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. Sinyavskiy teaches a robot cleaner with the capacity to generate a map of the area it is cleaning and to detect an obstacle in its path. Choi teaches a robot cleaner with the capacity to store obstacle details, determine whether the obstacle can be moved or not, and determine where to move the obstacle. The addition of this feature of Choi to the system of Sinyavskiy would be advantageous in situations where an autonomous cleaner encounters an obstacle and is either trapped or unable to move on its path due to being blocked. In these situations, being able to request instructions from a user on where to move an object, after determining whether something can be moved, would allow the robot to move the obstacle and continue its tasks (Choi, columns 14, 18, and 20-21).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For a listing of analogous art as cited by the examiner, please see the attached PTO-892 Notice of References Cited form.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT whose telephone number is (703)756-5463. The examiner can normally be reached M-F 8AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.E./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666