DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 27-33, 36-38, 41-50, and 52-54 are rejected under 35 U.S.C. 103 as being unpatentable over Groz (US 20210031364 A1) in view of Assaf (US 20180165518 A1).
Regarding claim 1, Groz teaches A robot, comprising: (Fig. 1 robot 110)
one or more memories; and (Fig. 2 memory 220)
one or more processors, wherein (Fig. 2 processor 210)
the one or more processors are configured to
transmit a notification relating to a task of the robot to an external device, based on recognizing a specific situation in an execution of the task; ([0049] The teleoperator 160 may include a person that provides operator assistance to the robot 110 using the personal computing device 150. In some embodiments, the personal computing device 150 may include a PC or a handheld microprocessor device, such as a smartphone or a tablet computer [0066] Based on the received task, the robot 110 may start execution of the task. When a processor of the robot 110 determines that the probability of completing the task is below the threshold, the robot 110 sends a request for operator assistance to the personal computing device 150.)
and control an autonomous movement of the robot based on the received information. ([0066] The teleoperator 160 may receive the request from the robot 110 and either manually operate the robot 110 or find mistakes in the operation of the robot and send correct teleoperation data to the robot 110)
Groz does not expressly disclose but Assaf discloses receive, from the external device, information relating to the transmitted notification, wherein the information concerns a movable area of the robot, and the information is obtained based on a review, by a person using the external device having received the transmitted notification, of the recognized specific situation, the review including an instruction, which is performed without the person specifying a position and a direction of movement of the robot; ([0041] After navigating about the first location 102 and identifying objects located in the first location 102, the robot 120 can navigate to a second location 104 of the area 102. For example, the robot 120 may determine to navigate to the second location 104 in response to determining that the robot 120 has identified or at least attempted to identify each object in the first location 102 of which the robot 120 detected the presence. In some implementations, the robot 120 self-navigates to explore multiple locations 102, 104, 106, 108 in an area, such as a house. As it travels, the robot 120 determines what objects can be automatically identified and which cannot. The robot 120 can continue to explore and move from area to area, adding to the list of unidentified objects as it explores, without waiting for human feedback using the mobile tool. At any time, perhaps during exploration or after the robot's exploration is complete, the user can use the mobile tool to provide the identity of items that the robot could not identify) the instruction being performed by instructing the robot to recognize, as the movable area, an area that has not been recognized as the movable area by the robot in the recognized specific situation; ([0091] The robot navigates about an area (602). For example, a data processing apparatus of the robot may navigate the robot about the area without human control of the navigation. 
The data processing apparatus may navigate the robot about the area in response to a user command to identify objects in the area. For example, the command may be to operate in an explore mode of operation to become familiar with objects and locations in the area. While navigating through the area, the robot can use one or more sensors to detect the presence of objects located in the area. For example, the robot may include a camera that captures images and or video of the area while the robot is navigating. [0092] The robot generates a log of objects located in the area that the robot does not recognize (604). The robot may send data about the objects in the log to a mobile tool that presents a user interface that allows the user to identify each object in the log. For example, the interface may be the same as, or similar to, the user interface 300 of FIG. 3.)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Groz with the teachings of Assaf, with a reasonable expectation of success, by generating a log of objects that a robot does not recognize and receiving data identifying the objects, as taught by Assaf ([0090]).
Regarding claim 27, Groz teaches The robot according to claim 1, wherein the review by the person is based on an image taken by the robot in the specific situation. ([0059] The remote control interface 240 may provide a telecommunication service for the remote control of the robot 110. A teleoperator may use the remote control interface 240 to review sensor data collected by sensors 250, such as sensor readings and images taken by cameras, and may manually create commands for the robot 110 to solve the task.)
Regarding claim 28, Groz teaches The robot according to claim 1, wherein the one or more processors are configured to transmit, to the external device, the notification including an image taken by the robot in the specific situation. ([0079] FIG. 5 is a block diagram showing a task 500 for focusing on an object, according to an example embodiment. A teleoperator may request a robot to bring a bottle from a refrigerator. First, the robot may be trained to differentiate objects. When the robot is located near the refrigerator and a camera of the robot is directed to all objects present in the refrigerator, the teleoperator may use an image or a video captured by the camera to label each of the objects. For example, the teleoperator may select each object on the image or the video and type or pronounce a label of the object, such as “This is a bottle.” The association between the appearance of the objects and their labels may be stored to the ANNs. Labeling the objects may be a part of training the robot to differentiate objects)
Regarding claim 29, Groz teaches The robot according to claim 28, wherein the image is onto which information on a detail of the recognizing of the specific situation is superimposed. ([0079] FIG. 5 is a block diagram showing a task 500 for focusing on an object, according to an example embodiment. A teleoperator may request a robot to bring a bottle from a refrigerator. First, the robot may be trained to differentiate objects. When the robot is located near the refrigerator and a camera of the robot is directed to all objects present in the refrigerator, the teleoperator may use an image or a video captured by the camera to label each of the objects. For example, the teleoperator may select each object on the image or the video and type or pronounce a label of the object, such as “This is a bottle.” The association between the appearance of the objects and their labels may be stored to the ANNs. Labeling the objects may be a part of training the robot to differentiate objects)
Regarding claim 30, Groz teaches The robot according to claim 1, wherein the information is not an instruction with which the person directly and remotely controls the robot to move in the specific situation. ([0050] “Hey, robot. What's the weather like today?” The robot 110 may interact with the teleoperator 160 and may provide responses to the teleoperator 160, like “Today the temperature is 25 degrees C.,”)
Regarding claim 31, Groz teaches The robot according to claim 1, wherein the information is obtained from the person to instruct that an area recognized by the robot as not movable is movable. ([0067] The robot 110 may also send the request for operator assistance when there is a timeout in the execution of the task, if a predetermined object (e.g., a child) is detected in the working environment, if an object is damaged during the execution of the task, if a command cannot be performed for a predetermined period of time or predetermined number of times, and so forth. [0068] In response to sending the request for operator assistance, the processor of the robot 110 may receive the teleoperation data from the personal computing device 150 of the teleoperator and cause the robot to execute the task based on the teleoperation data and the sensor data.)
Regarding claim 32, Groz teaches The robot according to claim 1, wherein the information modifies information on an area recognized by the robot as movable. ([0065] The robot 110 may interact dynamically with a working environment of the robot 110. If the working environment changes, the robot 110 can use the AI Skills/ANNs 230 to dynamically adapt to the changed working environment. The determination that the working environment has changed may be made based on the sensor data received during the operation of the robot 110.)
Regarding claim 33, Groz teaches The robot according to claim 1, wherein the information is for correcting a recognition error that has caused the recognition of the specific situation. ([0070] During the training process, continuous learning, machine learning, and user interface (UI) learning may be used. Errors may be identified and corrected for the future so that in the future the probability of the same error may be lower for a similar situation/task.)
Regarding claim 36, Groz teaches The robot according to claim 1, wherein the one or more processors are configured to modify a movable area recognized by the robot based on the received information, and cause the robot to autonomously move within the modified movable area. ([0085] The method 700 may further include causing the robot to execute the task based on the teleoperation data at operation 725. In an example embodiment, the task may include locating, by the AI model and based on an image received from the plurality of sensors, an object in a working environment of the robot. The teleoperation data may include a clue indicative of the location of the object in the image. In a further example embodiment, the task may include determining, based on the sensor data, a direction to an object with respect to the robot and a distance from the robot to the object in a working environment of the robot. In this embodiment, the teleoperation data may include information regarding the direction to the object with respect to the robot and the distance from the robot to the object. In some example embodiments, the task may include grasping, based on the sensor data, by a manipulator of the robot, an object in a working environment of the robot. In this embodiment, the teleoperation data may include information regarding commands for one or more actuators of the manipulator.)
Regarding claim 37, Groz teaches The robot according to claim 1, wherein the one or more processors are configured to use a trained model to recognize an environment in which the robot moves autonomously, and the trained model is trained by a neural network using the received information. ([0083] the AI model may include a trained neural network. The determination that the probability of completing the task is below the threshold may include determining that a time for completing the task exceeds a pre-determined value. The probability of completing the task may be determined based on a distribution of levels of outputs of the trained neural network.)
Regarding claim 38, Groz teaches The robot according to claim 37, wherein the training by using the received information is performed in another external device. ([0072] The system 400 may have virtual AI Skills/ANNs 420 stored in the cloud computing resources 170. The virtual AI Skills/ANNs 420 may be used to perform virtual simulation 410 of tasks. During the virtual simulation 410, the execution of tasks by the robot 110 is simulated in the cloud computing resources 170 using the virtual AI Skills/ANNs 420. The results of the virtual simulation 410 may be added to the training data. The AI model used for execution of the tasks by the robot 110 may be updated based on the training data obtained upon the virtual simulation 410 of execution of tasks. The updated AI model may be used for operation and training of physical robots in a real-world environment.)
Regarding claim 41, Groz teaches A device collecting a plurality of pieces of information regarding predetermined notifications transmitted from a plurality of robots, each robot according to claim 1, and updating an environmental recognition capability of at least one of the plurality of robots using the collected plurality of pieces of information. ([0002] This disclosure relates generally to training robots and, more specifically, to backup control based continuous training of robots [0016] According to another embodiment, a method for training of robots is provided. An example method may commence with collecting, by a processor of the robot, sensor data from a plurality of sensors of the robot. The sensor data may be related to a task being performed by the robot based on an AI model.)
Regarding claim 42, Groz teaches The device according to claim 41, wherein the device uses the collected plurality of pieces of information to update a trained model for performing an environmental recognition by the at least one of the plurality of robots, and updates abilities for the environmental recognition by the at least one of the plurality of the robots. ([0077] In an example embodiment, only the local version of the AI Skills/ANNs 230 of the robot 110 can be trained. In a further example embodiment, a global version of AI Skills/ANNs stored in a cloud can be trained and the local version of the AI Skills/ANNs 230 can be then synchronized with the global version of the AI Skills/ANNs.)
Regarding claim 43, Groz teaches A device communicating with a robot that autonomously navigates through an environment to perform a task and recognizes the environment in an execution of the task, the device comprising: (Fig. 1 robot 110, device 150)
one or more memories; and (Fig. 2 memory 220)
one or more processors configured to: (Fig. 2 processor 210)
receive, from the robot that recognizes a specific situation of the environment, a notification relating to the recognized specific situation; ([0066] Based on the received task, the robot 110 may start execution of the task. When a processor of the robot 110 determines that the probability of completing the task is below the threshold, the robot 110 sends a request for operator assistance to the personal computing device 150.)
transmit, to a person who operates the device, the notification relating to the recognized specific situation; and ([0049] The teleoperator 160 may include a person that provides operator assistance to the robot 110 using the personal computing device 150. In some embodiments, the personal computing device 150 may include a PC or a handheld microprocessor device, such as a smartphone or a tablet computer)
receive information input by the person relating to the specific situation in response to the notification relating to the recognized specific situation; wherein ([0050] In an example embodiment, the robot 110 may be configured to receive tasks from the teleoperator 160 in natural language)
and the information is used by the robot having sent the notification for an autonomous movement when performing a specific task. ([0066] The teleoperator 160 may receive the request from the robot 110 and either manually operate the robot 110 or find mistakes in the operation of the robot and send correct teleoperation data to the robot 110)
Groz does not expressly disclose but Assaf discloses the information concerns a movable area of the robot, the information is obtained based on a review, by the person using the device having received the notification relating to the recognized specific situation, of the recognized specific situation the review including an instruction, which is performed without the person specifying a position and a direction of movement of the robot([0041] After navigating about the first location 102 and identifying objects located in the first location 102, the robot 120 can navigate to a second location 104 of the area 102. For example, the robot 120 may determine to navigate to the second location 104 in response to determining that the robot 120 has identified or at least attempted to identify each object in the first location 102 of which the robot 120 detected the presence. In some implementations, the robot 120 self-navigates to explore multiple locations 102, 104, 106, 108 in an area, such as a house. As it travels, the robot 120 determines what objects can be automatically identified and which cannot. The robot 120 can continue to explore and move from area to area, adding to the list of unidentified objects as it explores, without waiting for human feedback using the mobile tool. At any time, perhaps during exploration or after the robot's exploration is complete, the user can use the mobile tool to provide the identity of items that the robot could not identify), the instruction being performed by instructing the robot to recognize, as the movable area, an area that has not been recognized as the movable area by the robot in the recognized specific situation, ([0091] The robot navigates about an area (602). For example, a data processing apparatus of the robot may navigate the robot about the area without human control of the navigation. The data processing apparatus may navigate the robot about the area in response to a user command to identify objects in the area. 
For example, the command may be to operate in an explore mode of operation to become familiar with objects and locations in the area. While navigating through the area, the robot can use one or more sensors to detect the presence of objects located in the area. For example, the robot may include a camera that captures images and or video of the area while the robot is navigating. [0092] The robot generates a log of objects located in the area that the robot does not recognize (604). The robot may send data about the objects in the log to a mobile tool that presents a user interface that allows the user to identify each object in the log. For example, the interface may be the same as, or similar to, the user interface 300 of FIG. 3.)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Groz with the teachings of Assaf, with a reasonable expectation of success, by generating a log of objects that a robot does not recognize and receiving data identifying the objects, as taught by Assaf ([0090]).
Regarding claim 44, Groz teaches The device according to claim 43, wherein the information is obtained based on an instruction of the person based on an image taken by the robot in the specific situation. ([0059] The remote control interface 240 may provide a telecommunication service for the remote control of the robot 110. A teleoperator may use the remote control interface 240 to review sensor data collected by sensors 250, such as sensor readings and images taken by cameras, and may manually create commands for the robot 110 to solve the task.)
Regarding claim 45, Groz teaches The device according to claim 43, wherein the one or more processors are configured to:
receive the notification including an image of the recognized specific situation taken by the robot;
display the image to the person; and
receive the information based on the displayed image. ([0079] FIG. 5 is a block diagram showing a task 500 for focusing on an object, according to an example embodiment. A teleoperator may request a robot to bring a bottle from a refrigerator. First, the robot may be trained to differentiate objects. When the robot is located near the refrigerator and a camera of the robot is directed to all objects present in the refrigerator, the teleoperator may use an image or a video captured by the camera to label each of the objects. For example, the teleoperator may select each object on the image or the video and type or pronounce a label of the object, such as “This is a bottle.” The association between the appearance of the objects and their labels may be stored to the ANNs. Labeling the objects may be a part of training the robot to differentiate objects)
Regarding claim 46, Groz teaches The device according to claim 43, wherein the information is used for controlling an autonomous movement of the robot with respect to the specific situation. ([0066] The teleoperator 160 may receive the request from the robot 110 and either manually operate the robot 110 or find mistakes in the operation of the robot and send correct teleoperation data to the robot 110)
Regarding claim 47, Groz teaches The device according to claim 43, wherein the information is not an instruction with which the person directly and remotely controls the robot to move in the specific situation. ([0050] “Hey, robot. What's the weather like today?” The robot 110 may interact with the teleoperator 160 and may provide responses to the teleoperator 160, like “Today the temperature is 25 degrees C.,”)
Regarding claim 48, Groz teaches The device according to claim 43, wherein the information is obtained from the person to instruct that an area recognized by the robot as not movable is movable. ([0067] The robot 110 may also send the request for operator assistance when there is a timeout in the execution of the task, if a predetermined object (e.g., a child) is detected in the working environment, if an object is damaged during the execution of the task, if a command cannot be performed for a predetermined period of time or predetermined number of times, and so forth. [0068] In response to sending the request for operator assistance, the processor of the robot 110 may receive the teleoperation data from the personal computing device 150 of the teleoperator and cause the robot to execute the task based on the teleoperation data and the sensor data.)
Regarding claim 49, Groz teaches The device according to claim 43, wherein the information modifies information on an area recognized by the robot as movable. ([0065] The robot 110 may interact dynamically with a working environment of the robot 110. If the working environment changes, the robot 110 can use the AI Skills/ANNs 230 to dynamically adapt to the changed working environment. The determination that the working environment has changed may be made based on the sensor data received during the operation of the robot 110.)
Regarding claim 50, Groz teaches The device according to claim 43, wherein the information is for correcting a recognition error that has caused the recognition of the specific situation. ([0070] During the training process, continuous learning, machine learning, and user interface (UI) learning may be used. Errors may be identified and corrected for the future so that in the future the probability of the same error may be lower for a similar situation/task.)
Regarding claim 52, Groz teaches The device according to claim 43, wherein the device is a smartphone. ([0049] The teleoperator 160 may include a person that provides operator assistance to the robot 110 using the personal computing device 150. In some embodiments, the personal computing device 150 may include a PC or a handheld microprocessor device, such as a smartphone or a tablet computer)
Regarding claim 53, Groz teaches A non-transitory computer readable media storing a program that causes a computer to perform as the device according to claim 43. (claim 20 A non-transitory computer-readable storage medium having embodied thereon instructions, which when executed by one or more processors, perform a method for training a robot, the method comprising)
Regarding claim 54, Groz teaches A method executed in a system including a robot and an external device, comprising: (Fig. 1 robot 110, device 150)
recognizing, by the robot, a specific situation in an execution of a task of the robot; ( [0066] Based on the received task, the robot 110 may start execution of the task. When a processor of the robot 110 determines that the probability of completing the task is below the threshold, the robot 110 sends a request for operator assistance to the personal computing device 150.)
transmitting, by the robot based on the recognizing of the specific situation, a notification relating to the task of the robot to the external device; ([0049] The teleoperator 160 may include a person that provides operator assistance to the robot 110 using the personal computing device 150. In some embodiments, the personal computing device 150 may include a PC or a handheld microprocessor device, such as a smartphone or a tablet computer [0066] Based on the received task, the robot 110 may start execution of the task. When a processor of the robot 110 determines that the probability of completing the task is below the threshold, the robot 110 sends a request for operator assistance to the personal computing device 150.)
receiving, by the external device, the notification; (Fig. 4 personal computing device 150)
transmitting, by the external device to a person who operates the external device, the notification with regard to the specific situation; ([0049] The teleoperator 160 may include a person that provides operator assistance to the robot 110 using the personal computing device 150. In some embodiments, the personal computing device 150 may include a PC or a handheld microprocessor device, such as a smartphone or a tablet computer)
receiving, by the external device, information input by the person relating to the specific situation; (Fig. 1 robot 110, device 150)
transmitting, by the external device, the information to the robot; ([0049] The teleoperator 160 may include a person that provides operator assistance to the robot 110 using the personal computing device 150. In some embodiments, the personal computing device 150 may include a PC or a handheld microprocessor device, such as a smartphone or a tablet computer [0066] Based on the received task, the robot 110 may start execution of the task. When a processor of the robot 110 determines that the probability of completing the task is below the threshold, the robot 110 sends a request for operator assistance to the personal computing device 150.)
receiving, by the robot from the external device, the information relating to the transmitted notification; and ([0049] The teleoperator 160 may include a person that provides operator assistance to the robot 110 using the personal computing device 150. In some embodiments, the personal computing device 150 may include a PC or a handheld microprocessor device, such as a smartphone or a tablet computer [0066] Based on the received task, the robot 110 may start execution of the task. When a processor of the robot 110 determines that the probability of completing the task is below the threshold, the robot 110 sends a request for operator assistance to the personal computing device 150.)
controlling, by the robot based on the received information, an autonomous movement of the robot, wherein ([0066] The teleoperator 160 may receive the request from the robot 110 and either manually operate the robot 110 or find mistakes in the operation of the robot and send correct teleoperation data to the robot 110)
and the information is for resolving the specific situation that has been recognized. ([0085] The method 700 may further include causing the robot to execute the task based on the teleoperation data at operation 725)
Groz does not expressly disclose but Assaf discloses the information concerns a movable area of the robot, the information is obtained based on a review, by the person using the external device having received the notification, of the recognized specific situation the review including an instruction, which is performed without the person specifying a position and a direction of movement of the robot, ([0041] After navigating about the first location 102 and identifying objects located in the first location 102, the robot 120 can navigate to a second location 104 of the area 102. For example, the robot 120 may determine to navigate to the second location 104 in response to determining that the robot 120 has identified or at least attempted to identify each object in the first location 102 of which the robot 120 detected the presence. In some implementations, the robot 120 self-navigates to explore multiple locations 102, 104, 106, 108 in an area, such as a house. As it travels, the robot 120 determines what objects can be automatically identified and which cannot. The robot 120 can continue to explore and move from area to area, adding to the list of unidentified objects as it explores, without waiting for human feedback using the mobile tool. At any time, perhaps during exploration or after the robot's exploration is complete, the user can use the mobile tool to provide the identity of items that the robot could not identify) the instruction being performed by instructing the robot to recognize, as the movable area, an area that has not been recognized as the movable area by the robot in the recognized specific situation, ([0091] The robot navigates about an area (602). For example, a data processing apparatus of the robot may navigate the robot about the area without human control of the navigation. The data processing apparatus may navigate the robot about the area in response to a user command to identify objects in the area. 
For example, the command may be to operate in an explore mode of operation to become familiar with objects and locations in the area. While navigating through the area, the robot can use one or more sensors to detect the presence of objects located in the area. For example, the robot may include a camera that captures images and or video of the area while the robot is navigating. [0092] The robot generates a log of objects located in the area that the robot does not recognize (604). The robot may send data about the objects in the log to a mobile tool that presents a user interface that allows the user to identify each object in the log. For example, the interface may be the same as, or similar to, the user interface 300 of FIG. 3.)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Groz with the teachings of Assaf with a reasonable expectation of success by generating a log of objects that a robot does not recognize and receiving data identifying the objects as taught by Assaf ([0090]).
Claims 34-35, 40 are rejected under 35 U.S.C. 103 as being unpatentable over Groz (US 20210031364 A1) in view of Assaf (US 20180165518 A1) and further in view of Hong (US 20200205629 A1)
Regarding claim 34, Groz does not expressly disclose but Hong discloses The robot according to claim 1, wherein the one or more processors are configured to recognize the specific situation in the execution of the task by recognizing an impediment to a movement of the robot using one or more sensors. ([0007] Accordingly, an aspect of the disclosure is to provide an apparatus and method for a cleaning robot performing an optimum task based on an object in a vicinity of the cleaning robot. More particularly, provided are a cleaning robot identifying an object in the vicinity of the cleaning robot and performing an optimum task based on an attribute of the identified object ...)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Groz with the teachings of Hong with a reasonable expectation of success by performing an optimum task based on an object in a vicinity of the cleaning robot as taught by Hong ([0007]).
Regarding claim 35, Groz teaches The robot according to claim 34, wherein the impediment to the movement of the robot is recognized based on the movable area of the robot estimated based on data acquired by the one or more sensors. ([0065] The robot 110 may interact dynamically with a working environment of the robot 110. If the working environment changes, the robot 110 can use the AI Skills/ANNs 230 to dynamically adapt to the changed working environment. The determination that the working environment has changed may be made based on the sensor data received during the operation of the robot 110.)
Regarding claim 40, Groz does not expressly disclose but Hong discloses The robot according to claim 1, wherein the robot is one of an industrial robot, a cleaning robot, an android, a pet robot, or an automatic transport device. ([0009] a method, performed by a cleaning robot, of performing a task is provided)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Groz with the teachings of Hong with a reasonable expectation of success by performing an optimum task based on an object in a vicinity of the cleaning robot as taught by Hong ([0007]).
Claims 39, 51 are rejected under 35 U.S.C. 103 as being unpatentable over Groz (US 20210031364 A1) in view of Assaf (US 20180165518 A1) and further in view of Takemura (US 20180257245 A1)
Regarding claim 39, Groz does not expressly disclose but Takemura discloses The robot according to claim 1, wherein the autonomous movement of the robot is a movement on a floor. (Fig. 1 robot with legs)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Groz with the teachings of Takemura with a reasonable expectation of success by causing the robot to walk stably as taught by Takemura ([0025]).
Regarding claim 51, Groz does not expressly disclose but Takemura discloses The device according to claim 43, wherein the autonomous movement of the robot is a movement on a floor. (Fig. 1 robot with legs)
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Groz with the teachings of Takemura with a reasonable expectation of success by causing the robot to walk stably as taught by Takemura ([0025]).
Response to Arguments
Applicant's arguments filed 6/24/2025 have been fully considered as follows:
Applicant argues that the 35 USC 112 rejections to the claims should not be maintained in view of "claim 39 has been amended to recite 'the autonomous movement of the robot is a movement on a floor.' Applicant respectfully submits that the § 112(a) rejection should be withdrawn." This argument is persuasive in view of the amendments. Therefore, the rejection is not maintained.
Applicant argues that the 35 USC 103 rejections to the claims should not be maintained in view of "However, Groz fails to disclose or teach the above-noted features of '[receiving], from the external device, information relating to the transmitted notification, wherein the information concerns a movable area of the robot, and the information is obtained based on a review, by a person using the external device having received the transmitted notification, of the recognized specific situation, the review including an instruction, which is performed without the person specifying a position and a direction of movement of the robot, the instruction being performed by instructing the robot to recognize, as the movable area, an area that has not been recognized as the movable area by the robot in the recognized specific situation,' (emphasis added) as recited in amended claim 1." However, in view of the amendments, a new ground of rejection is set forth above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARAH TRAN whose telephone number is (313) 446-6642. The examiner can normally be reached 8am-5pm M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.A.T./Examiner, Art Unit 3656
/KHOI H TRAN/Supervisory Patent Examiner, Art Unit 3656