DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed on 12/01/2025 with respect to claim(s) 1-15 and 21-25 have been fully considered but are moot in view of the new grounds of rejection provided below, which are necessitated by Applicant’s amendments to the claims. The new grounds of rejection for the independent claims are based on Yoon in view of Kuhnel.
The same reasoning applied to the independent claims above also applies to their corresponding dependent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-5, 13-15, and 21-25 are rejected under 35 U.S.C. 103 as being unpatentable over Yoon et al. (US 2015/0032260 A1) (Hereinafter Yoon) in view of Kuhnel (DE 102016210421 A1, attached English translated copy is used for claim mapping).
Regarding Claim 1, Yoon teaches a robot control method, comprising:
acquiring posture data of a user in response to a posture interaction wakeup instruction (See Fig
18 section 505, 508, 510 discloses capturing image of a user which is considered as posture data upon receiving manipulation instruction input through voice which is considered as a posture interaction wakeup instruction, Para [0011] “In accordance with one aspect of the present disclosure, a cleaning robot that performs cleaning while travelling a space to be cleaned, the cleaning robot includes: a travelling unit that moves the cleaning robot; a cleaning unit that cleans the space to be cleaned; an image capturing unit that captures an image viewed from the cleaning robot; a voice input unit to which a user's voice instructions are input; and a controller that obtains the user's motion instructions through the image capturing unit when the user's voice instructions are input through the voice input unit and determines a restricted area and/or a focused cleaning area to based on the user's motion instructions.”, Para [0021] “The obtaining of the motion of the user may include: detecting a hand and a shoulder of the user from the image of the user; and determining coordinates of the hand and the shoulder of the user using distance information of the user.”);
determining, according to the posture data, a target operation region (See at least Para [0022] “The determining of the restricted area and/or the focused cleaning area may include: determining an area instructed by the user based on the coordinates of the hand and the shoulder of the user; and determining the area instructed by the user as the restricted area and/or the focused cleaning area.”, discloses focused cleaning area that is considered as the target operation region which is determined by an area instructed by the user based on the coordinates of the hand and the shoulder of the user which is considered as posture data); …
causing the robot to move to the target operation region so as to perform a set operation task (See at least Para [0011] “In accordance with one aspect of the present disclosure, a cleaning robot that performs cleaning while travelling a space to be cleaned, the cleaning robot includes: a travelling unit that moves the cleaning robot; a cleaning unit that cleans the space to be cleaned;…”, discloses the robot travels a space to be cleaned which is construed as a robot moving from a current position to the target operation region so as to perform a set operation task, Fig 18 item 520 shows calculate concentrated cleaning area which is construed as first the area is calculated using sensor data (image, motion) and then the robot is moved to that area for cleaning which is construed as robot moving from a current position to a target operation region so as to perform a set operation task which is cleaning).
Although Yoon teaches an obstacle detecting unit that detects an obstacle in the space to be cleaned (See at least Para [0063] “Referring to FIGS. 2A through 4, the cleaning robot 100 includes … an obstacle detecting unit 150 that detects an obstacle in the space to be cleaned …”), Yoon does not explicitly disclose … wherein the target operation region and a position where a robot currently located belong to different rooms;
Kuhnel teaches …
determining, according to the posture data, a target operation region; wherein the target operation region and a position where a robot currently located belong to different rooms (See at least Para [0017] “To enable simple and precise programming in the process of controlling the cleaning robot, the user's signals are given in the form of voice commands and/or gestures and/or radio signals. Depending on the design of the cleaning robot, it can be provided with information of different types and levels of detail, depending on its interfaces/sensors for receiving user signals. As mentioned above, this information includes, for example, the degree of soiling of the areas to be cleaned, information about the type of floor or floor covering, or even just a name of an area or room to be cleaned, which the cleaning robot can identify and process.”, Para [0018] “In another embodiment, the cleaning robot can be provided with the signal or commands for programming the control system by means of gestures, for example of the legs or feet. Simple gestures can contain information about a sequence, direction, or a simplified representation of a driving route for the cleaning robot.”, Para [0025] “Fig. 1 shows a sketch of an area to be cleaned, which includes several rooms 4 with surfaces to be cleaned. The area can, for example, correspond to the area of one floor of a multi-story house, where on each floor the rooms 4 with the areas to be cleaned may have different floor coverings. Accordingly, the individual surfaces to be cleaned can also have different levels of soiling.”);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yoon with the teachings of Kuhnel and include the feature of the target operation region and a position where a robot currently located belong to different rooms, thereby enhancing efficiency by providing the ability to avoid specific regions of a space (See at least Para [0006] “It is an object of the present invention to provide a method for controlling a cleaning robot and a cleaning robot for carrying out the method in order to enable the most effective and efficient cleaning of surfaces while ensuring easy operation and cost-effective manufacture of a cleaning robot”).
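Examiner's note (illustration only): the sequence mapped above for claim 1 (posture interaction wakeup instruction, acquisition of posture data, determination of a target operation region, movement to that region, and performance of the set operation task) can be pictured with the minimal control-flow sketch below. Every name in the sketch is a hypothetical assumption introduced for illustration; none is an API or structure disclosed by Yoon or Kuhnel.

    # Hypothetical control-flow skeleton for the mapped steps of claim 1.
    def handle_posture_interaction(robot):
        # Wakeup: e.g. a voice instruction (Yoon, Fig 18 item 505).
        if not robot.received_wakeup_instruction():
            return
        # Acquire posture data of the user (image / depth measurement).
        posture = robot.capture_posture_data()
        # Determine the target operation region from the posture data.
        target_region = robot.determine_target_region(posture)
        # Move to the target operation region, possibly in a different room,
        # and perform the set operation task (e.g. focused cleaning).
        robot.move_to(target_region)
        robot.perform_operation_task(target_region)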
Regarding Claim 2, Yoon teaches all the elements of claim 1. Yoon further teaches the method according to claim 1, wherein the posture interaction wakeup instruction comprises at least one of:
a voice instruction given by the user to wake up a posture interaction function of the robot (See
Fig 18 section 505 discloses manipulation instruction input through voice, Para [0011] “In accordance with one aspect of the present disclosure, a cleaning robot that performs cleaning while travelling a space to be cleaned, the cleaning robot includes: a travelling unit that moves the cleaning robot; a cleaning unit that cleans the space to be cleaned; an image capturing unit that captures an image viewed from the cleaning robot; a voice input unit to which a user's voice instructions are input; and a controller that obtains the user's motion instructions through the image capturing unit when the user's voice instructions are input through the voice input unit and determines a restricted area and/or a focused cleaning area to based on the user's motion instructions.”);
a control instruction given by the user through a terminal device to wake up the posture interaction function of the robot; and
a gesture instruction given by the user to wake up the posture interaction function of the robot.
Regarding Claim 3, Yoon teaches all the elements of claim 1. Yoon further teaches the method according to claim 1, wherein the acquiring posture data of a user comprises:
performing three-dimensional measurement on the user through a sensor component mounted to the robot to obtain three-dimensional measurement data (See at least Para [0067] “The image capturing unit 130 may include a three-dimensional camera 131 that is disposed at the front portion of the cleaning robot 100 and captures a three-dimensional image viewed from the cleaning robot 100… As the three-dimensional camera 131, a stereo camera module or a depth sensor module may be employed.”); and
acquiring a space coordinate corresponding to a gesture of the user as the posture data of the user according to the three-dimensional measurement data (See at least [0108] “FIGS. 10A and 10B and FIGS. 11A and 11B illustrate the case in which the cleaning robot of FIG. 1 determines coordinates of an area instructed by the user from the image of the user.”, Para [0113] “Also, the cleaning robot 100 may determine three-dimensional relative coordinates of the shoulder RS and the hand RH of the user U based on the distance d1 between the user U and the cleaning robot 100 and the direction of the user U, the distance d2 between the shoulder RS of the user U and the cleaning robot 100 and the direction of the shoulder RS and the cleaning robot 100 and the distance d3 between the hand RH of the user U and the cleaning robot 100 and the direction of the hand RH. Here, the three-dimensional relative coordinates of the shoulder RS and the hand RH of the user U define coordinates in a three-dimensional relative coordinate system in which the position of the cleaning robot 100 is set as an origin. The three-dimensional relative coordinate system defines a coordinate system in which the cleaning robot 100 is set as an origin, a front direction of the cleaning robot 100 from a cleaning floor is set as a +y-axis, a right direction of the cleaning robot 100 from the cleaning floor is set as an +x-axis and an upward direction of the cleaning robot 100 from the cleaning floor is set as a +z-axis.”).
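Examiner's note (illustration only): the robot-centered coordinate system quoted above from Yoon Para [0113] (origin at the robot, +y forward, +x right, +z up) can be pictured with the short sketch below, which converts a measured distance and direction to a detected body part into coordinates in that frame. The function name, angle conventions, and numeric values are assumptions introduced for illustration and are not taken from Yoon.

    import math

    # Convert a range/bearing/elevation measurement of a detected body part
    # (e.g. the user's hand or shoulder) into the robot-centered frame described
    # in Yoon Para [0113]: +y forward, +x right, +z up, origin at the robot.
    def to_robot_frame(distance, bearing_deg, elevation_deg):
        bearing = math.radians(bearing_deg)       # horizontal angle from +y, positive toward +x
        elevation = math.radians(elevation_deg)   # vertical angle above the cleaning floor
        horizontal = distance * math.cos(elevation)
        x = horizontal * math.sin(bearing)        # right of the robot
        y = horizontal * math.cos(bearing)        # in front of the robot
        z = distance * math.sin(elevation)        # above the floor
        return (x, y, z)

    # Example with assumed measurements: hand 1.2 m away, 30 degrees right, 40 degrees up.
    hand_xyz = to_robot_frame(1.2, 30.0, 40.0)
    shoulder_xyz = to_robot_frame(1.5, 28.0, 55.0)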
Regarding Claim 4, Yoon teaches all the elements of claim 3. Yoon further teaches the method according to claim 3, wherein the three-dimensional measurement data comprises an image obtained by shooting the user and a distance between the user and the robot (See at least Para [0113] “Also, the cleaning robot 100 may determine three-dimensional relative coordinates of the shoulder RS and the hand RH of the user U based on the distance d1 between the user U and the cleaning robot 100 and the direction of the user U, the distance d2 between the shoulder RS of the user U and the cleaning robot 100 and the direction of the shoulder RS and the cleaning robot 100 and the distance d3 between the hand RH of the user U and the cleaning robot 100 and the direction of the hand RH…”).
Regarding Claim 5, Yoon teaches all the elements of claim 4. Yoon further teaches the method according to claim 4, wherein the acquiring a space coordinate corresponding to a gesture of the user according to the three-dimensional measurement data comprises:
recognizing the image to obtain posture key points of the user (See Fig 10B, which shows posture key points of the user);
determining, from the posture key points, a target key point used to represent the gesture of
the user (See Fig 10B, Fig 11A, Para [0083] “The motion recognition module 193 detects positions of particular portions of the user U, such as a hand and a shoulder of the user U from the three-dimensional image and determines a trajectory of the hand using the detected positions of the hand and the shoulder. The motion recognition module 193 detects the manipulation instructions intended by the user U by comparing the determined trajectory of the hand with motion instructions stored according to various manipulation instructions. In addition, the motion recognition module 193 may detect the position of the space to be cleaned instructed by the user's hand using the detected positions of the hand and the shoulder.”);
determining a distance between the target key point and the robot according to the distance between the user and the robot (See at least Fig 11A, Para [0112] “As illustrated in FIG. 11A, when the shoulder RS and the hand RH of the user U are detected from the three-dimensional image of the user U, the cleaning robot 100 may determine a distance d1 between the user U and the cleaning robot 100, a distance d2 between the shoulder RS of the user U and the cleaning robot 100, and a distance d3 between the hand RH of the user U and the cleaning robot 100 using the distance information obtained by the image capturing unit (see 130 of FIG. 2A)…”); and
determining the space coordinate corresponding to the gesture of the user according to a coordinate of the target key point and the distance between the target key point and the robot (See at least Para [0113] “Also, the cleaning robot 100 may determine three-dimensional relative coordinates of the shoulder RS and the hand RH of the user U based on the distance d1 between the user U and the cleaning robot 100 and the direction of the user U, the distance d2 between the shoulder RS of the user U and the cleaning robot 100 and the direction of the shoulder RS and the cleaning robot 100 and the distance d3 between the hand RH of the user U and the cleaning robot 100 and the direction of the hand RH…”).
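Examiner's note (illustration only): one way to picture how a space coordinate for a gesture key point could be obtained from its image location and the measured distance, as mapped above for claims 3-5, is a standard pinhole-camera back-projection. The intrinsic parameters, names, and numbers below are assumptions introduced for illustration and are not values from the cited references.

    # Back-project a posture key point detected in the image (e.g. the hand)
    # to a camera-frame space coordinate using the per-pixel distance.
    def keypoint_to_space_coordinate(u, v, depth, fx, fy, cx, cy):
        # (u, v): pixel location of the target key point
        # depth: distance to that key point from the depth sensor (metres)
        # fx, fy: focal lengths in pixels; (cx, cy): principal point
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        z = depth
        return (x, y, z)

    # Example with assumed intrinsics for a 640x480 depth module.
    hand_uv = (410, 225)
    hand_space = keypoint_to_space_coordinate(hand_uv[0], hand_uv[1], 1.35,
                                              fx=570.0, fy=570.0, cx=320.0, cy=240.0)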
Regarding Claim 13, Yoon teaches all the elements of claim 1. Yoon further teaches the method according to claim 1, wherein there is a physical obstacle or virtual obstacle between the target operation region and the region to which the position where the robot is currently located belongs (See at least Para [0063] “Referring to FIGS. 2A through 4, the cleaning robot 100 includes … an obstacle detecting unit 150 that detects an obstacle in the space to be cleaned …”, Para [0072] “… That is, when the infrared sensor 151 placed at the front portion of the cleaning robot 100 detects the obstacle, it may be determined that the obstacle is present at the front portion of the cleaning robot 100, and when the infrared sensor 151 placed at the right portion of the cleaning robot 100 detects the obstacle, it may be determined that the obstacle is present at the right portion of the cleaning robot 100.”).
Regarding Claim 14, Yoon teaches a robot, comprising a robot body, a sensor component, a controller, and a motion component that are mounted to the robot body (See at least Fig 3 shows a robot body, Fig 2B item 191 - voice recognition module, item 195 - main control module, Para [0081] “The robot controller 190 includes a voice recognition module 191 that detects the user's manipulation instructions through the user's voice based on the user's voice signals obtained by the voice input unit 140, a motion recognition module 193 that detects the user's manipulation instructions according to the user's motion based on the three-dimensional image captured by the image capturing unit 130, and a main control module 195 that controls the operation of the cleaning robot 100 according to the user's manipulation instructions.”, Para [0063]); wherein,
the sensor component is configured to acquire posture data of a user in response to an
operation control instruction of the user (See Fig 18 section 505, 508, 510 discloses capturing image of a user which is considered as posture data upon receiving manipulation instruction input through voice which is considered as an operation control instruction of the user, Para [0011] “In accordance with one aspect of the present disclosure, a cleaning robot that performs cleaning while travelling a space to be cleaned, the cleaning robot includes: a travelling unit that moves the cleaning robot; a cleaning unit that cleans the space to be cleaned; an image capturing unit that captures an image viewed from the cleaning robot; a voice input unit to which a user's voice instructions are input; and a controller that obtains the user's motion instructions through the image capturing unit when the user's voice instructions are input through the voice input unit and determines a restricted area and/or a focused cleaning area to based on the user's motion instructions.”, Para [0021] “The obtaining of the motion of the user may include: detecting a hand and a shoulder of the user from the image of the user; and determining coordinates of the hand and the shoulder of the user using distance information of the user.”); and
the controller is configured to determine, according to the posture data of the user, a target operation region (See at least Para [0022] “The determining of the restricted area and/or the focused cleaning area may include: determining an area instructed by the user based on the coordinates of the hand and the shoulder of the user; and determining the area instructed by the user as the restricted area and/or the focused cleaning area.”, discloses focused cleaning area that is considered as the target operation region which is determined by an area instructed by the user based on the coordinates of the hand and the shoulder of the user which is considered as posture data) … , and control the motion component to move to the target operation region so as to perform an operation task (See at least Para [0011] “In accordance with one aspect of the present disclosure, a cleaning robot that performs cleaning while travelling a space to be cleaned, the cleaning robot includes: a travelling unit that moves the cleaning robot; a cleaning unit that cleans the space to be cleaned;…”, discloses the robot travels a space to be cleaned which is construed as a robot moving from a current position to the target operation region so as to perform an operation task, Fig 18 item 520 shows calculate concentrated cleaning area which is construed as first the area is calculated using sensor data (image, motion) and then the robot is moved to that area for cleaning which is construed as robot moving from a current position to a target operation region so as to perform an operation task which is cleaning), …
Although Yoon teaches an obstacle detecting unit that detects an obstacle in the space to be cleaned (See at least Para [0063] “Referring to FIGS. 2A through 4, the cleaning robot 100 includes … an obstacle detecting unit 150 that detects an obstacle in the space to be cleaned …”), Yoon does not explicitly disclose … wherein the target operation region and a position where a robot currently located belong to different rooms.
Kuhnel teaches … wherein the target operation region and a position where a robot currently located belong to different rooms (See at least Para [0017] “To enable simple and precise programming in the process of controlling the cleaning robot, the user's signals are given in the form of voice commands and/or gestures and/or radio signals. Depending on the design of the cleaning robot, it can be provided with information of different types and levels of detail, depending on its interfaces/sensors for receiving user signals. As mentioned above, this information includes, for example, the degree of soiling of the areas to be cleaned, information about the type of floor or floor covering, or even just a name of an area or room to be cleaned, which the cleaning robot can identify and process.”, Para [0018] “In another embodiment, the cleaning robot can be provided with the signal or commands for programming the control system by means of gestures, for example of the legs or feet. Simple gestures can contain information about a sequence, direction, or a simplified representation of a driving route for the cleaning robot.”, Para [0025] “Fig. 1 shows a sketch of an area to be cleaned, which includes several rooms 4 with surfaces to be cleaned. The area can, for example, correspond to the area of one floor of a multi-story house, where on each floor the rooms 4 with the areas to be cleaned may have different floor coverings. Accordingly, the individual surfaces to be cleaned can also have different levels of soiling.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yoon with the teachings of Kuhnel and include the feature of the target operation region and a position where a robot currently located belong to different rooms, thereby enhancing efficiency by providing the ability to avoid specific regions of a space (See at least Para [0006] “It is an object of the present invention to provide a method for controlling a cleaning robot and a cleaning robot for carrying out the method in order to enable the most effective and efficient cleaning of surfaces while ensuring easy operation and cost-effective manufacture of a cleaning robot”).
Regarding Claim 15, Yoon teaches all the elements of claim 14. Yoon further teaches the robot according to claim 14, wherein the sensor component comprises a depth sensor (See at least Para [0067] “The image capturing unit 130 may include a three-dimensional camera 131 that is disposed at the front portion of the cleaning robot 100 and captures a three-dimensional image viewed from the cleaning robot 100… As the three-dimensional camera 131, a stereo camera module or a depth sensor module may be employed.”).
Regarding Claim 21, Yoon teaches a robot control method, comprising:
acquiring posture data of a user in response to a posture interaction wakeup instruction (See Fig
18 section 505, 508, 510 discloses capturing image of a user which is considered as posture data upon receiving manipulation instruction input through voice which is considered as a posture interaction wakeup instruction, Para [0011] “In accordance with one aspect of the present disclosure, a cleaning robot that performs cleaning while travelling a space to be cleaned, the cleaning robot includes: a travelling unit that moves the cleaning robot; a cleaning unit that cleans the space to be cleaned; an image capturing unit that captures an image viewed from the cleaning robot; a voice input unit to which a user's voice instructions are input; and a controller that obtains the user's motion instructions through the image capturing unit when the user's voice instructions are input through the voice input unit and determines a restricted area and/or a focused cleaning area to based on the user's motion instructions.”, Para [0021] “The obtaining of the motion of the user may include: detecting a hand and a shoulder of the user from the image of the user; and determining coordinates of the hand and the shoulder of the user using distance information of the user.”);
determining, according to the posture data, a target operation region (See at least Para [0022] “The determining of the restricted area and/or the focused cleaning area may include: determining an area instructed by the user based on the coordinates of the hand and the shoulder of the user; and determining the area instructed by the user as the restricted area and/or the focused cleaning area.”, discloses focused cleaning area that is considered as the target operation region which is determined by an area instructed by the user based on the coordinates of the hand and the shoulder of the user which is considered as posture data);
…
causing the robot to move to the target operation region so as to perform a set operation task (See at least Para [0011] “In accordance with one aspect of the present disclosure, a cleaning robot that performs cleaning while travelling a space to be cleaned, the cleaning robot includes: a travelling unit that moves the cleaning robot; a cleaning unit that cleans the space to be cleaned;…”, discloses the robot travels a space to be cleaned which is construed as a robot moving from a current position to the target operation region so as to perform a set operation task, Fig 18 item 520 shows calculate concentrated cleaning area which is construed as first the area is calculated using sensor data (image, motion) and then the robot is moved to that area for cleaning which is construed as robot moving from a current position to a target operation region so as to perform a set operation task which is cleaning).
However, Yoon does not explicitly disclose …
wherein the target operation region and a current operation region where the robot performing an operation task currently belong to different rooms; and …
Kuhnel teaches …
wherein the target operation region and a current operation region where the robot performing an operation task currently belong to different rooms (See at least Para [0017] “To enable simple and precise programming in the process of controlling the cleaning robot, the user's signals are given in the form of voice commands and/or gestures and/or radio signals. Depending on the design of the cleaning robot, it can be provided with information of different types and levels of detail, depending on its interfaces/sensors for receiving user signals. As mentioned above, this information includes, for example, the degree of soiling of the areas to be cleaned, information about the type of floor or floor covering, or even just a name of an area or room to be cleaned, which the cleaning robot can identify and process.”, Para [0018] “In another embodiment, the cleaning robot can be provided with the signal or commands for programming the control system by means of gestures, for example of the legs or feet. Simple gestures can contain information about a sequence, direction, or a simplified representation of a driving route for the cleaning robot.”, Para [0025] “Fig. 1 shows a sketch of an area to be cleaned, which includes several rooms 4 with surfaces to be cleaned. The area can, for example, correspond to the area of one floor of a multi-story house, where on each floor the rooms 4 with the areas to be cleaned may have different floor coverings. Accordingly, the individual surfaces to be cleaned can also have different levels of soiling.”); and …
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yoon with the teachings of Kuhnel and include the feature of the target operation region and a current operation region where the robot performing an operation task currently belong to different rooms, thereby enhancing efficiency by providing the ability to perform tasks in different regions of a space (See at least Para [0006] “It is an object of the present invention to provide a method for controlling a cleaning robot and a cleaning robot for carrying out the method in order to enable the most effective and efficient cleaning of surfaces while ensuring easy operation and cost-effective manufacture of a cleaning robot”).
Regarding Claim 22, Yoon teaches all the elements of claim 1.
However, Yoon does not explicitly disclose the robot control method according to claim 1, wherein the target operation region and a position where the user currently located belong to different rooms.
Kuhnel teaches the robot control method according to claim 1, wherein the target operation region and a position where the user currently located belong to different rooms (See at least Fig 1, Para [0017] “To enable simple and precise programming in the process of controlling the cleaning robot, the user's signals are given in the form of voice commands and/or gestures and/or radio signals. Depending on the design of the cleaning robot, it can be provided with information of different types and levels of detail, depending on its interfaces/sensors for receiving user signals. As mentioned above, this information includes, for example, the degree of soiling of the areas to be cleaned, information about the type of floor or floor covering, or even just a name of an area or room to be cleaned, which the cleaning robot can identify and process.”, Para [0018] “In another embodiment, the cleaning robot can be provided with the signal or commands for programming the control system by means of gestures, for example of the legs or feet. Simple gestures can contain information about a sequence, direction, or a simplified representation of a driving route for the cleaning robot.”, Para [0025] “Fig. 1 shows a sketch of an area to be cleaned, which includes several rooms 4 with surfaces to be cleaned. The area can, for example, correspond to the area of one floor of a multi-story house, where on each floor the rooms 4 with the areas to be cleaned may have different floor coverings. Accordingly, the individual surfaces to be cleaned can also have different levels of soiling.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yoon with the teachings of Kuhnel and include the feature of the target operation region and a position where the user currently located belong to different rooms, thereby enhancing efficiency by providing the ability to avoid specific regions of a space (See at least Para [0006] “It is an object of the present invention to provide a method for controlling a cleaning robot and a cleaning robot for carrying out the method in order to enable the most effective and efficient cleaning of surfaces while ensuring easy operation and cost-effective manufacture of a cleaning robot”).
Regarding Claim 23, Yoon teaches all the elements of claim 1. Yoon further teaches the robot control method according to claim 1, wherein the posture data of the user is acquired by a sensor component mounted on the robot (See at least Para [0081] “The robot controller 190 includes a voice recognition module 191 that detects the user's manipulation instructions through the user's voice based on the user's voice signals obtained by the voice input unit 140, a motion recognition module 193 that detects the user's manipulation instructions according to the user's motion based on the three-dimensional image captured by the image capturing unit 130, and a main control module 195 that controls the operation of the cleaning robot 100 according to the user's manipulation instructions.”), and …
However, Yoon does not explicitly disclose … the target operation region is outside a field of view of the sensor component.
Kuhnel teaches the posture data of the user is acquired by a sensor component mounted on the
robot, and the target operation region is outside a field of view of the sensor component (See at least Para [0017] “To enable simple and precise programming in the process of controlling the cleaning robot, the user's signals are given in the form of voice commands and/or gestures and/or radio signals. Depending on the design of the cleaning robot, it can be provided with information of different types and levels of detail, depending on its interfaces/sensors for receiving user signals. As mentioned above, this information includes, for example, the degree of soiling of the areas to be cleaned, information about the type of floor or floor covering, or even just a name of an area or room to be cleaned, which the cleaning robot can identify and process.”, Para [0018] “In another embodiment, the cleaning robot can be provided with the signal or commands for programming the control system by means of gestures, for example of the legs or feet. Simple gestures can contain information about a sequence, direction, or a simplified representation of a driving route for the cleaning robot.”, Para [0025] “Fig. 1 shows a sketch of an area to be cleaned, which includes several rooms 4 with surfaces to be cleaned. The area can, for example, correspond to the area of one floor of a multi-story house, where on each floor the rooms 4 with the areas to be cleaned may have different floor coverings. Accordingly, the individual surfaces to be cleaned can also have different levels of soiling.”, Para [0027] “In addition, the cleaning robot 1 has at least one further sensor 6 that registers user-specific data based on signals from the user 2. In particular, this additional sensor 6 also makes it possible to identify a specific user 2. The signals of the user-specific data or... Instructions in the form of voice commands, gestures or radio signals from the user are received by the sensors 6 of the cleaning robot 1.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yoon with the teachings of Kuhnel and include the feature of the posture data of the user being acquired by a sensor component mounted on the robot, and the target operation region being outside a field of view of the sensor component, thereby enhancing efficiency by providing the ability to avoid specific regions of a space (See at least Para [0006] “It is an object of the present invention to provide a method for controlling a cleaning robot and a cleaning robot for carrying out the method in order to enable the most effective and efficient cleaning of surfaces while ensuring easy operation and cost-effective manufacture of a cleaning robot”).
Regarding Claim 24, Yoon teaches all the elements of claim 14.
However, Yoon does not explicitly disclose the robot according to claim 14, wherein the target operation region and a position where the user currently located belong to different rooms.
Kuhnel teaches the robot according to claim 14, wherein the target operation region and a position where the user currently located belong to different rooms (See at least Fig 1, Para [0017] “To enable simple and precise programming in the process of controlling the cleaning robot, the user's signals are given in the form of voice commands and/or gestures and/or radio signals. Depending on the design of the cleaning robot, it can be provided with information of different types and levels of detail, depending on its interfaces/sensors for receiving user signals. As mentioned above, this information includes, for example, the degree of soiling of the areas to be cleaned, information about the type of floor or floor covering, or even just a name of an area or room to be cleaned, which the cleaning robot can identify and process.”, Para [0018] “In another embodiment, the cleaning robot can be provided with the signal or commands for programming the control system by means of gestures, for example of the legs or feet. Simple gestures can contain information about a sequence, direction, or a simplified representation of a driving route for the cleaning robot.”, Para [0025] “Fig. 1 shows a sketch of an area to be cleaned, which includes several rooms 4 with surfaces to be cleaned. The area can, for example, correspond to the area of one floor of a multi-story house, where on each floor the rooms 4 with the areas to be cleaned may have different floor coverings. Accordingly, the individual surfaces to be cleaned can also have different levels of soiling.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yoon with the teachings of Kuhnel and include the feature of the target operation region and a position where the user currently located belong to different rooms, thereby enhancing efficiency by providing the ability to avoid specific regions of a space (See at least Para [0006] “It is an object of the present invention to provide a method for controlling a cleaning robot and a cleaning robot for carrying out the method in order to enable the most effective and efficient cleaning of surfaces while ensuring easy operation and cost-effective manufacture of a cleaning robot”).
Regarding Claim 25, Yoon teaches all the elements of claim 14. Yoon further teaches the robot according to claim 14, wherein the posture data of the user is acquired by a sensor component mounted on the robot (See at least Para [0081] “The robot controller 190 includes a voice recognition module 191 that detects the user's manipulation instructions through the user's voice based on the user's voice signals obtained by the voice input unit 140, a motion recognition module 193 that detects the user's manipulation instructions according to the user's motion based on the three-dimensional image captured by the image capturing unit 130, and a main control module 195 that controls the operation of the cleaning robot 100 according to the user's manipulation instructions.”), and …
However, Yoon does not explicitly disclose … the target operation region is outside a field of view of the sensor component.
Kuhnel teaches the posture data of the user is acquired by a sensor component mounted on the
robot, and the target operation region is outside a field of view of the sensor component (See at least Para [0017] “To enable simple and precise programming in the process of controlling the cleaning robot, the user's signals are given in the form of voice commands and/or gestures and/or radio signals. Depending on the design of the cleaning robot, it can be provided with information of different types and levels of detail, depending on its interfaces/sensors for receiving user signals. As mentioned above, this information includes, for example, the degree of soiling of the areas to be cleaned, information about the type of floor or floor covering, or even just a name of an area or room to be cleaned, which the cleaning robot can identify and process.”, Para [0018] “In another embodiment, the cleaning robot can be provided with the signal or commands for programming the control system by means of gestures, for example of the legs or feet. Simple gestures can contain information about a sequence, direction, or a simplified representation of a driving route for the cleaning robot.”, Para [0025] “Fig. 1 shows a sketch of an area to be cleaned, which includes several rooms 4 with surfaces to be cleaned. The area can, for example, correspond to the area of one floor of a multi-story house, where on each floor the rooms 4 with the areas to be cleaned may have different floor coverings. Accordingly, the individual surfaces to be cleaned can also have different levels of soiling.”, Para [0027] “In addition, the cleaning robot 1 has at least one further sensor 6 that registers user-specific data based on signals from the user 2. In particular, this additional sensor 6 also makes it possible to identify a specific user 2. The signals of the user-specific data or... Instructions in the form of voice commands, gestures or radio signals from the user are received by the sensors 6 of the cleaning robot 1.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yoon with the teachings of Kuhnel and include the feature of the posture data of the user being acquired by a sensor component mounted on the robot, and the target operation region being outside a field of view of the sensor component, thereby enhancing efficiency by providing the ability to avoid specific regions of a space (See at least Para [0006] “It is an object of the present invention to provide a method for controlling a cleaning robot and a cleaning robot for carrying out the method in order to enable the most effective and efficient cleaning of surfaces while ensuring easy operation and cost-effective manufacture of a cleaning robot”).
19. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Yoon et al. (US 2015/0032260 A1) (Hereinafter Yoon) in view of Kuhnel (DE 102016210421 A1, attached English translated copy is used for claim mapping), and further in view of Shin et al. (US 2016/0154996 A1) (Hereinafter Shin).
20. Regarding Claim 6, Yoon teaches all the elements of claim 3.
However, Yoon does not explicitly disclose the method according to claim 3, wherein the determining, according to the posture data, a target operation region comprises:
determining, according to the space coordinate corresponding to the gesture of the user, a target operation direction; and
determining, from a candidate operation region, an operation region adapted to the target operation direction as the target operation region.
Shin teaches the method according to claim 3, wherein the determining, according to the posture data, a target operation region comprises:
determining, according to the space coordinate corresponding to the gesture of the user, a target operation direction (See at least Para [0063] “In this embodiment, when the arm angle of the user expressed by the user's gesture is about 90 degrees, for example, it may signify a command in which the robot cleaner moves to an area defined in a direction in which the user's arm points…”); and
determining, from a candidate operation region, an operation region adapted to the target operation direction as the target operation region (See at least Para [0062] “…Referring to FIG. 8, map information, in which a plurality of areas is divided, and the present position information on the map of the robot cleaner 1 may be stored in the memory 29. Then, when the gesture recognition command is input in the voice recognition device 40, the voice recognition device 40 may recognize the voice command generation position.”, Para [0063] “In this embodiment, when the arm angle of the user expressed by the user's gesture is about 90 degrees, for example, it may signify a command in which the robot cleaner moves to an area defined in a direction in which the user's arm points. For example, as illustrated in FIG. 8, a plurality of areas W, X, Y, and Z may be stored in the map information of the memory 29. The voice command generation position and the present position of the robot cleaner may be determined.”, Para [0064] “For example, when the robot cleaner 1 is currently disposed on an area X, and the user's gesture expresses that the left arm is directed at an angle of about 90 degrees in the left direction of the user, the robot cleaner 1 may recognize an area Y existing at the right side thereof as an area to which the robot cleaner 1 itself moves. Then, the robot cleaner 1 may move to the area Y. As the robot cleaner knows coordinate information of the present position thereof and coordinate information of the area Y, the robot cleaner 1 may know a coordinate of a center of the area Y to move to the center of the area Y.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yoon with the teachings of Shin and include the feature of determining, according to the space coordinate corresponding to the gesture of the user, a target operation direction and determining, from a candidate operation region, an operation region adapted to the target operation direction as the target operation region in order to select, from a candidate region, the area pointed to by the user as a target cleaning area, thereby providing a cleaning operation at a precise location.
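Examiner's note (illustration only): the selection of a stored area lying in the direction indicated by the user, as described in Shin Paras [0062]-[0064], can be pictured with the sketch below, which picks the candidate region whose center best aligns with the pointed direction. The region names, coordinates, and alignment rule are assumptions introduced for illustration.

    import math

    # Pick, from stored candidate regions, the one whose center best matches the
    # direction indicated by the user's gesture (smallest angle between the
    # pointed direction and the vector from the robot to the region center).
    def pick_region_in_direction(robot_xy, direction_xy, region_centers):
        best_name, best_angle = None, math.pi
        for name, (cx, cy) in region_centers.items():
            vx, vy = cx - robot_xy[0], cy - robot_xy[1]
            norm = math.hypot(vx, vy) * math.hypot(*direction_xy)
            if norm == 0:
                continue
            dot = vx * direction_xy[0] + vy * direction_xy[1]
            angle = math.acos(max(-1.0, min(1.0, dot / norm)))
            if angle < best_angle:
                best_name, best_angle = name, angle
        return best_name

    # Example map with four assumed areas; pointing along +x selects area "Y".
    regions = {"W": (-3.0, 2.0), "X": (0.0, 3.0), "Y": (3.0, 0.5), "Z": (0.0, -4.0)}
    target = pick_region_in_direction((0.0, 0.0), (1.0, 0.0), regions)  # -> "Y"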
21. Claim(s) 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Yoon et al. (US 2015/0032260 A1) (Hereinafter Yoon) in view of Kuhnel (DE 102016210421 A1, attached English translated copy is used for claim mapping), Shin et al. (US 2016/0154996 A1) (Hereinafter Shin), and further in view of Tanaka (JP2018109911A).
22. Regarding Claim 7, Yoon teaches all the elements of claim 6.
However, Yoon does not explicitly disclose the method according to claim 6, wherein the
determining, according to the space coordinate corresponding to the gesture of the user, a target operation direction comprises:
performing straight line fitting according to the space coordinate corresponding to the gesture of the user to obtain a space straight line; and
determining a direction of extension of the space straight line to an end of the gesture of the user as the operation direction.
Tanaka teaches the method according to claim 6, wherein the determining, according to the
space coordinate corresponding to the gesture of the user, a target operation direction comprises:
performing straight line fitting according to the space coordinate corresponding to the gesture of the user to obtain a space straight line (See at least Fig 2, Fig 3, Fig 4, Para [0035] “…Specifically, the detection operation specifying unit 117 specifies the indicated position based on the angle θ 1 formed by the center line L 1 and the straight line obtained by extending the motion vector L 2.”, discloses motion vector which is construed as the gesture of the user); and
determining a direction of extension of the space straight line to an end of the gesture of the user as the operation direction (See at least Fig 2, Fig 3, Fig 4, Para [0035] “When the motion vector detection unit 115 specifies the outer shape of at least a part of the object, the detection operation specifying unit 117 specifies the indicated position based on the direction in which the object of the specified outer shape moves. For example, the detection operation specifying unit 117 specifies the indicated position based on the direction in which the tip of the finger specified by the motion vector detection unit 115 moves. Further, the detection operation specifying unit 117 may specify the indicated position based on the angle of the direction in which the object moves with respect to the screen of the display unit 12. Specifically, the detection operation specifying unit 117 specifies the indicated position based on the angle θ 1 formed by the center line L 1 and the straight line obtained by extending the motion vector L 2.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yoon with the teachings of Tanaka and include the feature of performing straight line fitting according to the space coordinate corresponding to the gesture of the user to obtain a space straight line and determining a direction of extension of the space straight line to an end of the gesture of the user as the operation direction, thereby determining the gesture direction precisely.
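Examiner's note (illustration only): straight line fitting over the space coordinates of the gesture, oriented toward the end of the gesture, can be pictured with the least-squares sketch below. The use of numpy, the choice of key points, and the numeric values are assumptions introduced for illustration and are not taken from Tanaka.

    import numpy as np

    # Fit a 3D straight line through the gesture key-point coordinates and orient
    # it toward the end of the gesture (e.g. the hand).
    def fit_gesture_line(points, end_point):
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        # Principal direction of the centered points is the best-fit line direction.
        _, _, vt = np.linalg.svd(pts - centroid)
        direction = vt[0] / np.linalg.norm(vt[0])
        # Flip the direction so it extends from the body toward the gesture end.
        if np.dot(np.asarray(end_point, dtype=float) - centroid, direction) < 0:
            direction = -direction
        return centroid, direction

    # Example with assumed shoulder/elbow/hand coordinates (metres).
    shoulder, elbow, hand = (0.0, 0.2, 1.4), (0.3, 0.5, 1.2), (0.6, 0.8, 1.0)
    origin, gesture_dir = fit_gesture_line([shoulder, elbow, hand], hand)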
23. Regarding Claim 8, modified Yoon teaches all the elements of claim 7.
However, Yoon does not explicitly disclose the method according to claim 7, wherein the determining, from a candidate operation region, an operation region adapted to the target operation direction as the target operation region comprises:
calculating an intersection position of the space straight line and a plane where the candidate operation region is located; and
determining, according to the intersection position, the target operation region.
Tanaka teaches the method according to claim 7, wherein the determining, from a candidate operation region, an operation region adapted to the target operation direction as the target operation region comprises:
calculating an intersection position of the space straight line and a plane where the candidate operation region is located (See at least Para [0073] “When it is determined that implementation of fixed position calculation is to be performed, the detection operation specifying unit 117 specifies, in the case where the direction in which the object moves is the first range 8 a, in the direction in which the object moves with the tip position of the object as a starting point (X 7, Y 7, Z 7), which is the intersection of the extended line and the display unit 12, as the designated position. When the direction in which the object moves is the second range that is a second range different from the first range, the detection operation specifying unit 117 also specifies the direction in which the object extends from the tip position of the object in the direction orthogonal to the display unit 12 And specifies the intersection of the line and the display unit 12 as the user designated position P 8 (X 2, Y 2, Z 7). When the line extended in the direction in which the object moves from the tip position of the object does not intersect the display area of the display unit 12, the detection operation specifying unit 117 is orthogonal to the display unit 12 with the tip position of the object as a starting point And the intersection point between the line extending in the direction and the display section 12 is specified as the pointed position.”, Para [0059] FIG. 9 is a diagram for explaining calculation on the ZY plane according to the second embodiment of the present invention. The control unit 11 calculates Y 7 which is the Y coordinate of the user designated position based on the unrecognized distance L 5, the recognition boundary intersection Y 6, and the fingertip position P 2 (Z 2, Y 2) obtained by calculation on the XZ plane . Then, the control unit 11 calculates the user designated position P 7 (Z 7, Y 7) on the ZY plane. The detection operation specifying unit 117 specifies the intersection of the line extended from the fingertip position P 2 (X 2, Y 2, Z 2), which is the front end of the object in the direction in which the object moves, and the display device 1 to the user designated position P 7 (X 7, Y 7 , Z undefined 7).”); and
determining, according to the intersection position, the target operation region (See at least Para [0073] “When it is determined that implementation of fixed position calculation is to be performed, the detection operation specifying unit 117 specifies, in the case where the direction in which the object moves is the first range 8 a, in the direction in which the object moves with the tip position of the object as a starting point (X 7, Y 7, Z 7), which is the intersection of the extended line and the display unit 12, as the designated position. When the direction in which the object moves is the second range that is a second range different from the first range, the detection operation specifying unit 117 also specifies the direction in which the object extends from the tip position of the object in the direction orthogonal to the display unit 12 And specifies the intersection of the line and the display unit 12 as the user designated position P 8 (X 2, Y 2, Z 7). When the line extended in the direction in which the object moves from the tip position of the object does not intersect the display area of the display unit 12, the detection operation specifying unit 117 is orthogonal to the display unit 12 with the tip position of the object as a starting point And the intersection point between the line extending in the direction and the display section 12 is specified as the pointed position.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yoon with the teachings of Tanaka and include the feature of specifying the target operation region by calculating an intersection position of the space straight line and a plane where the candidate operation region is located, thereby determining the target operation region precisely.
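Examiner's note (illustration only): calculating the intersection position of the space straight line with the plane where a candidate operation region is located can be pictured as a line-plane intersection with the floor taken as the plane z = 0 in the robot frame. The frame choice and the numbers below are assumptions introduced for illustration.

    # Intersect the gesture line origin + t*direction (t > 0) with the floor plane
    # z = floor_z; return the (x, y) intersection position, or None if unreachable.
    def line_floor_intersection(origin, direction, floor_z=0.0):
        ox, oy, oz = origin
        dx, dy, dz = direction
        if abs(dz) < 1e-9:      # gesture line is parallel to the floor
            return None
        t = (floor_z - oz) / dz
        if t <= 0:              # floor lies behind the pointing direction
            return None
        return (ox + t * dx, oy + t * dy)

    # Example with an assumed gesture line (origin and unit direction).
    hit = line_floor_intersection((0.3, 0.5, 1.2), (0.64, 0.64, -0.43))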
24. Claim(s) 9 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Yoon et al. (US 2015/0032260 A1) (Hereinafter Yoon) in view of Kuhnel (DE 102016210421 A1, attached English translated copy is used for claim mapping), Shin et al. (US 2016/0154996 A1) (Hereinafter Shin), Tanaka (JP2018109911A), and further in view of Li (CN106805856A).
25. Regarding Claim 9, modified Yoon teaches all the elements of claim 8.
However, Yoon does not explicitly disclose the method according to claim 8, wherein the determining, according to the intersection position, the target operation region comprises any one of the following steps:
if the intersection position is within a known operation region of the robot, determining the operation region where the intersection position is located as the target operation region;
if the intersection position is not within the known operation region of the robot, and an included angle between the space straight line and the plane is greater than a set angle threshold, determining, from the known operation region, an operation region closest to the position where the robot currently located in the operation direction as the target operation region; and
if the intersection position is not within the known operation region of the robot, and the included angle between the space straight line and the plane is less than or equal to the angle threshold, searching the operation direction for the target operation region according to the intersection position.
Tanaka teaches the method according to claim 8, wherein the determining, according to the intersection position, the target operation region comprises any one of the following steps:
if the intersection position is within a known operation region of the robot, determining the operation region where the intersection position is located as the target operation region;
if the intersection position is not within the known operation region of the robot, and an included angle between the space straight line and the plane is greater than a set angle threshold, determining, from the known operation region, … (See at least Para [0070] “…when the angle at which an object such as a finger or the like enters exceeds the threshold value, the control unit 11 of the display device 1 calculates the coordinates indicated by the user based on the moving direction of the object, or converts the coordinates to the motion at the fixed position Or not.”); and
if the intersection position is not within the known operation region of the robot, and the included angle between the space straight line and the plane is less than or equal to the angle threshold, searching the operation direction for the target operation region according to the intersection position.
Li teaches … an operation region closest to the position where the robot currently located in
the operation direction as the target operation region (See at least Page 5 Lines 28-31 “the cleaning robot 100 is located at the boundary of the area to be cleaned A, and the cleaning robot
100, looking for the sub-region B closest to its location in the second row based on its current location, and then looking for the nearest point in the sub-region B as the starting point to sweep the second row of the child Area B”); and …
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yoon with the teachings of Li and Tanaka and include the feature of determining, from the known operation region, an operation region closest to the current position of the robot in the operation direction as the target operation region when an included angle between the space straight line and the plane is greater than a set angle threshold, which gives conditions that need to be met, thereby determining the target operation region precisely.
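Examiner's note (illustration only): the three-way determination recited in claim 9, as mapped above to Tanaka and Li, can be pictured with the sketch below, which checks whether the intersection position falls within a known operation region, compares the included angle between the gesture line and the floor plane with a threshold, and otherwise defers to a search toward the intersection. The rectangular map encoding, threshold value, and helper logic are assumptions introduced for illustration.

    import math

    # Included angle between the gesture line and the floor plane (z = 0).
    def angle_to_floor_deg(direction):
        dx, dy, dz = direction
        return math.degrees(math.atan2(abs(dz), math.hypot(dx, dy)))

    # known_regions: dict of name -> (xmin, ymin, xmax, ymax) axis-aligned bounds.
    def choose_target_region(intersection_xy, direction, known_regions, robot_xy,
                             angle_threshold_deg=30.0):
        def center(b):
            return ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)
        # Case 1: the intersection position lies within a known operation region.
        for name, b in known_regions.items():
            if b[0] <= intersection_xy[0] <= b[2] and b[1] <= intersection_xy[1] <= b[3]:
                return ("known_region", name)
        if angle_to_floor_deg(direction) > angle_threshold_deg:
            # Case 2: steep gesture outside the known map: choose the nearest known
            # region that lies ahead of the robot in the operation direction.
            ahead = [(name, b) for name, b in known_regions.items()
                     if (center(b)[0] - robot_xy[0]) * direction[0]
                        + (center(b)[1] - robot_xy[1]) * direction[1] > 0]
            if ahead:
                name, _ = min(ahead, key=lambda nb: math.dist(center(nb[1]), robot_xy))
                return ("known_region", name)
        # Case 3: shallow gesture outside the known map: search toward the intersection.
        return ("search_toward", intersection_xy)

    # Example with two assumed rooms; the intersection falls inside the kitchen.
    rooms = {"living": (-4.0, -3.0, 0.0, 3.0), "kitchen": (0.5, -3.0, 4.0, 0.0)}
    decision = choose_target_region((2.0, -1.0), (0.6, -0.2, -0.5), rooms, (0.0, 0.0))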
26. Regarding Claim 12, modified Yoon teaches all the elements of claim 9. Yoon further teaches
the method according to claim 9, wherein the causing the robot to move to the target operation region so as to perform a set operation task comprises:
if the target operation region is within the known operation region, planning a path to the target
operation region according to a navigation map corresponding to the known operation region (See at least Para [0131] “When the absolute coordinates of the focused cleaning areas C1 and C2 and the restricted areas W1 and W2 are determined, the portable mobile terminal 200 may display a map of the space to be cleaned and may display positions of the focused cleaning areas C1 and C2 and the restricted areas W1 and W2 on the map of the space to be cleaned.”); and
causing the robot to move to the target operation region according to the path to the target operation region (See at least Para [0011] “In accordance with one aspect of the present disclosure, a cleaning robot that performs cleaning while travelling a space to be cleaned, the cleaning robot includes: a travelling unit that moves the cleaning robot; a cleaning unit that cleans the space to be cleaned;…”).
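As a purely illustrative sketch of the claim 12 limitation mapped above (planning a path within the known operation region using a navigation map), a generic breadth-first planner over an occupancy-grid map is shown below; the grid representation and the function name are assumptions of this sketch, not features disclosed by Yoon.

from collections import deque

def plan_path(grid, start, goal):
    # grid[r][c] is True where the navigation map marks the cell traversable;
    # start and goal are (row, col) cells inside the known operation region.
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk back through parents to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]            # path the robot follows to the target region
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None                          # the target region is unreachable on this map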
27. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Yoon et al. (US 2015/0032260 A1) (Hereinafter Yoon) in view of Kuhnel (DE 102016210421 A1, attached English translated copy is used for claim mapping), Shin et al. (US 2016/0154996 A1) (Hereinafter Shin), Tanaka (JP2018109911A), Li (CN106805856A), and further in view of Kim et al. (US 2018/0217611 A1) (Hereinafter Kim).
28. Regarding Claim 10, modified Yoon teaches all the elements of claim 9.
However, Yoon does not explicitly spell out the method according to claim 9, wherein the
searching the operation direction for the target operation region according to the intersection position comprises:
causing the robot to move to the operation direction until encountering a target obstacle;
causing the robot to move to a direction of approaching the intersection position along an edge of the target obstacle until detecting an entrance; and
determining an operation region that the entrance belongs to as the target operation region, the operation region that the entrance belongs to being not within the known operation region.
Kim teaches the method according to claim 9, wherein the searching the operation direction
for the target operation region according to the intersection position comprises:
causing the robot to move to the operation direction until encountering a target obstacle (See at least [0098] “For example, the cleaning robot 100 disposed at a certain position in the cleaning space A may move in any direction as illustrated in FIG. 1. When the cleaning robot 100 encounters an obstacle O such as a wall surface and a piece of furniture while moving, the cleaning robot 100 may move along an outer edge of the obstacle O.”);
causing the robot to move to a direction of approaching the intersection position along an edge of the target obstacle until detecting an entrance (See at least [0106] “For example, when the cleaning robot 100 moving along the outer edge of the obstacle O recognizes the first entrance E1, the cleaning robot 100 may recognize the first room R1 on the basis of the recognized first entrance E1. Also, the cleaning robot 100 may set the first room R1 as a first cleaning area A1 and clean the first cleaning area A1 before cleaning other areas of the cleaning space A.”); and
determining an operation region that the entrance belongs to as the target operation region, the operation region that the entrance belongs to being not within the known operation region (See at least Para [0008] “The controller may determine a position of an entrance of the cleaning area while the main body moves, and sets the cleaning area on the basis of the determined position of the entrance...”, Para [0040] “In accordance with one aspect of the present disclosure, a cleaning robot includes a main body, a driver configured to move the main body, a cleaner configured to perform cleaning, and a controller configured to set an area partitioned by an entrance as a cleaning area when the entrance is detected while the main body moves, and clean the cleaning area.”, discloses setting an area partitioned by an entrance as a cleaning area which is construed as not within the known operation region since it is being set up as a cleaning area).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yoon with the teachings of Kim and include the feature of causing the robot to move in the operation direction until encountering a target obstacle, moving in a direction approaching the intersection position along an edge of the target obstacle until detecting an entrance, and determining an operation region that the entrance belongs to as the target operation region, the operation region that the entrance belongs to being not within the known operation region, thereby performing the cleaning operation thoroughly, including the entrance.
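For illustration only, the searching behavior recited in claim 10 and mapped to Kim above can be sketched as the following control loop; the robot interface (blocked(), move(), follow_edge_toward(), entrance_detected(), region_of_entrance()) is hypothetical and is not drawn from Kim or the claims.

def search_for_target_region(robot, operation_direction, intersection):
    # 1. Move in the operation direction until a target obstacle is encountered.
    while not robot.blocked():
        robot.move(operation_direction)
    # 2. Move along the edge of the obstacle in a direction approaching the
    #    intersection position until an entrance is detected.
    while not robot.entrance_detected():
        robot.follow_edge_toward(intersection)
    # 3. The operation region the entrance belongs to (outside the known
    #    operation region) becomes the target operation region.
    return robot.region_of_entrance()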
29. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Yoon et al. (US 2015/0032260 A1) (Hereinafter Yoon) in view of Kuhnel (DE 102016210421 A1, attached English translated copy is used for claim mapping), Shin et al. (US 2016/0154996 A1) (Hereinafter Shin), Tanaka (JP2018109911A), Li (CN106805856A), and further in view of Munich et al. (US 2021/0124354 A1) (Hereinafter Munich).
30. Regarding Claim 11, modified Yoon teaches all the elements of claim 9. Yoon further teaches the method according to claim 9, further comprising:
after the target operation region is found in the operation direction, performing the operation task in the target operation region (See at least Para [0022] “The determining of the restricted area and/or the focused cleaning area may include: determining an area instructed by the user based on the coordinates of the hand and the shoulder of the user; and determining the area instructed by the user as the restricted area and/or the focused cleaning area.”, discloses focused cleaning area that is considered as the target operation region which is determined by an area instructed by the user based on the coordinates of the hand and the shoulder of the user which is considered as posture data, Para [0011] “In accordance with one aspect of the present disclosure, a cleaning robot that performs cleaning while travelling a space to be cleaned, the cleaning robot includes: a travelling unit that moves the cleaning robot; a cleaning unit that cleans the space to be cleaned;…”), and …
Although Yoon teaches a plan view (map) of the space to be cleaned (Para [0129] “The portable mobile terminal 200 may display positions of the focused cleaning areas C1 and C2 and the restricted areas W1 and W2 in a plan view (map) of the space to be cleaned.”, Para [0131], Para [0141], Para [0142], [0156], [015], [0165], [0166]), Yoon does not explicitly spell out updating a navigation map corresponding to the known operation region according to a trajectory formed by performing the operation task.
Munich teaches … updating a navigation map corresponding to the known operation region according to a trajectory formed by performing the operation task (See at least Para [0059] “…For example, the map is a persistent map that is usable and updateable by the controller 109 of the robot 100 from one mission to another mission to navigate the robot 100 about the floor surface 10.”, Para [0087] “…The robot 100 can collect mapping data in each cleaning mission and can update the map constructed at the operation 208 as well as the labels on the map provided at the operation 208...”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Yoon with the teachings of Munich and include the feature of updating a navigation map corresponding to the known operation region according to a trajectory formed by performing the operation task, thereby providing more efficient cleaning by allowing the robot to accurately navigate and avoid unnecessary backtracking.
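As an illustrative sketch of the map-updating feature mapped to Munich above, a trajectory recorded while performing the operation task could be folded into a grid navigation map as follows; the dictionary-based map, the (x, y) pose format, and the 5 cm cell size are assumptions of this sketch and do not come from the cited references.

def update_navigation_map(grid_map, trajectory, cell_size=0.05):
    # grid_map: dict keyed by (row, col) cells; trajectory: list of (x, y)
    # poses recorded while the operation task was performed.
    for x, y in trajectory:
        cell = (int(y // cell_size), int(x // cell_size))
        grid_map[cell] = "free"          # mark the traversed cell as known free space
    return grid_map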
Conclusion
31. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
32. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHEDA HOQUE whose telephone number is (571)270-5310. The examiner can normally be reached Monday-Friday 8:00 am- 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ramon Mercado can be reached on 571-270-5744. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHAHEDA HOQUE/
Examiner, Art Unit 3658
/Ramon A. Mercado/Supervisory Patent Examiner, Art Unit 3658