DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Amendments filed 09/10/2025 have been entered. Claims 1-11 are pending.
Applicant’s arguments with respect to the rejection of claims 1-11 under 35 U.S.C. 103 in view of Asahara et al. (US 2018/0215040 A1) have been considered but are not persuasive.
In response to Applicant’s argument that Asahara fails to disclose “determining a current operating mode of the robot”, Asahara discloses the robot traveling along a target path (indicating a movement mode) and then locating the target object at the grasping/working point (indicating an interaction mode). One of ordinary skill in the art would recognize that when the robot performs traveling, the robot can determine itself to be in a movement mode, and when the robot performs grasping of a target object, the robot can determine itself to be in an interaction mode. The moving start point 621 and the grasping/working point 622 of Asahara further serve as determinations of the movement mode and the interaction mode, respectively.
In response to Applicant’s arguments that Asahara does not teach “determining a control instruction corresponding to the operating mode based on detection data obtained by the detection sensor” (pages 9-10 of Remarks), the moving start point 621 and the grasping/working point 622 of Asahara serve as a determination of whether the robot is in the movement mode, i.e., by determining whether the robot is located at point 621, or in the interaction mode, i.e., by determining whether the robot is located at point 622. Hence, when the robot is located at point 621 (movement mode), the robot determines a control instruction to move the robot along a path ([0062] “The controller 200 controls the moving robot 100 in such a way that the moving robot 100 moves from the moving start point 621”), and when the robot is located at point 622 (interaction mode), the robot determines a control instruction to locate and grasp a target object ([0062] “The controller 200 controls the moving robot 100 in such a way that the moving robot 100 ... reaches a grasping/working point 622 opposed to the shelf 612.”; [0043] “The processing of the robot 100 ... recognizing the conveyance object 613, which is a target object, from articles placed on the shelf 612 and grasping the conveyance object 613 that has been recognized”).
Applicant’s arguments regarding the limitations “the detection sensor is arranged on the manipulator mechanism, when the robot is in the movement mode, controlling the robot to move the manipulator mechanism to direct a detection range of the detection sensor at the environment to collect environmental information; when the robot is in the interaction mode, controlling the robot to move the manipulator mechanism to direct the detection range of the detection sensor at the target object to collect target object information” are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/10/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Asahara et al. (US 2018/0215040 A1), in view of Bradski et al. (US 2016/0221187 A1).
Regarding claim 1, Asahara teaches:
A robot control method, applicable to a robot (Fig. 1; [0025] “moving robot 100”), wherein the robot comprises a robot body (Fig. 1; [0029] “arm 121”) and a manipulator mechanism (Fig. 1; [0029] “hand 124”) arranged on the robot body (Fig. 1 shows the hand 124 supported by the robot body/arm part 121), the manipulator mechanism is configured to transport a target object ([0030] “The hand 124 includes a grasping mechanism so that it can grasp a conveyance object as a work object of the moving robot 100.”), and a detection sensor (Fig. 1; [0027] “cameras 114”) ..., the method comprising:
determining a current operating mode of the robot (Fig. 3; [0062] “The moving robot 100 ... executes the task program 241 in a condition described in the tag 604 from which information has been read out.”; [0063]; Fig. 3 shows the moving start point 621, indicating a movement mode, and the grasping/working point 622, indicating an interaction mode), wherein the operating mode comprises a movement mode (Fig. 3; [0064] “a task of reciprocating between the moving start point 621 and the grasping/working point 622”) and an interaction mode (Fig. 3; [0064] “a task of grasping at the grasping/working point 622”), and the robot is configured to move according to a target path in the movement mode (Fig. 3; [0043] “The processing of the robot 100 finding a moving path while avoiding obstacles and moving along the moving path using information obtained from the camera 114, the microphone 115 and the like”) and locate the target object in the interaction mode ([0043] “recognizing the conveyance object 613, which is a target object, from articles placed on the shelf 612 and grasping the conveyance object 613 that has been recognized”);
determining a control instruction corresponding to the operating mode ([0062] “The controller 200 controls the moving robot 100 in such a way that the moving robot 100 moves from the moving start point 621, passes a path P while avoiding obstacles, and reaches a grasping/working point 622 opposed to the shelf 612.”) based on detection data obtained by the detection sensor ([0043] “The processing of the robot 100 finding a moving path while avoiding obstacles and moving along the moving path using information obtained from the camera 114, the microphone 115 and the like, recognizing the conveyance object 613, which is a target object, from articles placed on the shelf 612 and grasping the conveyance object 613 that has been recognized”); and
controlling the robot based on the control instruction ([0062] “The controller 200 controls the moving robot 100 in such a way that the moving robot 100 moves from the moving start point 621, passes a path P while avoiding obstacles, and reaches a grasping/working point 622 opposed to the shelf 612.”).
Asahara does not specifically teach the detection sensor is arranged on the manipulator mechanism, when the robot is in the movement mode, controlling the robot to move the manipulator mechanism to direct a detection range of the detection sensor at the environment to collect environmental information; when the robot is in the interaction mode, controlling the robot to move the manipulator mechanism to direct the detection range of the detection sensor at the target object to collect target object information.
However, in the same field of endeavor, Bradski teaches:
a detection sensor is arranged on the manipulator mechanism (Fig. 1A; [0043] “The sensing system 130 may use one or more sensors attached to a robotic arm 102 , such as sensor 106”),
when the robot is in the movement mode, controlling the robot to move the manipulator mechanism to direct a detection range of the detection sensor at the environment to collect environmental information ([0043] “The sensing system 130 may use one or more sensors attached to a robotic arm 102, such as sensor 106 and sensor 108, which may be 2D sensors and/or 3D depth sensors that sense information about the environment as the robotic arm 102 moves ... In further examples, scans from one or more 2D or 3D sensors with ... one or more sensors mounted on a robotic arm, such as sensor 106 ..., may be integrated to build up a digital model of the environment, including the sides, floor, ceiling, and/or front wall of a truck or other container.”);
when the robot is in the interaction mode, controlling the robot to move the manipulator mechanism to direct the detection range of the detection sensor at the target object to collect target object information ([0061] “In further examples, a facade may be constructed from boxes, for instance to plan in what order the boxes should be picked up. For instance, as shown in FIG. 2C, box 222 may be identified by the robotic device as the next box to pick up. Box 222 may be identified within a facade representing a front wall of the stack of boxes 220 constructed based on sensor data collected by one or more sensors, such as sensor 106...”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asahara to include a detection sensor arranged on the manipulator mechanism, and when the robot is in the movement mode, controlling the robot to move the manipulator mechanism to direct a detection range of the detection sensor at the environment to collect environmental information, when the robot is in the interaction mode, controlling the robot to move the manipulator mechanism to direct the detection range of the detection sensor at the target object to collect target object information, as taught by Bradski. Such modification allows the robot to acquire information of the target object to be picked up.
Regarding claim 2, Asahara further teaches:
wherein the determining a control instruction corresponding to the operating mode based on detection data obtained by the detection sensor comprises:
determining an obstacle object on the target path based on the detection data if the current operating mode of the robot is the movement mode ([0027] “The cart part 110 includes various sensors for detecting obstacles and recognizing an ambient environment.”), wherein the control instruction is used to control the robot to avoid the obstacle object ([0062] “The controller 200 controls the moving robot 100 in such a way that the moving robot 100 moves from the moving start point 621, passes a path P while avoiding obstacles”); or
determining pose position information of the target object based on the detection data if the current operating mode of the robot is the interaction mode ([0043] “recognizing the conveyance object 613, which is a target object, from articles placed on the shelf 612 and grasping the conveyance object 613 that has been recognized”), wherein the control instruction is used to control the robot to take or place the target object ([0063] “The moving robot 100 finds the conveyance object 613 from the shelf 612, operates the arm part 120, and grasps the conveyance object 613 at the grasping/working point 622.”).
Regarding claim 3, Asahara further teaches:
wherein if the current operating mode of the robot is the movement mode, the method further comprises: controlling a detection direction of the detection sensor on the robot to be directed at a current movement direction of the robot (Fig. 1 shows the two cameras are arranged to face the front of the robot 100, which is a moving direction of the robot; [0027] “Two of these sensors are cameras 114 installed in the front of the base 111. Each of the cameras 114 includes, for example, a CMOS image sensor, and transmits an image signal that it has captured to a controller described later.”).
Regarding claim 10, Asahara further teaches:
A robot (Fig. 1; [0025] “moving robot 100”), comprising: a robot body (Fig. 1; [0029] “arm 121”), a manipulator mechanism (Fig. 1; [0029] “hand 124”) arranged on the robot body (Fig. 1 shows the hand 124 supported by the robot body/arm part 121), a memory ([0035] “memory 240”), and at least one processor ([0031] “A controller 200 is, for example, a CPU”), wherein the manipulator mechanism is configured to transport a target object ([0030] “The hand 124 includes a grasping mechanism so that it can grasp a conveyance object as a work object of the moving robot 100.”), and ... the memory is configured to store computer-executable instructions ([0035] “The memory 240 stores a control program for controlling the moving robot 100, various parameter values, functions, lookup tables and the like used for control.”); and the at least one processor executes the computer-executable instructions stored in the memory, so that the at least one processor performs the robot control method according to claim 1 ([0031] “A controller 200 is, for example, a CPU, and executes various operations related to the control of the moving robot 10 by transmitting or receiving information such as commands and sampling data to or from a driving wheel unit 210, an arm unit 220, a sensor unit 230, a memory 240, a tag reader 160 and the like.”).
Asahara does not specifically teach the detection sensor is arranged on the manipulator mechanism.
However, in the same field of endeavor, Bradski teaches:
a detection sensor is arranged on the manipulator mechanism (Fig. 1A; [0043] “The sensing system 130 may use one or more sensors attached to a robotic arm 102 , such as sensor 106”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asahara to include a detection sensor arranged on the manipulator mechanism, as taught by Bradski. Such modification allows the robot to acquire information of the target object to be picked up.
Regarding claim 11, Asahara further teaches:
A non-transitory computer-readable storage medium, storing computer-executable instructions ([0035] “The memory 240 stores a control program for controlling the moving robot 100, various parameter values, functions, lookup tables and the like used for control.”; [0078]), wherein when a processor executes the computer-executable instructions, the robot control method according to claim 1 is implemented ([0031] “A controller 200 is, for example, a CPU, and executes various operations related to the control of the moving robot 10 by transmitting or receiving information such as commands and sampling data to or from a driving wheel unit 210, an arm unit 220, a sensor unit 230, a memory 240, a tag reader 160 and the like.”).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Asahara, in view of Bradski, and further in view of Bier (US 2018/0180719 A1).
Regarding claim 4, neither Asahara nor Bradski specifically teaches wherein after the controlling a detection direction of the detection sensor on the robot to be directed at a current movement direction of the robot, the method further comprises: determining whether the detection direction of the detection sensor is blocked by the robot body; and adjusting the detection direction of the detection sensor if the detection direction is blocked, so that the adjusted detection direction is not blocked by the robot body.
However, in the same field of endeavor, Bier teaches:
wherein after the controlling a detection direction of the detection sensor on the robot to be directed at a current movement direction of the robot, the method further comprises:
determining whether the detection direction of the detection sensor is blocked ([0057] “Sensor data is received at 330 and processed at 340 to determine any obstructions within the field of view (as will be discussed in more detail with regard to FIGS. 6 and 7).”); and
adjusting the detection direction of the detection sensor if the detection direction is blocked, so that the adjusted detection direction is not blocked ([0024] “In general, the system 100 detects when a field of view of a sensing device is obstructed and when obstructed, controls one or more actuator devices of a sensor mount associated with the sensing device to adjust an actual position of the sensing device to a desired position, such that the field of view of the sensing device is no longer obstructed.”; [0058] “If, however, obstructions exist at 350, a desired position is determined based on the determined location of the obstruction, and for example, a predefined offset, as discussed above at 360. Control signals are then determined based on the desired position and the actual position of the sensor and the control signals are generated to control the sensor mount at 370.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asahara, in view of Bradski, to determine whether the detection direction of the detection sensor is blocked, and adjust the detection direction of the detection sensor if the detection direction is blocked, so that the adjusted detection direction is not blocked, as taught by Bier. Such modification allows the detection sensor to acquire sensor data at a desired location such that the field of view of the detection sensor is no longer obstructed, as stated by Bier in [0024].
Bier does not explicitly disclose the detection sensor is blocked by the robot body.
However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asahara, in view of Bradski and Bier, to select the obstruction to be the robot body, since one of ordinary skill in the art would have been capable of applying Bier’s technique of determining an obstruction to determine a robot body as an obstruction and the results would have been predictable to one of ordinary skill in the art.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Asahara, in view of Bradski, and further in view of Kim et al. (US 2017/0173796 A1).
Regarding claim 5, neither Asahara nor Bradski specifically teaches wherein if the current operating mode of the robot is the interaction mode, the method further comprises: controlling a detection direction of the detection sensor on the robot so that scanning is performed within a preset angle range; and determining, based on a detection result after the scanning, that the detection direction of the detection sensor is directed at the target object.
However, in the same field of endeavor, Kim teaches:
controlling a detection direction of the detection sensor on the robot so that scanning is performed within a preset angle range (Figs. 1-3; [0080] “the target object-sensing unit 500 may include the detection sensor 510 and the scan unit 520. In some embodiments, the scan unit 520 may be configured to rotate the detection sensor 510 on an X-Y plane by a specific angle range.”); and
determining, based on a detection result after the scanning, that the detection direction of the detection sensor is directed at the target object ([0080] “This may allow the target object-sensing unit 500 to obtain the target object position information I4 on a position of the target object 30 in the scan region S (in step S21 of FIG. 11). Here, the target object position information I4 may include X- and Y-coordinates of the target object 30.”; [0081]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asahara, in view of Bradski, to control a detection direction of the detection sensor on the robot so that scanning is performed within a preset angle range; and determine, based on a detection result after the scanning, that the detection direction of the detection sensor is directed at the target object, as taught by Kim. Such modification allows the detection sensor to obtain the target object position information in the scanning region.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Asahara, in view of Bradski, and further in view of Xiong et al. (US 2019/0196489 A1).
Regarding claim 6, Asahara does not specifically teach wherein the determining a control instruction corresponding to the operating mode based on detection data obtained by the detection sensor comprises: determining pose position information of the target object based on the detection data if the current operating mode of the robot is the interaction mode, wherein the target object is a charging pile; switching the current operating mode of the robot to the movement mode; and adjusting a pose of the robot based on the pose position information in the movement mode, so that the robot is connected to the charging pile for charging.
However, in the same field of endeavor, Xiong teaches:
wherein the determining a control instruction corresponding to the operating mode based on detection data obtained by the detection sensor comprises:
determining pose position information of the target object based on the detection data if the current operating mode of the robot is the interaction mode ([0059] “S11: obtaining a linear distance between a charging portion of the robot and a charging station of a charging device, if a charging instruction is detected.” – Obtaining position information of the charging station indicates an interaction mode in which the robot locates the charging station), wherein the target object is a charging pile ([0059] “charging station”);
switching the current operating mode of the robot to the movement mode ([0067] “In step S12, in the coordinate system, a straight line formed between the preset target position and the position of the charging station is parallel to the longitudinal axis. The preset target position is a position at which the robot adjusts the posture for charging.” – When the robot is located at the preset target position, the robot switches from the interaction mode in which the robot locates the charging station to the movement mode in which the robot starts adjusting posture for charging); and
adjusting a pose of the robot based on the pose position information in the movement mode, so that the robot is connected to the charging pile for charging ([0067] “Before an electrical connection relationship is established between the robot and the charging station, the robot is moved to the preset target position first, and performs posture adjustment in the target position, so that the charging portion can be connected with the charging station.”; [0081] “rotating the robot in situ at the preset target position to a position at which the charging interface can match the charging station, that is, if the robot is moved from the preset target position to the charging station in the current posture, the charging interface can be directly connected with a power supply part of the charging station.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asahara, in view of Bradski, to determine pose position information of the target object based on the detection data if the current operating mode of the robot is the interaction mode, wherein the target object is a charging pile; switch the current operating mode of the robot to the movement mode; and adjust a pose of the robot based on the pose position information in the movement mode, so that the robot is connected to the charging pile for charging, as taught by Xiong. Such modification allows the robot to be directly connected with a power supply part of the charging station, thus providing power to the robot.
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Asahara, in view of Bradski, and further in view of Kobayashi et al. (US 2017/0151673 A1).
Regarding claim 7, Asahara further teaches the detection sensor is configured to obtain image information of a target position, and the target position comprises: a position on the target path corresponding to the robot in the movement mode ([0043] “The processing of the robot 100 finding a moving path while avoiding obstacles and moving along the moving path using information obtained from the camera 114”).
Asahara does not specifically teach wherein the manipulator mechanism comprises a support, a tray, and a telescopic arm, wherein the tray is located in the support, the tray is configured for the target object to be placed, the telescopic arm is located on the support, and the telescopic arm is configured to push the target object placed on the tray out of the tray or pull the target object onto the tray; and the detection sensor is arranged below the tray, and is configured to obtain image information of a target position within different image capturing ranges, and the target position comprises: a taking or placing position of the target object in the interaction mode.
However, in the same field of endeavor, Kobayashi teaches:
wherein the manipulator mechanism comprises a support (Annotated Fig. 3 below shows a support platform), a tray (Fig. 3; [0032] “tray 1”), and a telescopic arm (Fig. 3; [0036] “manipulator 10”), wherein the tray is located in the support (Figs. 2 and 3 show trays 1 located on the support platform), the tray is configured for the target object to be placed ([0032] “the tray 1 is piled with a plurality of work components”), the telescopic arm is located on the support (Figs. 2 and 3 show the manipulator 10 located on the support platform), and the telescopic arm is configured to push the target object placed on the tray out of the tray ([0032] “The picking robot 100 is used to pick up a work component from a first place such as a tray 1 and transfers the work component to a second place such as a palette 2. Specifically, when the tray 1 is piled with a plurality of work components, the picking robot 100 uses a manipulator unit to pick up one work component selected from the plurality of work components piled on the tray 1, and transfers the work component onto a portion of the palette 2 by setting a given orientation for the one work component, which can be performed automatically as one sequential operation.”); and
the detection sensor is configured to obtain image information of a target position within different image capturing ranges (Fig. 8; [0037] “The image capturing unit can be a stereo camera that acquires range information of image associating a position in the image capturing area and range information at the position”), and the target position comprises: ... a taking or placing position of the target object in the interaction mode ([0037] “The image capturing unit is used to acquire image information associating a position and properties at the position (e.g., distance, optical properties) in an image capturing area of the image capturing unit.”; [0056] “the pickup position and the pickup posture are calculated based on the position and posture of the work component W identified by the pattern matching. When the position and posture (orientation) of the plurality of work components W are identified by the pattern matching, for example, one work component W that satisfies the shortest distance condition to the stereo camera unit 40 , which is at the highest position of the plurality of work components W piled on the tray 1, is identified.”; [0057] “Specifically, based on the disparity image information, a face area having an area size that can be adsorbed by the work adsorption face of the hand 20 is identified, and then the pickup position and the pickup posture corresponding to the identified face area can be calculated.”).
[Annotated Fig. 3 of Kobayashi (media_image1.png), identifying the support platform]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asahara to include a support, a tray, and a telescopic arm, wherein the tray is located in the support, the tray is configured for the target object to be placed, the telescopic arm is located on the support, and the telescopic arm is configured to push the target object placed on the tray out of the tray or pull the target object onto the tray; and the detection sensor is configured to obtain image information of a target position within different image capturing ranges, and the target position comprises: a taking or placing position of the target object in the interaction mode, as taught by Kobayashi. Such modification allows the robot to perform work on the target object.
Kobayashi does not specifically teach the detection sensor is arranged below the tray.
However, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asahara, in view of Bradski and Kobayashi, to arrange the detection sensor below the tray, since it has been held that rearranging parts of an invention involves only routine skill in the art. In re Japikse, 86 USPQ 70.
Regarding claim 8, Asahara does not specifically teach wherein a photographing direction of the detection sensor is the same as a direction of stretching or retracting of the telescopic arm.
However, Kobayashi teaches wherein a photographing direction of the detection sensor is the same as a direction of stretching or retracting of the telescopic arm (Fig. 8; [0047] “The stereo camera unit 40 and the pattern projection unit 50 are disposed at the upper portion of the work processing space inside the picking robot 100 ... The stereo camera unit 40 captures an image of the work components W piled on the tray 1 and an intermediate tray 4 from the upper of the tray 1 and the intermediate tray 4.” – The photographing direction of the camera unit 40 is directed at the trays, which is the direction in which the telescopic arm stretches or retracts to perform the pickup operation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Asahara, in view of Bradski, to configure the photographing direction of the detection sensor to be the same as a direction of stretching or retracting of the telescopic arm, as taught by Kobayashi, in order to acquire additional information regarding the posture of the robot arm.
Regarding claim 9, Asahara further teaches:
wherein the detection sensor is one or more of a visual sensor, an optical sensor ([0027] “Each of the cameras 114 includes, for example, a CMOS image sensor,”), and an acoustic sensor ([0027] “A microphone 115 is also one of the sensors. The microphone 115 transmits a voice signal that it has acquired to the controller.”).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NHI Q BUI whose telephone number is (571)272-3962. The examiner can normally be reached Monday - Friday: 8:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KHOI TRAN can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NHI Q BUI/Examiner, Art Unit 3656