DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Restriction to one of the following inventions is required under 35 U.S.C. 121:
I. Claims 1-15, drawn to a method of controlling a robot, classified in B25J9/1697.
II. Claims 16-18, drawn to a control system for controlling a physical robot and simulated robot, classified in G05B17/00.
The inventions are independent or distinct, each from the other because:
Inventions I and II are related as product and process of use. The inventions can be shown to be distinct if either or both of the following can be shown: (1) the process for using the product as claimed can be practiced with another materially different product or (2) the product as claimed can be used in a materially different process of using that product. See MPEP § 806.05(h). In the instant case the method of invention I can be practiced with a materially different product, such as a controller that is only confi. Similarly, the controller of invention II can be used in a materially different process, such as controlling a simulated robot.
Restriction for examination purposes as indicated is proper because all the inventions listed in this action are independent or distinct for the reasons given above and there would be a serious search and/or examination burden if restriction were not required because one or more of the following reasons apply:
the inventions have acquired a separate status in the art in view of their different classification;
the inventions have acquired a separate status in the art due to their recognized divergent subject matter; and/or
the inventions require a different field of search (e.g., searching different classes/subclasses or electronic resources, or employing different search strategies or search queries).
Applicant is advised that the reply to this requirement to be complete must include (i) an election of an invention to be examined even though the requirement may be traversed (37 CFR 1.143) and (ii) identification of the claims encompassing the elected invention.
The election of an invention may be made with or without traverse. To reserve a right to petition, the election must be made with traverse. If the reply does not distinctly and specifically point out supposed errors in the restriction requirement, the election shall be treated as an election without traverse. Traversal must be presented at the time of election in order to be considered timely. Failure to timely traverse the requirement will result in the loss of right to petition under 37 CFR 1.144. If claims are added after the election, applicant must indicate which of these claims are readable upon the elected invention.
Should applicant traverse on the ground that the inventions are not patentably distinct, applicant should submit evidence or identify such evidence now of record showing the inventions to be obvious variants or clearly admit on the record that this is the case. In either instance, if the examiner finds one of the inventions unpatentable over the prior art, the evidence or admission may be used in a rejection under 35 U.S.C. 103 or pre-AIA 35 U.S.C. 103(a) of the other invention.
In a telephone message left by Adenike Adebiyi on 02/26/2026, a provisional election was made without traverse to prosecute invention I: method of controlling a robot, claims 1-15. Affirmation of this election must be made by applicant in replying to this Office action. Claims 16-18 are withdrawn from further consideration by the examiner, 37 CFR 1.142(b), as being drawn to a non-elected invention.
Status of Claims
This communication is a first Office action, non-final rejection on the merits. Claims 1-15, as filed, are currently pending and have been considered below.
Specification
The disclosure is objected to because of the following informalities: unclear labeling.
Paragraph 0079 labels the “control-determining subsystem” both 120 and 130. Additionally, the “control-determining subsystem” and the “autonomous control subsystem” are both labeled 130. The examiner suggests labeling the “control-determining subsystem” ONLY as 120 and applying the label 130 ONLY to the “autonomous control subsystem” to maintain consistency with the drawings.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Kojima et al. (K. Kojima, T. Karasawa, T. Kozuki, E. Kuroiwa, S. Yukizaki, S. Iwaishi, T. Ishikawa, R. Koyama, S. Noda, F. Sugai, S. Nozawa, Y. Kakiuchi, K. Okada, M. Inaba. "Development of life-sized high-power humanoid robot JAXON for real-world use," 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea (South), 2015, pp. 838-843, doi: 10.1109/HUMANOIDS.2015.7363459, hereinafter “Kojima”) in view of Ishiguro et al. (Y. Ishiguro, K. Kojima, F. Sugai, S. Nozawa, Y. Kakiuchi, K. Okada, M. Inaba. "High Speed Whole Body Dynamic Motion Experiment with Real Time Master-Slave Humanoid Robot System," 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 2018, pp. 5835-5841, doi: 10.1109/ICRA.2018.8461207, hereinafter “Ishiguro 2018”), and further in view of Ishiguro et al. (Y. Ishiguro, T. Makabe, Y. Nagamatsu, Y. Kojio, K. Kojima, F. Sugai, Y. Kakiuchi, K. Okada, M. Inaba. "Bilateral Humanoid Teleoperation System Using Whole-Body Exoskeleton Cockpit TABLIS," in IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 6419-6426, Oct. 2020, doi: 10.1109/LRA.2020.3013863, hereinafter “Ishiguro 2020”).
Regarding Claim 1, Kojima discloses in figure 1 and column 2 of page 839
A method of controlling a robot, the method comprising:
selecting, by a control-determining subsystem, which one of an autonomous control subsystem and a teleoperation control subsystem controls the robot, wherein the robot comprises a plurality of sensors configured to convert information from an environment and the robot into sensor data, wherein each sensor of the plurality of sensors generates a stream of raw sensor data having a data type, a size, and a frequency, and a plurality of actuators configured to cause movement of the robot
Kojima pertains to the development of the humanoid robot JAXON and details the plurality of sensors as “6-axes force sensors in each sole and hand, posture sensors (IMU) in waist link, and Multisense SL in head link”. It is understood by those of ordinary skill in the art that such sensor data inherently has a data type, a size, and a frequency. Kojima does not teach a control-determining subsystem. However, Ishiguro 2018 discloses in column 2 page 5838 selecting, by a control-determining subsystem, which one of an autonomous control subsystem and a teleoperation control subsystem controls the robot. Ishiguro 2018 pertains to teleoperation and autonomous control of the JAXON robot. The determination to control the JAXON robot via teleoperation or autonomous control was described as “In the Fig.7, we disabled the COM movement and foot support change. On the other hand, we applied our proposed methods and the result is shown in the Fig.8.” It is not specified how exactly this determination was implemented. However, it is specified that the JAXON software system is constructed with RTM·ROS interoperation technology. It would have been obvious to one of ordinary skill in the art to encode an RTM·ROS control-determining subsystem to switch between teleoperation and autonomous control. Thus, it would have been known to those of ordinary skill in the art to combine the JAXON robot of Kojima with the teleoperation/autonomous control-determining subsystem of Ishiguro 2018 to successfully control operation of a robot in an environment.
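To illustrate the kind of mode switching at issue (an editorial sketch only; the class and method names below are hypothetical and appear in none of the cited references), a control-determining subsystem that routes exactly one subsystem's commands to the actuators can be modeled as:

```python
from enum import Enum

class ControlMode(Enum):
    AUTONOMOUS = 1
    TELEOPERATION = 2

class ControlDeterminingSubsystem:
    """Selects which subsystem's actuator commands reach the robot."""
    def __init__(self):
        self.mode = ControlMode.AUTONOMOUS  # default mode

    def select(self, mode):
        self.mode = mode

    def route(self, autonomous_cmd, teleop_cmd):
        # Exactly one subsystem's output is forwarded to the actuators.
        if self.mode is ControlMode.AUTONOMOUS:
            return autonomous_cmd
        return teleop_cmd

selector = ControlDeterminingSubsystem()
selector.select(ControlMode.TELEOPERATION)
cmd = selector.route(autonomous_cmd=[0.0, 0.1], teleop_cmd=[0.2, 0.3])
# cmd now holds the teleoperation command [0.2, 0.3]
```

The sketch assumes only that some software component performs the selection; it does not reflect the actual RTM·ROS implementation, which the reference leaves unspecified.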
Kojima in view of Ishiguro 2018 is silent to the details of the autonomous control subsystem, the teleoperation control subsystem, and the sensor data types. However, Ishiguro 2020 discloses
when the autonomous control subsystem is selected:
receiving, by the autonomous control subsystem, the sensor data from the plurality of sensors;
generating, by the autonomous control subsystem, autonomous actuator data based on the sensor data; and
outputting autonomous actuator data to the plurality of actuators of the robot; and
Ishiguro 2020 details the autonomous control subsystem of the JAXON robot in figure 3 and column 1 of page 6423: “This subsystem monitors p^ref_ZMP, p^act_ZMP, and ṗ^ref_DCM. p^ref_ZMP is a reference ZMP derived from the reference COM state p^ref_G, p̈^ref_G. p^act_ZMP is an actual ZMP measured by the robot foot force sensors…. By monitoring these reference/actual variables, this subsystem automatically determines whether the swing foot should land on the ground or not.”
Ishiguro 2020 further discloses in figures 1, 11, and 17 as well as in column 1 of page 6425
when the teleoperation control subsystem is selected:
receiving, by the teleoperation control subsystem, the sensor data from the plurality of sensors;
transmitting, by the teleoperation control subsystem, the sensor data to a human operator, and
generating, by the teleoperation control subsystem, teleoperation actuator data based on the sensor data; and
outputting, by the teleoperation control subsystem, teleoperation actuator control signals to the plurality of actuators, wherein the teleoperation actuator data are generated according to input from the human operator; and
Ishiguro 2020 pertains to the TABLIS teleoperation control system for the JAXON robot and specifies how the sensor data from the JAXON robot is transmitted to the human operator in the TABLIS teleoperation system. Additionally, the TABLIS takes the input of the human operator and outputs control signals to the actuators of the JAXON robot.
Ishiguro 2020 further discloses in figure 17 and column 1 page 6425
wherein the autonomous control subsystem and the teleoperation control subsystem receive, from the robot, sensor data having the same data type, the same size, and the same frequency for each of the plurality of sensors; and
where the sensor data of the robot (slave lleg act pos Z, fig 17(b)) having the same type, size, and frequency is sent to both the automatic foot contact control subsystem and TABLIS teleoperation control subsystem. Thus, it would have been known to those of ordinary skill in the art to combine the JAXON robot of Kojima with the autonomous control subsystem and teleoperation control subsystem (with the same data type) of Ishiguro 2020 to successfully control operation of a robot either autonomously or through teleoperation, particularly while traversing a step up and step down.
Kojima does not teach the actuator data types. However, Ishiguro 2018 discloses in figures 2, 7, 8, and 11 and table 1
wherein the autonomous actuator data and the teleoperation actuator data have the same data type, the same size, and the same frequency for each of the plurality of actuators
In this experiment the researchers sent the joint angles and ref ZMP from the human teleoperator to the JAXON robot to produce the motions of figure 7 and the measured displacements of table 1 and figure 11. Researchers also had the autonomous control subsystem “COM movement and footwork” send joint angles and ref ZMP having the same data type, the same size, and the same frequency to the JAXON robot to produce the motions of figure 8 and the measured displacements of table 1 and figure 11.
Thus, it would have been known to those of ordinary skill in the art to further combine the JAXON robot of Kojima with the TABLIS teleoperation data type and control-determining system of Ishiguro 2018 and 2020 to successfully control operation of a robot in an environment. Having the data type of the TABLIS be the same as the data type of the autonomous control subsystem improves the performance of the JAXON, as the JAXON actuators only need to recognize and respond to a single data type. The addition of a control-determining system to the JAXON robot would allow the operator to easily choose between autonomous control or teleoperation.
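The sameness of data type, size, and frequency argued above can be illustrated in miniature (editorial sketch only; the message schema below is hypothetical and not drawn from the references) by a single fixed message format shared by both subsystems:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActuatorMessage:
    """One schema used by both control subsystems: same data type,
    same size (number of joints), same frequency."""
    joint_angles: tuple
    frequency_hz: float

auto_msg = ActuatorMessage(joint_angles=(0.1, 0.2, 0.3), frequency_hz=500.0)
teleop_msg = ActuatorMessage(joint_angles=(0.4, 0.5, 0.6), frequency_hz=500.0)

# The actuators need to recognize and respond to only this single format.
same_type = type(auto_msg) is type(teleop_msg)
same_size = len(auto_msg.joint_angles) == len(teleop_msg.joint_angles)
same_freq = auto_msg.frequency_hz == teleop_msg.frequency_hz
```

Because either subsystem emits the identical schema, the downstream actuator interface is indifferent to which control mode is active, which is the benefit argued above.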
Regarding Claim 13, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1, and Ishiguro 2020 further discloses wherein the teleoperation system is a low-level teleoperation (LLT) system including hardware for transmitting sensor data from the robot to the human operator as sensory information and hardware for tracking the motion of the human operator. The TABLIS teleoperation system includes LLT functionality (“While using a low-level abstract command, the operator inputs their feet/COM force/position directly” column 1 page 6420), hardware to provide sensor data from the JAXON robot to the human operator as sensory feedback in the TABLIS (“The hand end-effector has 6-DOF wrench feedback from the slave side, but the foot end-effector is controlled by the quasi-3D floor reproduction instead, and only the Fx and Fy components of force are directly feedbacked.” column 2 page 6423), and hardware for tracking motion of the human operator (figures 4, 8, and 9).
Regarding Claim 14, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1, and Kojima further discloses in column 2 of page 839 wherein the sensor data includes at least one of audio sensor data, joint position data, pressure data, force sensitive resistor data, mobile base wheel encoder data, inertial measurement unit data, and visual data where it is stated that the JAXON robot contains “6-axes force sensors in each sole and hand, posture sensors (IMU) in waist link, and Multisense SL in head link.”
Regarding Claim 15, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1, and Ishiguro 2020 further discloses in figures 3, 8 and column 2 page 6423 wherein the actuator data includes at least one of audio data, joint position data, impedance data, and mobile base motion data where the TABLIS actuator data consists of at least: effector pose, head pose, COM pose and the “synchronization of the 6-DOF force/position of the feet of the operator/robot.”
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020 and Ishiguro 2018 as applied to claim 1 above, and further in view of Gu et al. (U.S. Patent Application 20090089700 A1, hereinafter “Gu”).
Regarding Claim 2, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1 but does not teach the simulated robot in the simulated environment. However, Gu teaches in figure 5 and 0045 wherein the robot is interchangeable between a physical robot in a physical environment (106) and a simulated robot in a simulated environment (108) wherein:
the plurality of sensors of the physical robot comprise a plurality of physical sensors generating physical sensor data from the physical environment (118), and the plurality of actuators of the physical robot comprises a plurality of physical actuators receiving physical actuator data (116);
the plurality of sensors of the simulated robot comprises a plurality of simulated sensors generating simulated sensor data from the simulated environment (128), and the plurality of actuators of the simulated robot comprises a plurality of simulated actuators receiving simulated actuator data (130); wherein:
Furthermore, Gu discloses in 0045 and 0047
each simulated sensor is analogous to a respective physical sensor wherein the simulated sensor data is approximately the same as the respective physical sensor data; and
each simulated actuator is analogous to a respective physical actuator wherein the simulated actuator data is approximately the same as the respective physical actuator data.
where the mapping and coupling of signals is discussed. A simulated sensor/actuator is mapped to a respective physical sensor/actuator when the data is approximately the same. Therefore, it would have been known to one of ordinary skill in the art to employ the autonomous and/or teleoperation robotic control of Ishiguro 2020 in the combined rejection of claim 1 with the physical and/or simulated robotic environment of Gu to achieve the proposed claim of autonomous and/or teleoperation robotic control of a physical and/or simulated robot in order to improve the training and reduce the wear and tear of robotic systems.
Claims 3-5 are rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020 and Ishiguro 2018 as applied to claim 1 above, and further in view of Kakiuchi et al. (Y. Kakiuchi, K. Kojima, E. Kuroiwa, S. Noda, M. Murooka, I. Kumagai, R. Ueda, F. Sugai, S. Nozawa, K. Okada, M. Inaba, "Development of humanoid robot system for disaster response through team NEDO-JSK's approach to DARPA Robotics Challenge Finals," 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), Seoul, Korea (South), 2015, pp. 805-810, doi: 10.1109/HUMANOIDS.2015.7363446, hereinafter “Kakiuchi”).
Regarding Claim 3, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1 and Kakiuchi discloses in figure 8 and column 2 of page 808 wherein the autonomous control subsystem controls the robot with fully autonomous control, wherein artificial intelligence determines the autonomous actuator data where the Field Computer layer provides autonomous control via EusLisp and ROS. The Field Computer layer of Kakiuchi is a base device and the claimed use of artificial intelligence can be seen as an improvement upon the known programming technique of EusLisp and ROS. However, one of ordinary skill in the art would have recognized that applying artificial intelligence to the Field Computer of Kakiuchi in the combined rejection of claim 1 would improve the system in a predictable manner.
Regarding Claim 4, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1 and Kakiuchi discloses in figure 8 and column 1 of page 809 wherein the autonomous control subsystem controls the robot with semi-autonomous control, wherein a human operator determines the autonomous actuator data where it states “We aimed to develop tele-operation interface which can switch between autonomous robot behavior with operator’s suggestion and direct operation by a operator.”
Regarding Claim 5, Kojima in view of Ishiguro 2020, Ishiguro 2018 and Kakiuchi discloses all the limitations of claim 4 and Kakiuchi further discloses in figure 8 and column 1 of page 809
wherein the autonomous control subsystem includes a graphical user interface and an input device, and the method further comprises:
displaying the sensor data on the graphical user interface for viewing by the human operator; and
receiving instructions through the input device from the human operator
where it states “We aimed to develop tele-operation interface which can switch between autonomous robot behavior with operator’s suggestion and direct operation by a operator. Our user interface consists of a 2-D GUI, a 3-D GUI and input devices. They are in OCS layer.”
Claims 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020 and Ishiguro 2018 as applied to claim 1 above, and further in view of Tiwara et al. (U.S. Patent Application 20180267558, hereinafter “Tiwara”).
Regarding Claim 6, Kojima in view of Ishiguro 2020 and Ishiguro 2018 discloses all the limitations of claim 1 but does not teach an autonomous control subsystem with a feature extraction module. However, Tiwara teaches
wherein the autonomous control subsystem includes a feature extraction module and the method further comprises:
receiving at least one sensor data stream of the sensor data from the robot by the feature extraction module; and
converting the at least one sensor data stream to features, wherein features are semantically meaningful information.
Tiwara pertains to the autonomous, semi-autonomous, or teleoperation of a vehicle and describes in 0052 “In a specific example, the CPU 132 outputs an object localization dataset (e.g., foreign vehicle location, lane demarcation data, roadway shoulder location, etc.) computed from a data stream (e.g., video stream, image-like data stream, etc.) using a gradient-based feature extraction ruleset (e.g., histogram oriented gradients, edge-detection, etc.). However, the CPU 132 can output any suitable feature value.” Tiwara provides examples of suitable feature values, or semantically meaningful information, in 0061: “extract features from the data (e.g., recognizing objects, classifying objects, tracking object trajectories, etc.)” Therefore, it would have been known to one of ordinary skill in the art to modify the autonomous control subsystem of Ishiguro 2020 in the combined rejection of claim 1 with the feature extraction module of Tiwara to obtain semantically meaningful information to aid the teleoperator in the remote control of a robot.
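The gradient-based feature extraction quoted from Tiwara (e.g., edge detection) can be sketched in miniature (editorial illustration only; this toy function is not taken from Tiwara or any other cited reference):

```python
# Minimal gradient-magnitude edge detection on a toy grayscale image,
# the simplest instance of the "gradient-based feature extraction
# ruleset" family (HOG, edge detection) named in the quotation.
def edge_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]  # vertical gradient
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge between columns 1 and 2 (rows are read-only here,
# so the shared-row shortcut below is safe).
img = [[0, 0, 9, 9]] * 4
mag = edge_magnitude(img)
# Interior cells adjacent to the step report a strong gradient (9.0).
```

The point of the illustration is only that raw pixel streams are converted into semantically meaningful quantities (edge locations), the role claimed for the feature extraction module.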
Regarding Claim 7, Kojima in view of Ishiguro 2020, Ishiguro 2018, and Tiwara discloses all the limitations of claim 6, and Tiwara further discloses in 0061 wherein the features includes at least one of: location of detected objects, orientation of detected objects, labels of detected objects, mapping of the environment, text extracted from speech, text extracted from visual feed, facial recognition labels, presence of hand in the scene, joint states for actuators of the robot, and faces in a field of view where the specific features of “recognizing objects, classifying objects, tracking object trajectories, etc.” are discussed.
Regarding Claim 8, Kojima in view of Ishiguro 2020, Ishiguro 2018, and Tiwara discloses all the limitations of claim 6, and Tiwara further discloses in 0054
wherein the feature extraction module comprises a plurality of specialized submodules and the method further comprises:
extracting a feature from the at least one sensor data stream by at least one specialized submodule.
Tiwara employs a confidence metric to apply a score to the specialized submodules of the feature extraction module that extract features from at least one sensor data stream, such as the imaging subsystem and/or ranging subsystem.
Regarding Claim 9, Kojima in view of Ishiguro 2020, Ishiguro 2018, and Tiwara discloses all the limitations of claim 8, and Tiwara further discloses in 0054
wherein the autonomous control subsystem further comprises an attention module and the method further comprises:
setting an on or off status of at least one of the specialized submodules by the attention module
where a binary confidence metric score can be used to turn off/on at least one of the specialized submodules.
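Such binary on/off gating of submodules can be sketched as follows (editorial illustration only; the class and attribute names are hypothetical and not drawn from Tiwara):

```python
class Submodule:
    """A specialized feature-extraction submodule that can be disabled."""
    def __init__(self, name):
        self.name = name
        self.enabled = True

    def extract(self, stream):
        # Disabled submodules contribute no features.
        return f"{self.name}-features" if self.enabled else None

class AttentionModule:
    """Sets an on/off status on submodules from a confidence score."""
    def apply(self, submodule, confidence, threshold=0.5):
        submodule.enabled = confidence >= threshold

imaging = Submodule("imaging")
ranging = Submodule("ranging")
attn = AttentionModule()
attn.apply(imaging, confidence=0.9)  # high confidence: stays on
attn.apply(ranging, confidence=0.2)  # low confidence: switched off
```

Under this sketch, thresholding the confidence metric is what makes it effectively binary, matching the off/on reading given above.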
Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Kojima in view of Ishiguro 2020, Ishiguro 2018, Gu, and Tiwara as applied to claims 2 and 6 above, and further in view of Farlow et al. (U.S. Patent Application 20110288684, hereinafter “Farlow”).
Regarding Claim 10, Kojima in view of Ishiguro 2020, Ishiguro 2018, and Tiwara discloses all the limitations of claim 6 but does not teach a robot-perceived model of the environment, a simulated environment, nor the sensor and actuator data types. However, Farlow teaches in figures 13 and 18B and 0183 generating, by the autonomous control subsystem (510), a robot-perceived model of the environment (1820) of the robot (100) based on the features extracted from the robot sensor data (400) by the feature extraction module(600d or “reasoning software” of 0115). Additionally, Farlow details in 0195 how remote operators can attend a training course using a simulated robot that roams a simulated space (using simulator 1670). Therefore, it would have been known to one of ordinary skill in the art to apply the feature extraction module of the autonomous control subsystem of Ishiguro 2020 in view of Tiwara to the actuator and sensor data of same type/size/frequency of Ishiguro 2018 and Ishiguro 2020 to obtain the robot-perceived model of the environment of Farlow in order to achieve the overarching goal of improving training and reducing the wear and tear of robotic systems.
As described with claim 2 above, Gu discloses wherein the data type, data size, and data frequency of the sensor data and the actuator data are the same for a physical environment and a simulated environment, but does not teach this for the robot-egocentric model. However, Farlow teaches the robot-egocentric model (1820). Therefore, it would have been known to one of ordinary skill in the art to apply the same data type/size/frequency of the physical/simulated environment of Gu to the robot-perceived model of the environment of Farlow in order to achieve proper internal maps or ego-centric models during training sessions.
Regarding Claim 11, Kojima in view of Ishiguro 2020, Ishiguro 2018, Tiwara, and Farlow discloses all the limitations of claim 10, and Farlow further discloses in 0199 further comprising testing, by the autonomous control subsystem, actuator data within the robot-egocentric model to determine the effects of an actuator data driven action before sending the actuator data to the robot. The autonomous robot control of Farlow includes a simulator 1670 to determine the effects of actuator driven actions, particularly with path planning: “The simulator 1670 may allow debugging and testing of applications 1610 without connectivity to the robot 100. The simulator 1670 can model or simulate operation of the robot 100 without actually communicating with the robot 100 (e.g., for path planning and accessing map databases). For executing simulations, in some implementations, the simulator 1670 produces a map database (e.g., from a layout map 1810) without using the robot 100. This may involve image processing (e.g., edge detection) so that features (like walls, corners, columns, etc.) are automatically identified. The simulator 1670 can use the map database to simulate path planning in an environment dictated by the layout map 1810.”
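Testing actuator data inside a robot-egocentric model before sending it to the robot, as Farlow's simulator 1670 does for path planning, can be sketched in miniature (editorial illustration only; the grid representation and function below are hypothetical and not drawn from Farlow):

```python
def safe_to_send(actuator_cmd, egocentric_map, position):
    """Simulate the commanded displacement inside the robot-perceived
    occupancy map and report whether the destination cell is free,
    BEFORE any command is sent to the physical robot."""
    x, y = position
    dx, dy = actuator_cmd
    nx, ny = x + dx, y + dy
    # Reject moves that leave the mapped area.
    if not (0 <= nx < len(egocentric_map[0]) and 0 <= ny < len(egocentric_map)):
        return False
    return egocentric_map[ny][nx] == 0  # 0 = free cell, 1 = obstacle

grid = [[0, 0, 1],
        [0, 0, 0],
        [0, 1, 0]]
ok = safe_to_send((1, 0), grid, (0, 0))       # move into a free cell
blocked = safe_to_send((1, 0), grid, (1, 0))  # move into the obstacle
```

The sketch shows only the claimed effect-before-send check; Farlow's actual simulator additionally builds the map database from a layout map via image processing.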
Regarding Claim 12, Kojima in view of Ishiguro 2020, Ishiguro 2018, Tiwara, and Farlow discloses all the limitations of claim 10, and Farlow further discloses in 0172 wherein the autonomous control subsystem includes a concrete state representation updater and the method further comprises providing data about the current state of the remote environment, as understood by the autonomous control subsystem, by the concrete state representation updater. The autonomous robot control of Farlow includes autonomous navigation which stores and updates the current state of the remote environment as understood by the autonomous control subsystem in map 1820, which is a form of a concrete state representation.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
• US 5086400 discloses combined autonomous and teleoperation control
• US 10824142 discloses determining and switching between autonomous and teleoperation control
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nathan Daniel Neckel whose telephone number is (571)272-9537. The examiner can normally be reached M-F, 7-3.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles can be reached at 571-270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NATHAN DANIEL NECKEL/Examiner, Art Unit 3656
/WADE MILES/Supervisory Patent Examiner, Art Unit 3656