Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office Action is in response to the amendment filed 01/12/2026. Claims 1-15 are presently pending and are presented for examination.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/30/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s arguments, see page 6, filed 01/12/2026, with respect to the claim interpretation of claims 1, 14, and 15 under 35 U.S.C. 112(f) have been fully considered and are persuasive in part. The amendments to the claims have overcome some of the claim interpretations under 35 U.S.C. 112(f); however, others remain or new interpretations have been created, as discussed in further detail below. The claim interpretations of claims 14 and 15 have been withdrawn, but the claim interpretation of claim 1 is maintained.
Applicant’s arguments, see page 6, filed 01/12/2026, with respect to the rejection of claims 1-15 under 35 U.S.C. 112(b) have been fully considered and are persuasive. The amendments to the claims have overcome the indefiniteness rejection. The rejection of claims 1-15 under 35 U.S.C. 112(b) has been withdrawn.
Applicant’s arguments, see pages 6-8, filed 01/12/2026, with respect to the rejection of the claims under 35 U.S.C. 101 have been fully considered and are persuasive. The amendments to the claims have overcome the 101 rejection. The rejection of the claims under 35 U.S.C. 101 has been withdrawn.
Applicant’s arguments, see pages 8-9, filed 01/12/2026, with respect to the rejection of claims 1-9 and 11-15 under 35 U.S.C. 102 as anticipated by Payton et al. US 20150336268 A1 (“Payton”) have been fully considered and are persuasive. The amendments to the claims have overcome the 102 rejection. Similarly, applicant’s arguments regarding the rejection of claim 10 under 35 U.S.C. 103 over Payton in view of Noda et al. US 20110288667 A1 (“Noda”) are persuasive, as Noda does not teach the elements of amended claims 1, 14, and 15. Therefore, those rejections have been withdrawn. However, upon further consideration, a new ground of rejection is made under 35 U.S.C. 103 over Payton in combination with Hashimoto et al. US 20210003993 A1 (“Hashimoto”).
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a command value generating device for generating command values” and “an acquisition section configured to acquire original command values” in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Payton et al. US 20150336268 A1 (“Payton”) in combination with Hashimoto et al. US 20210003993 A1 (“Hashimoto”).
Regarding Claim 1. Payton teaches a command value generating device for generating command values for a robot configured to interact with a target object (In FIG. 1, the controller at 20 may include any required hardware and process instructions suitable for executing the present method at 100, and for outputting control signals (arrow CC) to the robot at 12 as needed, for instance a command to execute an autonomous task such as grasping and inserting the object at 18 into the socket at 23 as previously demonstrated by the operator at 13 [paragraph 28]), comprising:
a non-transitory memory storing instructions; and
one or more processors coupled to the non-transitory memory and configured, by executing the instructions, to cause the command value generating device to include:
an acquisition section configured to acquire original command values for execution of a task by the robot, and to acquire state data representing a state of the robot, the state data including at least action data representing an action of the robot, position/orientation data representing a relative position and a relative orientation between the robot and the target object, and external force data representing external force received by the target object during the task (During the task training phase, the operator provides a relatively small number of example task demonstrations of a desired task, with the term “small” as used herein meaning a total of no more than five task demonstrations in one embodiment, with no more than two or three demonstrations being sufficient in most instances [paragraph 21]. The controller captures the force-torque signals from the force and torque sensors as well as pertinent details of the position and orientation of the end effector at 21, e.g., via performance data. The performance data is collected by and output from one or more additional sensors, such as joint angle sensors, vision system sensors, point cloud cameras, and/or the like. The performance data may include data tracking of the movement of the end effector relative to the object. The combined set of collected data during task demonstration is referred to herein as the training data set [paragraph 20, FIG. 1]. Also of note, while the term “internal parameter” appears in paragraph 110 of the specification of the present application, it is not expressly defined; given this lack of specificity, any parameter internal to the system in some way can read on the claim language); and
a generation section configured to, upon the robot being manually taught to perform the task, train a generator using the original command values, and corresponding state data acquired in response to the original command values being input to the robot to execute the task (The controller at 20 of FIG. 1 may include any required hardware and process instructions suitable for executing the method at 100, and for outputting control signals (arrow CC) to the robot as needed, for instance a command to execute an autonomous task [paragraph 28]. In Phase II of FIG. 5, the task execution phase, with only two or three training examples the robotic system of FIG. 1 will have obtained enough information to execute the demonstrated task. Event descriptors at 69 can be trained specifically to each transition, and these event descriptors will allow an event detector at 70 (ED), another computer or logic module, to determine when to trigger a behavior control module (BCM) at 86, e.g., behavior control logic and hardware of the controller, and thereby switch to a new control regime [paragraph 55, FIGS. 1 and 5]).
Payton does not teach:
the generator is implemented by a neural network; and
the generator configured to generate updated command values for execution of the task by the robot.
However, Hashimoto teaches:
the generator is implemented by a neural network; and
the generator configured to generate updated command values for execution of the task by the robot (FIG. 5 shows a functional block diagram of a control system for a skill transfer robot system. This includes a learning module at 52 including a neural network [paragraph 62], which is capable of learning from manual performance of a motion by an operator [paragraph 71], and an adjusting module at 51 that generates the instructions for robot motion [paragraph 118], which instructions can be updated by the learning module at 52, as shown in FIG. 5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Payton such that the generator is implemented by a neural network and is configured to generate updated command values for execution of the task by the robot, as taught by Hashimoto, so as to allow the system to update details regarding the execution of the task by the robot should the original command values be insufficient.
Regarding Claim 2. Payton in combination with Hashimoto teaches the command value generating device of claim 1.
Payton also teaches:
wherein the generation section trains the generator by determining internal parameters of the generator based on optimization (In FIG. 5, in the Training Phase (Phase I) the controller 20 uses an optimization block (OB) 54. At this stage of logic, the controller 20 analyzes the demonstration data of a given task and produces the task primitive sequence (TPS) 58, i.e., the behavior primitives that are going to be used to imitate and further analyze the demonstrated task [paragraph 39]).
Regarding Claim 3. Payton in combination with Hashimoto teaches the command value generating device of claim 1.
Payton also teaches:
the command value generating device further includes
a reception section configured to receive a selection of state data from all the state data acquired by the acquisition section and send the selected state data to the generation section; and
the generation section trains the generator using the selected state data (The controller in FIG. 1 receives the training data set 11T and performs a segmentation operation on the received training data set 11T via a Primitive Segmentation Module (PSM) 52. Such an operation may include dividing a time-sequence of the demonstrated task into distinct segments of activity, where each resultant segment corresponds to a single control mode or “task primitive” of the robot shown in FIG. 1. Segmentation may result in task segments S1, S2, S3, and S4, which collectively define a task primitive sequence (TPS) 58. After having determined the segments of activity as in FIG. 3, the controller 20 of FIG. 1 next analyzes these segments to identify transition events between the segments [paragraphs 23-26]).
Regarding Claim 4. Payton in combination with Hashimoto teaches the command value generating device of claim 3.
Payton also teaches:
wherein:
the reception section receives a selection of a type of the state data to be used, as the selected state data, for training of the generator from among the state data of a plurality of types acquired by the acquisition section; and
the generation section trains the generator by optimizing internal parameters for generating updated command values capable of reproducing a state represented by the selected state data of the selected type, based on the selected state data of the selected type and the original command values (In FIG. 5, in the Training Phase, the controller uses an optimization block. At this stage of logic, the controller analyzes the demonstration data of a given task and produces the task primitive sequence [paragraphs 39-42, FIGS. 4C and 5], which reads on the reception section receiving a selection of a type of data and the generation section training the generator by optimizing a parameter for generating command values; Payton specifically describes the optimization block at 54 as being a variant of iterated hill climbing optimization, a process well known in the art [paragraph 40]).
Regarding Claim 5. Payton in combination with Hashimoto teaches the command value generating device of claim 4.
Payton also teaches:
wherein the generation section receives correction to the internal parameters of the generator (The command generation section takes into account a correction parameter and, eventually, applies an upper limit as well as a goal value to the terms used in the optimization process [paragraph 37]).
Regarding Claim 6. Payton in combination with Hashimoto teaches the command value generating device of claim 4.
Payton also teaches:
wherein:
the internal parameters of the generator include an upper limit value for each command value, and a goal value of an action for each command value; and
the generation section trains the generator by fixing the upper limit value and the goal value of each updated command value to specified values and optimizing other internal parameters (The term “constraint force movement” refers to a hybrid force-position value that has one direction along which to move and another direction in which to maintain a constant constraint force. This is an appropriate primitive for imitating sliding or dragging behaviors along surfaces. Position and orientation parameters are extracted for this primitive in the same manner as occurs for the free movement primitive [paragraph 37]. Additionally, Phase II of FIG. 5, the task execution phase, operates by repeating the previously learned sequence of control primitives, and then using learned force-torque constraints and objectives for these primitives [paragraph 55], which means that the upper limit value and goal value are fixed to specified values by the generation section/generator, which also optimizes other parameters [paragraphs 39-42]).
Regarding Claim 7. Payton in combination with Hashimoto teaches the command value generating device of claim 1.
Payton also teaches:
wherein the command value generating device further includes an instruction section configured to determine whether or not the robot is operable based on a command value generated in a case in which the state data having a perturbation term included in one of the internal parameters that possibly fluctuates in the task has been input to the generator trained by the generation section, and in a case in which the robot is determined not operable, instructs the acquisition section to acquire original command values and corresponding state data generated in a case in which the perturbation term has been included (The salient point detector 62 in Phase I of FIG. 5 operates by finding time points in the training data set 11T where an event is more likely to occur. These time points are marked as salient points in logic of the controller 20. The more time points that are marked as salient points, the longer the required training time [paragraph 46, FIG. 5]).
Regarding Claim 8. Payton in combination with Hashimoto teaches the command value generating device of claim 1.
Payton also teaches:
wherein the generation section performs at least one of removing part of the state data used for generation of the generator or adding state data newly acquired by the acquisition section, and then re-executes generation of the generator (The human operator at 13, whose arm alone is shown in FIG. 1 for illustrative simplicity, can demonstrate a new force-torque work task to the robot simply by demonstrating the task [paragraph 19]).
Regarding Claim 9. Payton in combination with Hashimoto teaches the command value generating device of claim 1.
Payton also teaches:
wherein:
the acquisition section acquires an image in which a work area including the target object has been captured during teaching, and
the command value generating device further comprises a setting section that sets a parameter to recognize the work area based on the image acquired by the acquisition section (The performance data (arrow 27) is collected by and output from one or more additional sensors 25, such as joint angle sensors, vision system sensors, point cloud cameras, and/or the like. The performance data (arrow 27) may include data tracking of the movement of the end effector 21 relative to the object 18. The combined set of collected data during task demonstration is referred to herein as the training data set (arrow 11T) [paragraph 22]).
Regarding Claim 11. Payton in combination with Hashimoto teaches the command value generating device of claim 9.
Payton also teaches:
wherein manual teaching of an action of the robot is executed by direct teaching, remote operation from a controller, or remote operation using a teaching machine connected to the robot by bilateral control (A human operator 13, whose arm alone is shown in FIG. 1 for illustrative simplicity, can demonstrate a new force-torque work task to the robot 12 simply by demonstrating the task [paragraph 19]. Linear forces and torques, i.e., rotational forces, applied by the robot 12 via the operator 13 are measured during task demonstration by one or more force-torque sensors positioned at or near the robot’s end effector, such as embedded within a wrist. Task demonstration by backdriving the robot 12 in this manner allows the operator 13 to feel and apply appropriate forces and torques in the demonstration of the task, such as the grasping, rotation, and placement of the object 18 with respect to the fixture 14, e.g., insertion of the object 18 into a socket 23 of the fixture 14 [paragraph 20]).
Regarding Claim 12. Payton in combination with Hashimoto teaches the command value generating device of claim 1.
Payton also teaches:
wherein the command value generating device further includes a control section that controls the robot by outputting the updated command values generated by the generator (The controller 20 may include any required hardware and process instructions suitable for executing the present method 100, and for outputting control signals (arrow CC) to the robot 12 as needed, for instance a command to execute an autonomous task such as grasping and inserting the object 18 into the socket 23 as previously demonstrated by the operator 13 [paragraph 28, FIG. 1]).
Regarding Claim 13. Payton in combination with Hashimoto teaches the command value generating device of claim 12.
Payton also teaches:
wherein the command value generating device further includes a detection section that detects an abnormality occurring during a task performed by the robot by inputting command values generated by the generator into a generator for back calculation to estimate state data, and comparing the estimated state data against the state data acquired by the acquisition section (Equation 2 shows how the total geometric error between the position and orientation and their linear approximations is calculated, plus the deviation from the constraint force [paragraph 37]. Additionally, in paragraph 38, the deviation from the goal force during the end of the movement is calculated from equation 3. Both of these read on detecting an abnormality occurring during a task performed by the robot).
Regarding Claim 14. Payton teaches a command value generating method (In FIG. 1, the controller at 20 may include any required hardware and process instructions suitable for executing the present method at 100, and for outputting control signals (arrow CC) to the robot at 12 as needed, for instance a command to execute an autonomous task such as grasping and inserting the object at 18 into the socket at 23 as previously demonstrated by the operator at 13 [paragraph 28]), comprising:
acquiring original command values for execution of a task by a robot, and acquiring state data representing a state of the robot, the state data including at least action data representing an action of the robot, position/orientation data representing a relative position and a relative orientation between the robot and the target object, and external force data representing external force received by the target object during the task (During the task training phase, the operator provides a relatively small number of example task demonstrations of a desired task, with the term “small” as used herein meaning a total of no more than five task demonstrations in one embodiment, with no more than two or three demonstrations being sufficient in most instances [paragraph 21]. The controller captures the force-torque signals from the force and torque sensors as well as pertinent details of the position and orientation of the end effector at 21, e.g., via performance data. The performance data is collected by and output from one or more additional sensors, such as joint angle sensors, vision system sensors, point cloud cameras, and/or the like. The performance data may include data tracking of the movement of the end effector relative to the object. The combined set of collected data during task demonstration is referred to herein as the training data set [paragraph 20, FIG. 1]. Also of note, while the term “internal parameter” appears in paragraph 110 of the specification of the present application, it is not expressly defined; given this lack of specificity, any parameter internal to the system in some way can read on the claim language); and
training, upon the robot being manually taught to perform the task, a generator using the original command values, and corresponding state data acquired in response to the original command values being input to the robot to execute the task (The controller at 20 of FIG. 1 may include any required hardware and process instructions suitable for executing the method at 100, and for outputting control signals (arrow CC) to the robot as needed, for instance a command to execute an autonomous task [paragraph 28]. In Phase II of FIG. 5, the task execution phase, with only two or three training examples the robotic system of FIG. 1 will have obtained enough information to execute the demonstrated task. Event descriptors at 69 can be trained specifically to each transition, and these event descriptors will allow an event detector at 70 (ED), another computer or logic module, to determine when to trigger a behavior control module (BCM) at 86, e.g., behavior control logic and hardware of the controller, and thereby switch to a new control regime [paragraph 55, FIGS. 1 and 5]).
Payton does not teach:
the generator is implemented by a neural network; and
the generator configured to generate updated command values for execution of the task by the robot.
However, Hashimoto teaches:
the generator is implemented by a neural network; and
the generator configured to generate updated command values for execution of the task by the robot (FIG. 5 shows a functional block diagram of a control system for a skill transfer robot system. This includes a learning module at 52 including a neural network [paragraph 62], which is capable of learning from manual performance of a motion by an operator [paragraph 71], and an adjusting module at 51 that generates the instructions for robot motion [paragraph 118], which instructions can be updated by the learning module at 52, as shown in FIG. 5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Payton such that the generator is implemented by a neural network and is configured to generate updated command values for execution of the task by the robot, as taught by Hashimoto, so as to allow the system to update details regarding the execution of the task by the robot should the original command values be insufficient.
Regarding Claim 15. Payton teaches a non-transitory storage medium storing a command value generating program (In FIG. 1, the controller at 20 may include any required hardware and process instructions suitable for executing the present method at 100, and for outputting control signals (arrow CC) to the robot at 12 as needed, for instance a command to execute an autonomous task such as grasping and inserting the object at 18 into the socket at 23 as previously demonstrated by the operator at 13 [paragraph 28]. The controller 20 may be embodied as one or multiple digital computers or host machines each having one or more processors (P) and memory (M), i.e., tangible, non-transitory memory such as optical or magnetic read only memory (ROM), as well as random access memory (RAM), electrically-programmable read only memory (EPROM), etc.) that causes a computer to execute a process comprising:
acquiring original command values for execution of a task by a robot, and acquiring state data representing a state of the robot, the state data including at least action data representing an action of the robot, position/orientation data representing a relative position and a relative orientation between the robot and the target object, and external force data representing external force received by the target object during the task (During the task training phase, the operator provides a relatively small number of example task demonstrations of a desired task, with the term “small” as used herein meaning a total of no more than five task demonstrations in one embodiment, with no more than two or three demonstrations being sufficient in most instances [paragraph 21]. The controller captures the force-torque signals from the force and torque sensors as well as pertinent details of the position and orientation of the end effector at 21, e.g., via performance data. The performance data is collected by and output from one or more additional sensors, such as joint angle sensors, vision system sensors, point cloud cameras, and/or the like. The performance data may include data tracking of the movement of the end effector relative to the object. The combined set of collected data during task demonstration is referred to herein as the training data set [paragraph 20, FIG. 1]); and
training, upon the robot being manually taught to perform the task, a generator using the original command values, and corresponding state data acquired in response to the original command values being input to the robot to execute the task (The controller at 20 of FIG. 1 may include any required hardware and process instructions suitable for executing the method at 100, and for outputting control signals (arrow CC) to the robot as needed, for instance a command to execute an autonomous task [paragraph 28]. In Phase II of FIG. 5, the task execution phase, with only two or three training examples the robotic system of FIG. 1 will have obtained enough information to execute the demonstrated task. Event descriptors at 69 can be trained specifically to each transition, and these event descriptors will allow an event detector at 70 (ED), another computer or logic module, to determine when to trigger a behavior control module (BCM) at 86, e.g., behavior control logic and hardware of the controller, and thereby switch to a new control regime [paragraph 55, FIGS. 1 and 5]).
Payton does not teach:
the generator is implemented by a neural network; and
the generator configured to generate updated command values for execution of the task by the robot.
However, Hashimoto teaches:
the generator is implemented by a neural network; and
the generator configured to generate updated command values for execution of the task by the robot (FIG. 5 shows a functional block diagram of a control system for a skill transfer robot system. This includes a learning module at 52 including a neural network [paragraph 62], which is capable of learning from manual performance of a motion by an operator [paragraph 71], and an adjusting module at 51 that generates the instructions for robot motion [paragraph 118], which instructions can be updated by the learning module at 52, as shown in FIG. 5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Payton such that the generator is implemented by a neural network and is configured to generate updated command values for execution of the task by the robot, as taught by Hashimoto, so as to allow the system to update details regarding the execution of the task by the robot should the original command values be insufficient.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Payton et al. US 20150336268 A1 (“Payton”) in combination with Hashimoto et al. US 20210003993 A1 (“Hashimoto”) as applied to claim 9 above, and further in view of Noda et al. US 20110288667 A1 (“Noda”).
Regarding Claim 10. Payton in combination with Hashimoto teaches the command value generating device of claim 9.
Payton does not teach:
wherein the acquisition section acquires a distance between a camera for capturing the image and the target object as computed based on a pre-set size of the target object and on a size on an image of the target object as recognized in the image.
However, Noda teaches:
wherein the acquisition section acquires a distance between a camera for capturing the image and the target object as computed based on a pre-set size of the target object and on a size on an image of the target object as recognized in the image (The distance to and the attitude with respect to an intended object are measured in real time using the finger-eye-camera measurement section 32 and the three-dimensional recognition section 33, and the task is carried out according to sensor feedback control in order to absorb a variation of tolerance in part dimension and a variation in positioning, thereby realizing a stable operation [paragraph 102]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Payton such that the acquisition section acquires a distance between a camera for capturing the image and the target object as computed based on a pre-set size of the target object and on a size on an image of the target object as recognized in the image, as taught by Noda, so as to allow the system to measure the distance between the camera or similar visual sensor and the target object.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON G CAIN whose telephone number is (571)272-7009. The examiner can normally be reached Monday through Friday, 7:30am - 4:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles, can be reached at (571) 270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON G CAIN/Examiner, Art Unit 3656