Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1, 2, 5, 10-13, 16, and 18-21 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Peng (DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning).
Regarding claims 1, 12, and 21,
Peng teaches:
A system, comprising: one or more memories storing instructions; and one or more processors that are coupled to the one or more memories (Peng 7 Results “All computations are performed on the CPU” 4 Policy Representation and Learning “T tuples are stored in an experience replay memory D”) and, when executing the instructions, are configured to: receive a state of the character, a path to follow, and first information about a scene; (Peng 1 Introduction “A low-level controller (LLC) is desired at a fine timescale, where the goal is predominately about balance and limb control. At a larger timescale, a high-level controller (HLC) is more suitable for guiding the movement to achieve longer-term goals, such as anticipating the best path through obstacles. In this paper, we leverage the capabilities of deep reinforcement learning (RL) to learn control policies at both timescales. The use of deep RL allows skills to be defined via objective functions, while enabling for control policies based on high-dimensional inputs, such as local terrain maps or other abundant sensory information.” Peng 5 Low-Level Controller “LLC State: The LLC input state sL, shown in Figure 3 (left), consists mainly of features describing the character’s configuration. These features include the center of mass positions of each link relative to the character’s root, designated as the pelvis, their relative rotations with respect to the root expressed as quaternions, and their linear and angular velocities. Two binary indicator features are included, corresponding to the character’s feet. The features are assigned 1 when their respective foot is in contact with the ground and 0 otherwise.” 6.3 HLC Tasks “Path Following: In this task an HLC is trained to navigate narrow paths carved into rocky terrain. A random path … is embedded into terrain generated using Perlin Noise … Since the policy is not provided with an explicit parameterization of the path as input, it must learn to recognize the path from the terrain map T and plan its footsteps accordingly.” Note: Peng teaches a method that leverages two controllers acting as one, a high-level and a low-level controller (HLC and LLC), both controlling a single character. Peng teaches that the LLC is aware of the character’s current state, which includes things like center of mass, character configuration, which feet are on the ground, etc. Peng also teaches that a path is specified for the character to take, and the understanding of the provided path within the terrain is handled by the HLC. The scene information is also known, as it is taught that terrain maps and sensory information are available to the controllers.) generating, via a trained machine learning model (Peng Abstract “Both levels of the control policy are trained using deep reinforcement learning.”) and based on the state of the character, the path, and the first information, a first action for the character to perform, (Peng 3 Overview “Together, the HLC and LLC form a two-level control hierarchy where the HLC processes the high-level task goals gH and provides the LLC with low-level intermediate goals gL that direct the character towards fulfilling the overall task objectives … The inputs to the HLC consist of the state, sH, and the high-level goal, gH, as specified by the task. It outputs an action, aH, which then serves as the current goal gL” 2 Related Work “The LLC receives the state, sL, and an intermediate goal, gL, as specified by the HLC, and outputs an action aL.” 6.3 HLC Tasks “The HLC goal gH = (θtar, dtar) is represented by the direction to the target θtar relative to the character’s facing direction, and the distance dtar to the target on the horizontal plane. Since the policy is not provided with an explicit parameterization of the path as input, it must learn to recognize the path from the terrain map T and plan its footsteps accordingly.” Note: Peng teaches that, with a trained machine learning model, specific actions for the character to perform next are determined. Peng teaches that the state of the character and its goals are the immediate considerations when determining an action. When referring to goals, Peng gives the example of the high-level goal that specifies the direction and distance for the character to move in. These goal variables are determined by the path and the terrain, and the state of the character is considered as well to determine a next action. This teaches the claim language that a first action is chosen based on the state of the character, the path, and the first information.) wherein the first action comprises a first type of motion included in a plurality of types of motions for which the trained machine learning model is trained to generate actions; and causing the character to perform the first action. (Peng 5 Low-Level Controller “The low-level controller LLC is responsible for coordinating joint torques to mimic the overall style of a reference motion while satisfying footstep goals and maintaining balance … Each footstep plan gL = (p̂0, p̂1, θ̂root), as shown in Figure 4, specifies the 2D target position p̂0 relative to the character on the horizontal plane for the swing foot at the end of the next step, as well as the target location for the following step p̂1 … The action aL produced by the LLC specifies target positions for PD controllers positioned at each joint.” Note: Peng teaches that smaller actions, like coordinating joint positions and performing motions, are done to accomplish the larger actions the model plans, like taking footsteps in a specific direction to follow a path.)
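For illustration only, the two-level control structure mapped above can be sketched as follows (examiner's illustrative sketch, not code from Peng; all class and function names are hypothetical stand-ins for the trained policy networks):

    class HighLevelController:
        """Queried at a coarse timescale; maps (s_H, g_H) to a footstep plan g_L."""
        def act(self, s_H, g_H):
            # g_H = (theta_tar, d_tar): direction and distance to the target (Peng 6.3).
            # Returns g_L = (p0_hat, p1_hat, theta_root): a footstep plan (Peng 5).
            raise NotImplementedError  # stands in for the trained HLC policy network

    class LowLevelController:
        """Queried at a fine timescale; maps (s_L, g_L) to PD target angles a_L."""
        def act(self, s_L, g_L):
            raise NotImplementedError  # stands in for the trained LLC policy network

    def control_step(hlc, llc, s_H, g_H, s_L):
        g_L = hlc.act(s_H, g_H)   # intermediate goal chosen by the HLC
        a_L = llc.act(s_L, g_L)   # joint PD targets executed by the simulation
        return a_L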
Regarding claims 2 and 13,
Peng teaches:
The computer-implemented method of claim 1, further comprising performing one or more operations to compute the path through at least a portion of the scene. (Peng 6 High-Level Controller “The HLC is therefore responsible for planning and steering the agent along a particular path … This environment is a variant of the pillar obstacles environment, where the obstacles consist of large blocks with side lengths varying between 0.5 m and 7 m. The policy therefore must learn to navigate around large obstacles to find paths leading to the target location.” Note: Peng directly teaches that a path is planned, or computed, by the HLC, allowing it to move through the terrain/scene.)
Regarding claims 5 and 16,
Peng teaches:
The computer-implemented method of claim 2, further comprising: generating, via the trained machine learning model and based on a state of the character subsequent to performing the first action, another path, and the first information about the scene, a second action for the character to perform; and causing the character to perform the second action. (Peng 6 High-Level Controller “The HLC is therefore responsible for planning and steering the agent along a particular path. When the agent reaches the target, the target location is randomly changed.” Note: Peng teaches that once a first path has been completed and the target is reached, a new target is produced, restarting the process so that a new path is calculated. The action process that allows the model to traverse the path is described previously.)
Regarding claim 10,
Peng teaches:
The computer-implemented method of claim 1, wherein the first information comprises a height map. (Peng 7.2 HLC Performance “The HLC’s for the path following, pillar obstacles, and block obstacles tasks all learned to identify and avoid obstacles using heightmaps and navigate across different environments seeking randomly placed targets.” Note: Peng teaches that the scene information considered includes a height map.)
Regarding claims 11 and 19,
Peng teaches:
The computer-implemented method of claim 1, wherein the character is one of a three-dimensional (3D) virtual character or a physical robot. (Peng 1 Introduction “Our principal contribution is to demonstrate that environment aware 3D bipedal locomotion skills can be learned” 3 Overview “Finally, the physics simulation is performed at 3 kHz.” Note: Peng teaches that its character is a biped in a 3D environment, or physics simulation.)
Regarding claim 18,
Peng teaches:
The one or more non-transitory computer-readable media of claim 12, wherein the plurality of types of motions include at least one of walking, running, crouch-walking, crawling, skipping, or standing. (Peng 7 Results “LLC reference motions: We train controllers using a single planar keyframed motion cycle as a motion style to imitate, as well as a set of ten motion capture steps that correspond to approximately 7 s of data from a single human subject. The clips consist of walking motions with different turning rates” 5.6 Style Modification “In addition to imitating a reference motion, the LLC can also be stylized by simple modifications to the reward function. In the following examples, we consider the addition of a style term cstyle where cstyle provides an interface through which the user can shape the motion of the LLC … Forward/Sideways Lean … Straight Leg(s) … High-Knees” Note: Peng teaches that the motions the model can perform include walking, high-knee running, and a variety of other types of motion, teaching a plurality of types of motions that includes at least one of the motions recited in the claim.)
Regarding claim 20,
Peng teaches:
The one or more non-transitory computer-readable media of claim 12, wherein the character is caused to perform the action in at least one of a simulation environment, a game environment, or a physical environment. (Peng 3 Overview, cited previously, defines its environment as a physics simulation.)
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3, 7, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Peng (DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning) in view of Karim (Procedural Locomotion of Multi-Legged Characters in Dynamic Environments).
Regarding claims 3 and 14,
Peng teaches:
The computer-implemented method of claim 2, wherein performing the one or more operations to compute the path comprises: performing one or more operations to solve for a two-dimensional (2D) path through the scene; (Peng 9 Conclusions “We would like to explore how to best learn to walk and run directly over 3D terrains, whereas our current control tasks navigate on flat paths through 3D terrains.” The Peng 6 High-Level Controller citations above detail path solving/computing. Note: Peng teaches that all the paths it develops are flat, or 2D, although they exist within a 3D environment and terrain.) performing one or more operations to refine at least one portion of the 2D path based on one or more heights of one or more obstacles within the scene; (Peng 7.2 HLC Performance “The HLC’s for the path following, pillar obstacles, and block obstacles tasks all learned to identify and avoid obstacles using heightmaps and navigate across different environments seeking randomly placed targets.” Note: Peng teaches that a refined, more optimized path is created by considering the heights of obstacles present in the scene when planning a path.) and performing one or more operations to compute one or more velocities of the character along the path. (Peng 6 High-Level Controller “Since the policy is not provided with an explicit parameterization of the path as input, it must learn to recognize the path from the terrain map T and plan its footsteps accordingly. The reward for this task is designed to encourage the character to move towards the target at a desired speed where vcom is the agent’s centre of mass velocity on the horizontal plane, and utar is a unit vector on the horizontal plane pointing towards the target. v̂com = 1 m/s specifies the desired speed at which the character should move towards the target” Note: Peng teaches that the velocity of the character is computed, and that adjustments are made to keep a specific target velocity as the character moves along the path to its target.)
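For illustration only, the velocity term mapped above can be sketched as follows (examiner's illustrative sketch; the exact functional form appears in the cited Peng section, and the shaping below is a hypothetical simplification):

    import numpy as np

    def target_speed_reward(v_com, u_tar, v_hat=1.0, w=4.0):
        # v_com: COM velocity on the horizontal plane; u_tar: unit vector to target.
        projected = float(np.dot(v_com, u_tar))    # speed toward the target
        shortfall = min(0.0, projected - v_hat)    # no penalty once v_hat is met
        return float(np.exp(-w * shortfall ** 2))  # in (0, 1], maximal when on pace

    print(target_speed_reward(np.array([0.8, 0.0]), np.array([1.0, 0.0])))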
While Peng teaches the computing of a path, the consideration of height to determine how to handle obstacles, and the computing of velocity for its character along the 2D path, it does not teach the use or computation of a 3D path. Doing so is taught in Karim, which teaches one or more operations to refine at least one portion of the 2D path based on one or more heights of one or more obstacles within the scene to generate a three-dimensional (3D) path; (Karim Abstract “The system consists of several independent blocks: a Character Controller, a Gait/Tempo Manager, a 3D Path Constructor and a Footprints Planner. The four modules work cooperatively to calculate in real-time the footprints and the 3D trajectories of the feet and the pelvis.” 4 3D Path Construction “The elevations map contains the elevation of the highest obstacle in each cell … These cells are used to build the 2D path represented by a parametric Hermite curve as shown on Figure 10(a). We sample this 2D curve and elevate it in 3D using the elevations map. Resulting waypoints are used as control points to define a 3D Hermite curve: this curve represents our 3D trajectory through the environment” Note: Karim directly teaches that 3D paths are computed, and that the 3D path specifically comes out of a 2D path that is combined with height information detailing obstacle heights to create a 3D path.) and performing one or more operations to compute one or more velocities of the character along the 3D path. (Karim 1 Introduction “Finally, our system generates the pelvis trajectory according to the overall desired direction, speed, the actual context of the environment and the feet positions and feedbacks.” Note: Karim teaches the ability to measure the speed and direction, or velocity, of characters in motion, allowing one to compute velocities while the character is moving along the path.)
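For illustration only, Karim's 2D-to-3D elevation step can be sketched as follows (examiner's illustrative sketch; Karim samples a parametric Hermite curve, simplified here to pre-sampled waypoints, and the function name is hypothetical):

    import numpy as np

    def elevate_path(path_2d, elevation_map, cell_size):
        """Lift sampled 2D waypoints into 3D using the per-cell elevations map."""
        path_3d = []
        for x, y in path_2d:
            i, j = int(x / cell_size), int(y / cell_size)  # grid cell of waypoint
            z = elevation_map[i, j]                        # highest obstacle in cell
            path_3d.append((x, y, z))
        return np.array(path_3d)  # control points for a 3D curve through the scene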
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Peng with Karim such that the model that computes a 2D path not only refines it with obstacle-height data and computes character velocities, but also elevates it into a 3D path using that height data.
There are several reasons that would motivate one to do so, one of which is more accurate motion planning: if the character is aware of the specific 3D terrain along the path it must follow, quantities like foot placement and center of mass can be planned with more information.
Regarding claim 7,
Peng teaches:
The computer-implemented method of claim 2,
Peng does not teach the weighting or scoring of height differences for its potential path in an effort to avoid traversing larger unnecessary heights; doing so is taught in Karim, which teaches wherein the one or more operations to solve for the 2D path negatively weight larger height differences along the 2D path. (Karim 5.1 Potential Footprints “For each potential target, we calculate a score based on several criteria’s: distance to the preferred footprint, difference of elevation between the footprint and the surrounding cells, leading or not to a feet crossing, etc. The combination of all these criteria’s provides the final cell’s score (footprint score) that we normalize between 0 and 1”
[media_image1.png — greyscale image (360 × 422) reproduced from Karim]
5.2 Best Pair of Footprint and Path “We generate for each created 2D/3D trajectory a score based on the application specifications, like the total length of the path, the acceleration and the curve tangents profile, etc. For instance, the system shown in the accompanying video prefers trajectories that are more straight (direct) with less curvature, a preference observed in biomechanics as it minimizes the energy cost” Note: Here Karim teaches that a score is assigned based in part on the difference in elevation, where the greater the elevation present in a cell (a section of the environment in Karim), the more negatively it is weighted, with a score closer to 0. As seen in Fig. 13, potential paths (in pink) that pass over a high-elevation point are not chosen over a path that goes around the obstacle at lower elevation. It is explicitly stated that these scores apply not just to footprints but to the 2D trajectories, or paths, the character takes.)
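For illustration only, the elevation-penalized scoring Karim describes can be sketched as follows (examiner's illustrative sketch; the weights and the exact combination of criteria are hypothetical simplifications of Karim Secs. 5.1-5.2):

    def footprint_score(dist_to_preferred, elevation_diff, feet_crossing,
                        w_d=0.4, w_e=0.4, w_c=0.2):
        # Larger elevation differences lower the score (negative weighting).
        penalty = (w_d * dist_to_preferred + w_e * elevation_diff
                   + w_c * feet_crossing)
        return max(0.0, 1.0 - penalty)  # normalized to [0, 1]; higher is better

    def path_score(footprint_scores, path_length, w_len=0.1):
        # Karim also scores whole 2D/3D trajectories (length, curvature, etc.).
        return sum(footprint_scores) / len(footprint_scores) - w_len * path_length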
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Peng with Karim such that Peng’s heightmap further guides its path planning, with higher-elevation areas weighted more negatively than routes with smaller differences in elevation, encouraging path planning that avoids high-elevation terrain.
There are several reasons that would motivate one to do so, one of which is obstacle and difficult-terrain avoidance. Peng already uses its heightmap to identify obstacles to avoid when planning its path; negatively weighting high-elevation areas, which may be difficult for the character to climb and traverse compared to a flatter surface, would further avoid such issues.
Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Peng (DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning) in view of Shah (LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action).
Regarding claims 4 and 15,
Peng teaches:
The computer-implemented method of claim 2, wherein the one or more operations to compute the path are based on second information about the scene, and wherein the second information indicates at least one of one or more heights within the scene, one or more landmarks within the scene, or one or more obstacles within the scene. (Peng 7.2 HLC Performance details how “landmarks” like pillars and obstacles, along with their heights and terrain height information, are used to compute a path.)
Peng does not, however, teach the use of textual information in computing a path; the use of textual instructions to guide path planning is taught in Shah. Specifically, Shah teaches one or more operations to compute the path are based on a textual instruction (Shah 1 Introduction “Given free-form textual instructions, we use a pre-trained large language model (LLM: GPT-3 [12]) to decode the instructions into a sequence of textual landmarks … A novel search algorithm is then used to maximize a probabilistic objective, and find a plan for the robot … Our primary contribution is Large Model Navigation, or LM-Nav, an embodied instruction following system that combines three large independently pre-trained models — a self-supervised robotic control model” 5.2 Following Instructions With LM-NAV “LM-Nav’s ability to parse complex instructions with multiple landmarks specifying the route — despite the possibility of a shorter route directly to the final landmark that ignores instructions, the robot finds a path that visits all of the landmarks in the correct order.” Note: Shah teaches the ability for textual instructions, or language prompts, to be provided to the machine learning model controlling the character, which in this case is a physical robot. It is taught that a path based on the information in the textual instructions is made and taken by the character.)
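For illustration only, the pipeline shape Shah describes (instructions, then a landmark sequence, then a path search) can be sketched as follows (examiner's illustrative sketch; the landmark parser and match score below are hypothetical stand-ins for Shah's pre-trained LLM and vision-language components, and the greedy search is a simplification of Shah's probabilistic objective):

    def parse_landmarks(instruction):
        # In Shah, a pre-trained LLM (GPT-3) decodes free-form text into an
        # ordered list of textual landmarks; a lookup table stands in here.
        canned = {"go past the stop sign then to the blue building":
                  ["stop sign", "blue building"]}
        return canned.get(instruction, [])

    def plan_path(nodes, landmarks, landmark_prob):
        # Greedy simplification: visit, in order, the graph node that best
        # matches each landmark according to a vision-language match score.
        route = []
        for lm in landmarks:
            best = max(nodes, key=lambda n: landmark_prob(n, lm))
            route.append(best)
        return route  # Shah instead maximizes a joint probabilistic objective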
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Peng with Shah such that, aside from considering obstacles, landmarks, and height information when making paths, textual instructions provided to the model also determine the computation of a path.
There are several reasons that would motivate one to do so, one being the ability to have a model perform more complex tasks describable in textual instructions. Peng takes efforts to allow its model to handle complex issues like obstacles and non-flat terrain in its path planning; further extending the model’s capability with the teachings of Shah, such that a path can be planned according to written instructions, would have been obvious.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Peng (DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning) in view of Anguelov (Collision Avoidance for Preplanned Locomotion).
Regarding claim 6,
Peng teaches:
The computer-implemented method of claim 2, further comprising, responsive to determining that the character (i) has reached a goal, performing one or more operations to compute another path through the scene. (Peng 6 High-Level Controller, specifically the citation for the previous claim, details how, once a goal or target is reached, Peng will have its model compute another path to a new target.)
Peng does not, however, teach the explicit anticipation of collisions to change its planned path; doing so is taught in Anguelov, which teaches responsive to determining that the character (ii) will collide with an obstacle based on an estimated movement of the obstacle and the path, performing one or more operations to compute another path through the scene. (Anguelov 22.2 Collision Avoidance for Preplanned Locomotion “The first stage and the core of our avoidance approach is the collision detection mechanism. Our characters (now termed agents) are modeled as collision spheres with a fixed collision radius, and the premise behind the collision detection system is to simply slide our spheres along our paths and check whether they make it to the end of their paths without colliding with any other spheres. During the frame update of each agent’s animation/locomotion programs, the agent will query the avoidance system to see whether its current path is collision-free. The avoidance system will perform a collision detection pass as well as attempt to trivially resolve a detected collision.” Note: Anguelov directly teaches that a collision that may occur on a path is detected, and the path is recomputed to account for this.)
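For illustration only, the sphere-based detection Anguelov describes can be sketched as follows (examiner's illustrative sketch; the sampling scheme and function name are hypothetical simplifications of Anguelov Sec. 22.2):

    import numpy as np

    def paths_collide(path_a, path_b, r_a, r_b):
        """Slide both agents' collision spheres along their paths in lockstep
        and report whether they ever overlap."""
        for p_a, p_b in zip(path_a, path_b):  # time-aligned path samples
            if np.linalg.norm(np.asarray(p_a) - np.asarray(p_b)) < r_a + r_b:
                return True                   # predicted collision
        return False

    # During each agent's frame update, a detected collision would trigger
    # either a trivial local resolution or a recomputation of the path.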
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Peng with Anguelov such that another path is computed not just upon finishing one, but also to avoid collisions with obstacles.
There are several reasons that would motivate one to do so. Peng has already been shown to plan its initial path around obstacles to avoid them, so the idea of checking the path to ensure no collisions will occur, and rerouting accordingly, would be obvious.
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Peng (DeepLoco: Dynamic Locomotion Skills Using Hierarchical Deep Reinforcement Learning) in view of Tao (Learning to Get Up).
Regarding claim 8,
Peng teaches:
The computer-implemented method of claim 1, further comprising performing one or more operations to train a first machine learning model to generate the trained machine learning model based on at least one of (i) a first reward based on a displacement between one or more actions generated by the first machine learning model and one or more paths included in training data (Peng Abstract “Both levels of the control policy are trained using deep reinforcement learning.” 6 High-Level Controller “The HLC output action aH specifies a footstep plan gL for the LLC” 5 Low-Level Controller “LLC Goal: Each footstep plan gL = (p̂0, p̂1, θ̂root), as shown in Figure 4, specifies the 2D target position p̂0 relative to the character on the horizontal plane for the swing foot at the end of the next step, as well as the target location for the following step p̂1.” Note: Peng teaches that its low-level controller’s goal is to move the character such that it takes an immediate forward step, then, relative to that step and new location, takes a second step forward. These individual planned footsteps make up a set of footstep plans which are created by the HLC and guide the character along its planned path. Since rewards are given based on goals, Peng teaches that rewards are given for the LLC’s displacement between its first planned step and its second, across the whole series of footstep plans. This teaches a reward based on the displacement between one or more actions generated by the model and the planned paths.) (v) a fifth reward based on a similarity of the one or more actions to one or more recordings of humans performing the plurality of types of motions. (Peng 7 Results “The character was designed to have similar measurements to those of the human subject. By default, we use the results based on the motion capture styles,” 5.1 Reference Motion “The selected clip then acts as the reference motion to shape the reward function for the LLC over the course of the upcoming step.” 5.6 Style Modification “In addition to imitating a reference motion, the LLC can also be stylized by simple modifications to the reward function. In the following examples, we consider the addition of a style term cstyle where cstyle provides an interface through which the user can shape the motion of the LLC … Forward/Sideways Lean … Straight Leg(s) … High-Knees” Note: Here Peng directly teaches that rewards relating to imitating recorded human motion are used to train Peng’s character motion, and that a plurality of motions, the three walking styles presented, can be learned by Peng’s model via a reward function.)
Peng does not teach rewards based on head heights or head tracking; doing so is taught in Tao, which teaches operations to train a first machine learning model to generate the trained machine learning model based on (ii) a second reward based on a difference in height between a head of the character during the one or more actions and the one or more paths, (Tao 7 Experimentation Setup and Task Specification “The character mainly focuses on finding a coarse get-up solution by maximizing the head height. Each training episode ends when it exceeds 250 steps without any early termination criteria … The reward function consist of the following terms: (1) rh: maximize the head height,” Note: Tao teaches that during “one or more actions”, in this case the character taking steps, the head height is measured and rewards are based on the head height. This teaches rewards based on differences in head height throughout the action, as Tao teaches the head height is rewarded for being maximized, meaning that if the character walks consistently without errors such as stumbling, or other actions that would lower the head height, a higher reward is given.) (iii) a third reward based on an alignment of a direction of the head of the character during the one or more actions with the one or more paths, (Tao C Detailed Explanation of State Variables “As illustrated in Fig. 12, the head height is measured from the head link to the ground along the vertical direction. The end-effector positions are computed as the translational offset between the end-effectors and the root link in egocentric coordinates. The state also involves a variable indicating the straightness of the torso [x^up_torso, y^up_torso, z^up_torso] as the z-components of the torso orientation vectors [X_torso, Y_torso, Z_torso].” Note: The head height, which has already been established to be a variable on which rewards are based, is taught to be measured along a direction, teaching a reward based on alignment of a direction of the head.) (iv) a fourth reward based on the head of the character during the one or more actions being at a highest height, (Tao 7 Experimentation Setup and Task Specification, cited previously, details that the reward function is based on maximizing head height in the context of performing actions, in this case walking.)
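For illustration only, the head-height term Tao describes can be sketched as follows (examiner's illustrative sketch; Tao's full reward has several terms, and the normalization constant below is a hypothetical choice):

    def head_height_reward(head_height, standing_height=1.6):
        # Tao's r_h term rewards maximizing the head height; normalizing by
        # the character's standing height keeps the term in [0, 1].
        return min(head_height / standing_height, 1.0)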
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Peng with Tao such that rewards are based not only on displacement between actions and motion similarity to a reference motion, but also on head-height consistency, maximum head height, and head-direction alignment.
There are several reasons that would motivate one to do so, one being to decrease the chance of the character falling or traversing suboptimal uneven terrain by giving rewards when it maintains a consistent, maximized, and properly aligned head height.
Regarding claim 9,
Peng teaches:
The computer-implemented method of claim 8, wherein the second reward is generated by a second machine learning model that is trained simultaneously with the first machine learning model. (Peng 3 Overview “The environment then also provides separate reward signals rH and rL to the HLC and LLC, reflecting progress towards their respective goals gH and gL. Both controllers are trained with a common actor-critic learning algorithm. The policy (actor) is trained using a positive-temporal difference update scheme modeled after CACLA [Van Hasselt 2012], and the value function (critic) is trained using Bellman backups.” 4 Policy Representation and Learning “In reinforcement learning, the objective is often to learn an optimal policy π* that maximizes the expected long-term cumulative reward J(π) … Algorithm 1 illustrates the common learning algorithm for both the LLC and HLC. For the purpose of learning, the character’s experiences are summarized by tuples τi = (si, gi, ai, ri, s′i, λi), recording the start state, goal, action, reward, next state, and application of exploration noise for each action performed by the character. Each policy is trained using an actor-critic framework, where a policy π(s, g, a|θμ) and value function V(s, g|θv), with parameters θμ and θv, are learned in tandem. To update the value function, a minibatch of n tuples {τi} are sampled from D and used to perform a Bellman backup … The learned value function is then used to update the policy.” Note: Peng teaches that its value function, or critic, is trained, meaning it is a machine learning model. As the critic, or value function, is what creates and handles rewards, this teaches that the reward is generated by a trained machine learning model. Peng also teaches that the value function, or critic, is trained simultaneously with the first machine learning model, which in Peng is the controllers. Peng directly teaches that, as the LLC and HLC learn, their experiences, stored in the tuples τi, are also used to train the critic/value function to better produce rewards.)
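For illustration only, the shape of the simultaneous actor-critic training Peng describes can be sketched as follows (examiner's illustrative sketch; the network classes and their methods are hypothetical stand-ins, and only the update structure follows the cited passage):

    import random

    def train_step(replay_D, actor, critic, gamma=0.99, n=32):
        batch = random.sample(replay_D, n)            # tuples (s, g, a, r, s2)
        for (s, g, a, r, s2) in batch:
            target = r + gamma * critic.value(s2, g)  # Bellman backup (critic)
            critic.update(s, g, target)               # critic trained in tandem
            delta = target - critic.value(s, g)       # temporal-difference error
            if delta > 0:                             # CACLA-style positive-TD
                actor.update_toward(s, g, a)          # reinforce the taken action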
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN GREGORY HAKALA whose telephone number is (571)272-7863. The examiner can normally be reached 8:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALAN GREGORY HAKALA/
Examiner, Art Unit 2617
/KING Y POON/Supervisory Patent Examiner, Art Unit 2617