DETAILED ACTION
This Office action is in response to applicant’s amendment and arguments filed 12/02/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s response includes four main arguments against the rejection of claims 1-20 under 35 U.S.C. 102 or 103.
1. Applicant argues that amended independent claim 1, as well as similar claims and their dependents, overcomes the rejection under 35 U.S.C. 102 due to the addition of the limitation “combining independently denoised motion segments corresponding to the text prompts of the timeline within an individual denoising step of one or more denoising steps using a diffusion model,” in particular arguing that the denoised motion segments taught by Shafir are not fully independent of each other.
This argument has been fully considered but it is not persuasive.
Qian teaches combining independently generated motion segments (fig. 2: “The model contains two sub-nets Navae and Ncvae, where Navae generates motions of single atomic action and Ncvae refines and connects the generated motions of atomic actions.”). Qian additionally teaches the use of a diffusion model; its exact mechanism is not explained, as noted in applicant’s arguments, but one of ordinary skill in the art would recognize that most diffusion models operate via a denoising procedure. The limitation of combining denoised motion segments within an individual denoising step is taught by Shafir (see the claim 1 citations in the following section).
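For illustration only of the denoising procedure that one of ordinary skill in the art would associate with a diffusion model, the following is a minimal, hypothetical sketch of a DDPM-style sampling loop in Python. It is not the implementation of Qian or of any other cited reference; the names (model, alpha, alpha_bar, sigma, cond) are assumptions for illustration.

import torch

def denoise(model, shape, T, alpha, alpha_bar, sigma, cond):
    # Hypothetical DDPM-style sampler; not the implementation of any cited reference.
    x = torch.randn(shape)                      # start from Gaussian noise
    for t in reversed(range(T)):                # one pass per denoising step
        eps = model(x, t, cond)                 # network predicts the noise component
        mean = (x - (1 - alpha[t]) / (1 - alpha_bar[t]).sqrt() * eps) / alpha[t].sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + sigma[t] * noise             # partially denoised sample for the next step
    return x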
Though Qian is vague on the details of its diffusion model, it is explicitly designed to be combined with other models (pg. 2309 col. 2 “Note that the model and training framework we propose is flexible rather than being specific to certain methods, where the encoders and decoders can be easily replaced by other kinds of motion synthesis and semantic representation extraction methods”), for instance the invention of Shafir.
Therefore, the rejection of amended claim 1 with respect to the contested limitation is maintained.
2. Applicant argues that the amended claim 6, as well as similar claims and their dependents, overcome the rejection under 35 U.S.C. 103 due to the addition of the limitation “spatially stitching overlapping motion segments by body part within the individual denoising step”.
Applicant’s arguments with respect to the rejection(s) of claim 6 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Athanasiou et al. (SINC: Spatial Composition of 3D Human Motions for Simultaneous Action Generation); see following section for specific references, rationale, and motivation to combine.
3. Applicant argues that the amended claim 8 overcomes the rejection under 35 U.S.C. 103 due to the addition of the limitation “within the individual denoising step”.
In particular, applicant argues that “the cited technique appears to focus on synthetic data generation, not diffusion or denoising”.
This argument has been fully considered but it is not persuasive.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Shafir is relied upon to teach the limitation of combining motion segments within an individual denoising step; Athanasiou teaches the remaining limitations.
Furthermore, the claimed invention itself and most of the other relevant references, including Athanasiou, are all applied towards synthetic data generation, making them analogous art. Diffusion, via denoising, is just one of multiple types of machine learning techniques by which synthetic data can be generated; Athanasiou specifically states that it is designed to be compatible with other techniques (pg. 3 col. 1 “Our approach is complementary and applicable to existing models for text-to-motion synthesis.”). Therefore, it would have been obvious to one of ordinary skill in the art to have combined Athanasiou with the invention of Shafir, which does teach combining motion segments within an individual denoising step.
Therefore, the rejection of claim 8 under 35 U.S.C. 103 is maintained.
4. Regarding amended claim 9, applicant argues that the cited references do not teach “combining denoised segments associated with a common body part” because Shafir, which teaches combining denoised motion segments, does not teach that they are denoised independently, and Athanasiou, which teaches combining overlapping segments associated with a common body part, does not teach that the motion segments are denoised.
This argument has been fully considered but it is not persuasive.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Qian teaches combining motion segments which are individually generated; Shafir teaches combining denoised motion segments within an individual denoising step; Athanasiou teaches combining overlapping motion segments associated with a common body part. Rationale for combining these references is included in the following section.
Therefore, the rejection of claim 9 under 35 U.S.C. 103 is maintained.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 7, 11, 17, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qian et al. ("Breaking The Limits of Text-conditioned 3D Motion Synthesis with Elaborative Descriptions", hereinafter "Qian") in view of Shafir et al. ("Human Motion Diffusion as a Generative Prior", hereinafter "Shafir").
Regarding claim 1, Qian teaches: One or more processors comprising one or more processing units (pg. 2312 col. 2 “Our model is trained on eight V100 GPUs”) to:
generate a timeline that includes an arrangement of text prompts in corresponding temporal intervals (fig. 1 shows text prompt inputs in chronological order with corresponding duration inputs; fig. 3 shows text prompts represented as time intervals arranged chronologically); and
generate, based at least on combining independently generated motion segments (fig. 2, motion segments are initially generated independently and subsequently combined: “The model contains two sub-nets Navae and Ncvae, where Navae generates motions of single atomic action and Ncvae refines and connects the generated motions of atomic actions.”) corresponding to the text prompts of the timeline (fig. 2, atomic actions generated based on “Action Description” input) using a diffusion model (pg. 2309 col. 2 “Note that the model and training framework we propose is flexible rather than being specific to certain methods, where the encoders and decoders can be easily replaced by other kinds of motion synthesis and semantic representation extraction methods (e.g., GAN[1], Diffusion[39], Bert[8], Distilled Bert[36], or GPT3[9]).”), a representation of a motion sequence of a character corresponding to the timeline (figs. 1 and 3 show motion sequences generated from corresponding text prompt sequences).
Qian does not explicitly teach that the independently generated motion segments of Qian are independently denoised, though one of ordinary skill in the art may come to this conclusion because denoising is a standard part of the generative process of diffusion models. Also, Qian does not explicitly teach that the combination of motion segments occurs within an individual denoising step of one or more denoising steps.
Shafir teaches: denoising a motion segment as part of a method of generating 3D human animation from text input using a diffusion model (pg. 3 col. 1 “Method” section: “In this work, we use the recent Motion Diffusion Model (MDM) [Tevet et al. 2023], pre-trained for the task of text-to-motion, to learn new generative tasks… MDM is a denoising diffusion model based on the DDPM [Ho et al. 2020] framework.”), and combining denoised motion segments corresponding to the text prompts of the timeline within an individual denoising step of one or more denoising steps using a diffusion model (pg. 2 col. 1 “DoubleTake consists of two phases for every diffusion iteration - in the first step, the individual motions, or intervals, are generated together in the same batch, each aware of the context of its neighboring intervals. Then, the second take refines the transitions between intervals to better match those generated in the previous phase.”; where “refining the transitions between intervals” corresponds to the claim limitation of “combining motion segments”; fig. 3 shows that the motion segments correspond to text prompts on a timeline; the diffusion model of Shafir uses denoising as evidenced by the previous citation).
Qian and Shafir are both analogous to the claimed invention because they are in the same field of using a diffusion model to generate 3D human motion animation from text input. Furthermore, the invention of Qian is explicitly designed for combination, intended to incorporate various other methodologies and types of neural networks (pg. 2309 col. 2 “Note that the model and training framework we propose is flexible rather than being specific to certain methods, where the encoders and decoders can be easily replaced by other kinds of motion synthesis and semantic representation extraction methods”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the invention of Qian with the teachings of Shafir to specify the structure and functionality of the diffusion model being used. The motivation would have been to make it clearer for a user to operate the invention of Qian using a diffusion model, as Qian is vague on some of the details.
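For illustration only of the concept of combining independently denoised motion segments within an individual denoising step, as mapped above, the following is a minimal, hypothetical Python sketch. It is not the DoubleTake implementation of Shafir; the segment shapes, the linear blending of transition frames, and all names are assumptions for illustration.

import torch

def combined_denoising_step(model, segments, prompts, t, blend_len):
    # Each segment, shaped (frames, features), is denoised independently,
    # conditioned on its own text prompt from the timeline.
    denoised = [model(seg, t, prompt) for seg, prompt in zip(segments, prompts)]
    # Within the same denoising step, neighboring segments are combined by
    # linearly blending their overlapping transition frames.
    w = torch.linspace(0.0, 1.0, blend_len).view(-1, 1)
    for i in range(len(denoised) - 1):
        tail = denoised[i][-blend_len:]
        head = denoised[i + 1][:blend_len]
        blended = (1 - w) * tail + w * head
        denoised[i][-blend_len:] = blended
        denoised[i + 1][:blend_len] = blended
    return denoised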
Regarding claim 7, the combination of Qian in view of Shafir teaches: The one or more processors of claim 1, wherein the one or more processing units are further to combine the independently denoised motion segments (Qian teaches combining independently generated motion segments; Shafir teaches combining denoised motion segments; see claim 1 for references, rationale, and motivation to combine) based at least on spatially and temporally (Qian fig. 3: multiple actions occur in sequence, requiring a temporal combination, but the action “jump in place” happens concurrently with all of the other actions, requiring a spatial combination; also see pg. 2308 section “Prompt Engineering for Textual Description”) stitching the independently denoised motion segments (Qian fig. 2 shows combination of independently generated motion segments: “The model contains two sub-nets Navae and Ncvae, where Navae generates motions of single atomic action and Ncvae refines and connects the generated motions of atomic actions.”; Shafir teaches the “denoised” aspect as previously discussed) within the individual denoising step (Shafir pg. 2 col. 1 “DoubleTake consists of two phases for every diffusion iteration - in the first step, the individual motions, or intervals, are generated together in the same batch, each aware of the context of its neighboring intervals. Then, the second take refines the transitions between intervals to better match those generated in the previous phase.”; where “refining the transitions between intervals” corresponds to the claim limitation of “temporally stitching motion segments” because the transitions occur between different motion segments in time; fig. 3 shows the connections between motion segments along a timeline).
The motivation for modifying the invention of Qian with the teachings of Shafir would have been similar to the motivation described in the rejection of claim 1.
Regarding claims 11 and 19, they are rejected using the same references, rationales, and motivation to combine described in the rejection of claim 1 because their limitations substantially correspond to the limitations of claim 1.
Regarding claim 17, it is rejected using the same references, rationales, and motivation to combine described in the rejection of claim 7 because its limitations substantially correspond to the limitations of claim 7.
Claim(s) 2-5 and 12-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qian (Breaking The Limits of Text-conditioned 3D Motion Synthesis with Elaborative Descriptions) in view of Shafir et al. (Human Motion Diffusion as a Generative Prior) as applied to claims 1 and 11 above, and further in view of Babcock et al. (US 20150363960 A1, hereinafter "Babcock").
Regarding claim 2, the combination of Qian in view of Shafir teaches: The one or more processors of claim 1, wherein the one or more processing units are further to generate the timeline via input representative of the arrangement of the text prompts on the timeline (Qian fig. 3 shows chronological sequence of text prompts based on original text input: “Illustration of temporal segmenting and prompt engineering. The translucent rounded rectangles with different colors represent the original annotations. The texts above brackets are modified descriptions Wi after segmenting and prompt engineering of atomic action ai.”).
The combination of Qian in view of Shafir does not explicitly teach that the input is accepted via a graphical user interface, or arranged in a plurality of tracks on the timeline.
Babcock teaches a graphical user interface to control a 3D animation, where the interface is arranged in a plurality of tracks on a timeline (fig. 3, [0030] “FIG. 3 depicts a window 300 including a graphical user interface (GUI) 310 for producing a computer-generated animation. The GUI 310 includes a first view of a two-dimensional array of cells 320 that provides a timeline of data used for the computer-generated animation.”, [0031] “Rows of the array 320 are associated with elements of the animation, and columns are associated with frames of the animation.”).
Multi-track graphical user interfaces are standard for animation software (also see additional references in conclusion section). Furthermore, Babcock and the combination of Qian in view of Shafir are both analogous to the claimed invention because they are in the same field of 3D animation of a humanoid character. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the invention of Qian in view of Shafir to incorporate the multi-track graphical user interface of Babcock. The motivation would have been to improve user experience, allowing a user to visualize and organize their text inputs.
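For illustration only of a timeline in which text prompts are arranged in a plurality of tracks over temporal intervals, the following is a minimal, hypothetical data-structure sketch in Python. It does not reproduce the GUI of Babcock or the input format of Qian; all class and field names are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class PromptClip:
    text: str      # text prompt, e.g. "walk forward"
    start: float   # start time in seconds
    end: float     # end time in seconds

@dataclass
class Track:
    name: str
    clips: list = field(default_factory=list)

@dataclass
class Timeline:
    tracks: list = field(default_factory=list)

# Example: two tracks whose clips overlap in time, so actions may occur simultaneously.
timeline = Timeline(tracks=[
    Track("track 1", [PromptClip("walk forward", 0.0, 2.0), PromptClip("sit down", 2.0, 4.0)]),
    Track("track 2", [PromptClip("wave right hand", 1.0, 3.0)]),
])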
Regarding claim 3, the combination of Qian in view of Shafir teaches: The one or more processors of claim 1, wherein the one or more processing units are further to generate the timeline via input specifying a temporal interval for at least one of the text prompts (Qian fig. 1, input durations are provided along with text prompts).
The combination of Qian in view of Shafir does not explicitly teach a graphical user interface that accepts the input.
Babcock teaches a graphical user interface which is arranged as a timeline and used to control a 3D animation (fig. 3, [0030] “FIG. 3 depicts a window 300 including a graphical user interface (GUI) 310 for producing a computer-generated animation. The GUI 310 includes a first view of a two-dimensional array of cells 320 that provides a timeline of data used for the computer-generated animation.”, [0031] “Rows of the array 320 are associated with elements of the animation, and columns are associated with frames of the animation.”).
The motivation for modifying the invention of Qian in view of Shafir with the graphical user interface of Babcock would have been similar to the motivation described in the rejection of claim 2.
Regarding claim 4, the combination of Qian in view of Shafir teaches: The one or more processors of claim 1, wherein the one or more processing units are further to generate the timeline via input specifying a temporal composition of a sequence of the text prompts instructing a sequence of actions to be performed by the character in non-overlapping temporal intervals (Qian fig. 1, text inputs represent a temporal composition of four actions, where the corresponding temporal intervals do not overlap).
The combination of Qian in view of Shafir does not explicitly teach a graphical user interface that accepts the input.
Babcock teaches a graphical user interface which is arranged as a timeline and used to control a 3D animation (same citations as claim 2).
The motivation for modifying the invention of Qian in view of Shafir with the graphical user interface of Babcock would have been similar to the motivation described in the rejection of claim 2.
Regarding claim 5, the combination of Qian and Shafir teaches: The one or more processors of claim 1, wherein the one or more processing units are further to generate the timeline via input specifying a spatial composition of a set of the text prompts instructing actions to be performed by the character simultaneously with different body parts (Qian pg. 2308 col. 1 describes model input: “Clearly defined, the model takes in natural language descriptions W1:m = {W1, W2, ..., Wm} of the complicated action A1:m = {a1, a2, ..., am}, duration of each atomic action D1:m = {D1, D2, ..., Dm}”; actions and durations are used to generate timeline as seen in fig. 3, where certain atomic actions performed using different body parts overlap each other on the timeline; pg. 2308 col. 2 “In other words, multiple atomic actions may happen simultaneously during some temporal stages.”).
The combination of Qian in view of Shafir does not explicitly teach a graphical user interface that accepts the input.
Babcock teaches a graphical user interface which is arranged as a timeline and used to control a 3D animation (same citations as claim 2).
The motivation for modifying the invention of Qian in view of Shafir with the graphical user interface of Babcock would have been similar to the motivation described in the rejection of claim 2.
Regarding claims 12, 13, 14, and 15, they are rejected using the same references, rationales, and motivations to combine described in the rejections of claims 2, 3, 4, and 5 respectively, because their limitations substantially correspond to the limitations of claims 2, 3, 4, and 5 respectively.
Claim(s) 6, 9, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qian (Breaking The Limits of Text-conditioned 3D Motion Synthesis with Elaborative Descriptions) in view of Shafir et al. (Human Motion Diffusion as a Generative Prior) as applied to claims 1 and 11 above, and further in view of Athanasiou et al. (SINC: Spatial Composition of 3D Human Motions for Simultaneous Action Generation, hereinafter "Athanasiou").
Regarding claim 6, the combination of Qian in view of Shafir teaches: The one or more processors of claim 1, wherein the one or more processing units are further to combine the independently denoised motion segments within the individual denoising step (Qian teaches combining independently generated motion segments; Shafir teaches combining denoised motion segments within an individual denoising step; see claim 1 for references, rationale, and motivation to combine).
The combination of Qian in view of Shafir does not explicitly teach that the motion segments are combined within the individual denoising step based at least on spatially stitching overlapping motion segments by body part.
Athanasiou teaches combining motion segments based at least on spatially stitching overlapping motion segments by body part (Abstract “spatial compositing requires understanding which body parts are involved in which action, to be able to move them simultaneously”; fig. 1 and 2 show the spatial combination of simultaneous, overlapping motions by isolating body parts and compositing their motions; fig. 4 shows how the invention of Athanasiou improves upon other models because it is able to combine overlapping motions containing the same body part).
Athanasiou and the combination of Qian and Shafir are both analogous to the claimed invention because they are in the same field of generating 3D human motion animation from text input. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the invention of Qian in view of Shafir with the teachings of Athanasiou to subdivide generated motion segments to the level of individual body parts. The motivation would have been to enable a finer level of control over the model in order to improve the smoothness of the transitions.
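For illustration only of spatially stitching overlapping motion segments by body part, the following is a minimal, hypothetical Python sketch. It is not the SINC implementation of Athanasiou; the joint groupings, joint indices, tensor shapes, and function names are assumptions for illustration.

import torch

# Assumed grouping of skeleton joint indices into body parts.
BODY_PARTS = {
    "legs":  [0, 1, 2, 3, 4, 5],
    "torso": [6, 7, 8],
    "arms":  [9, 10, 11, 12, 13, 14],
}

def spatial_stitch(base_motion, overlay_motion, overlay_parts):
    # Both motions are shaped (frames, joints, features). The joints belonging to
    # overlay_parts are taken from overlay_motion; all other joints keep the base motion.
    stitched = base_motion.clone()
    for part in overlay_parts:
        stitched[:, BODY_PARTS[part]] = overlay_motion[:, BODY_PARTS[part]]
    return stitched

# Example: combine a "walk" motion (legs and torso from the base segment) with a
# simultaneous "raise arms" motion (arms taken from the overlapping segment).
# stitched = spatial_stitch(walk_motion, raise_arms_motion, ["arms"])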
Regarding claim 9, the combination of Qian in view of Shafir teaches: The one or more processors of claim 1, wherein the one or more processing units are further to expand two or more of the temporal intervals to overlap with each other (Shafir pg. 3 fig. 3, “handshake” sections are expanded overlapping sections of the temporal intervals), generate overlapping denoised segments (Shafir pg. 3 fig. 3 motion segments including “handshakes”) corresponding to the independently denoised motion segments (Qian teaches independently generated motion segments; Shafir teaches denoised motion segments; see claim 1 for references, rationale, and motivation to combine) based at least on denoising an expanded motion segment for at least one of the text prompts on the timeline (Shafir fig. 3 “At the first take, we generate each interval as a single sample handshaking neighboring samples”, where each interval corresponds to a text prompt on the timeline), and combine the overlapping denoised segments within the individual denoising step (Shafir pg. 2 col. 1 “DoubleTake consists of two phases for every diffusion iteration - in the first step, the individual motions, or intervals, are generated together in the same batch, each aware of the context of its neighboring intervals. Then, the second take refines the transitions between intervals to better match those generated in the previous phase.”; where “refining the transitions between intervals” corresponds to the claim limitation of “combining overlapping motion segments”).
The combination of Qian and Shafir does not explicitly teach: combine the overlapping denoised segments associated with a common body part.
Athanasiou teaches identifying, isolating and combining components of human motion animations based on the individual body parts (fig. 2 “Body parts”, “Single motions”, and “Composited motion” steps, “Here, we combine two motion sequences from the training set with the corresponding labels ‘stroll’ and ‘raise arms’. We first prompt GPT-3 with the instructions, few-shot examples containing question-answer pairs, and giving the action of interest in the last question without the answer. We minimally post-process the output of GPT-3 to assign this action to a set of body parts. The relevant body parts from each motion are then stitched together to form a new synthetically composited motion.”).
Athanasiou and the combination of Qian and Shafir are both analogous to the claimed invention because they are in the same field of generating 3D human motion animation from text input. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the invention of Qian in view of Shafir with the teachings of Athanasiou to subdivide generated motion segments to the level of individual body parts and perform the overlapping procedure of Shafir on the subdivided segments. The motivation would have been to enable a finer level of control over the model in order to improve the smoothness of the transitions.
Regarding claim 16, it is rejected using the same references, rationales, and motivations to combine described in the rejection of claim 6 because its limitations substantially correspond to the limitations of claim 6.
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qian (Breaking The Limits of Text-conditioned 3D Motion Synthesis with Elaborative Descriptions) in view of Shafir et al. (Human Motion Diffusion as a Generative Prior) as applied to claim 1 above, and further in view of Athanasiou (SINC: Spatial Composition of 3D Human Motions for Simultaneous Action Generation) and Babcock (US 20150363960 A1).
Regarding claim 8, the combination of Qian in view of Shafir teaches: The one or more processors of claim 1, as well as the concept of denoised motion segments and combining the denoised motion segments within the individual denoising step (Shafir, see claim 1).
The combination of Qian and Shafir does not explicitly teach: wherein the one or more processing units are further to generate the motion sequence based at least on extracting denoised per-part motion segments associated with different body parts from full-body denoised motion segments and combining the denoised per-part motion segments.
Athanasiou teaches: wherein the one or more processing units are further to generate the motion sequence based at least on extracting per-part motion segments associated with different body parts from full-body motion segments and combining the per-part motion segments (fig. 2 “Single motions” and “Composited motion” steps, “Here, we combine two motion sequences from the training set with the corresponding labels ‘stroll’ and ‘raise arms’. We first prompt GPT-3 with the instructions, few-shot examples containing question-answer pairs, and giving the action of interest in the last question without the answer. We minimally post-process the output of GPT-3 to assign this action to a set of body parts. The relevant body parts from each motion are then stitched together to form a new synthetically composited motion.”).
Athanasiou and the combination of Qian in view of Shafir are both analogous to the claimed invention because they are in the same field of generating 3D human motion animation from text input. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the invention of Qian in view of Shafir, which focuses more on temporal composition, with the spatial composition methodology of Athanasiou, and to have incorporated that methodology into the denoising-based diffusion process of Qian in view of Shafir, so that combinations of multiple simultaneous actions performed by particular body parts can be generated more easily. The motivation would have been to enable a finer level of control over the model and expand its ability to generate more complex combinations of motions.
The combination of Qian in view of Shafir and further in view of Athanasiou does not explicitly teach that the motion segments are associated with corresponding body part tracks.
Babcock teaches an interface for controlling a 3D character animation in which animations of different body parts are associated with corresponding body part tracks (fig. 3, [0032] “Returning to FIG. 3, a row may be selected/de-selected to display/hide additional rows. Row 331 is selected as indicated by the filled-in dot on the right side of the row. Underneath row 331 is row 332, which is associated with a part of character 400, specifically the jaw 410. Underneath row 332 are two rows associated with the position of the jaw 410. Row 333 is associated with the orientation of the jaw (“Jaw Orientation”), as indicated by the curved arrow icon, and row 337 is associated with the position of the jaw (“Jaw Position”), as indicated by the orthogonal arrows.”).
The motivation for modifying the invention of Qian in view of Shafir and Athanasiou with the teachings of Babcock would have been similar to the motivation for modifying Qian with Babcock described in the rejection of claim 2.
Claim(s) 10, 18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Qian (Breaking The Limits of Text-conditioned 3D Motion Synthesis with Elaborative Descriptions) in view of Shafir et al. (Human Motion Diffusion as a Generative Prior) as applied to claims 1, 11, and 19 above, and further in view of Nishimura et al. ("Long-term Motion Generation for Interactive Humanoid Robots using GAN with Convolutional Network", hereinafter "Nishimura").
Regarding claim 10, the combination of Qian in view of Shafir teaches: The one or more processors of claim 1, but does not explicitly teach: wherein the one or more processors are comprised in at least one of:
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for 3D assets;
a system for performing deep learning operations;
a system for performing remote operations;
a system for performing real-time streaming;
a system for generating or presenting one or more of augmented reality content, virtual reality content, or mixed reality content;
a system implemented using an edge device;
a system implemented using a robot;
a system for generating synthetic data;
a system for generating synthetic data using AI;
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center; or
a system implemented at least partially using cloud computing resources.
Nishimura teaches: wherein the processor is comprised in at least one of: a system implemented using a robot (Abstract “In this report, we propose a framework for generating long-term human-like motion based on a deep generative model. Thanks to the network structure, the proposed method allows us generating seemless long-term motions while the model is trained by 4 seconds long short motion samples.”, pg. 375 “…the purpose of this research is to model the non-verbal communication of human during interaction for the application of the motion generation of humanoid robots.”).
Nishimura and the combination of Qian in view of Shafir are both analogous to the claimed invention because they are in the same field of human motion generation using machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the text-based motion generation of Qian in view of Shafir with the teachings of Nishimura to apply it towards controlling a humanoid robot. The motivation would have been to provide a simple, natural interface for a user to control the humanoid robot of Nishimura.
Regarding claims 18 and 20, they are rejected using the same references, rationales, and motivations to combine described in the rejection of claim 10 because their limitations substantially correspond to the limitations of claim 10.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENJAMIN STATZ whose telephone number is (571)272-6654. The examiner can normally be reached Mon-Fri 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at (571)272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BENJAMIN TOM STATZ/Examiner, Art Unit 2611
/TAMMY PAIGE GODDARD/Supervisory Patent Examiner, Art Unit 2611