Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant's amendments filed December 8, 2025 have been entered. Applicant's amendments have overcome the objection to the specification set forth in the previous non-final Office action mailed September 25, 2025; the objection is accordingly withdrawn. Applicant's amendments to the claims have overcome the previously set forth objection to claim 16; that objection is accordingly withdrawn. Applicant's amendments to the claims have overcome all previously set forth 35 U.S.C. 112(b) rejections, which are accordingly withdrawn.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-5, 10, 13, 15, 16, 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Niklaus, Simon, Long Mai, and Feng Liu. "Video frame interpolation via adaptive separable convolution." Proceedings of the IEEE international conference on computer vision. 2017. (hereinafter "Niklaus") in view of Teramura (EP 4093025) and Li X, Zhang B, Liao J, Sander PV. Deep sketch-guided cartoon video inbetweening. IEEE Transactions on Visualization and Computer Graphics. 2021 Jan 5;28(8):2938-52. (hereinafter "Li").
Regarding claim 1, Niklaus teaches a computer-implemented method for rendering animations, the method comprising:
receiving at least one input, wherein each of the at least one input comprises an image dataset (page 2, section 3, paragraphs 1 & 2 – video frames);
selecting a set of keyframes associated with the received at least one input, wherein a keyframe comprises predetermined values of a set of rendering parameters (page 2, section 3, paragraph 2 first sentence – two input video frames, page 4 section “data augmentation”);
generating intermediary frames in a temporal sequence between two consecutive ones of the keyframes (page 2, section 3, paragraph 2, first sentence: “interpolate a frame temporally in the middle”), wherein the consecutive keyframes are consecutive according to a temporal ordering of the keyframes within the selected set of keyframes (page 8, left-hand column, last paragraph - right-hand column first paragraph), wherein generating the intermediary frames is based on optimizing a perceptual metric associated with the selected set of keyframes and the generated intermediary frames (page 3 left column, page 4, left column lines 12-20, equation 3), and wherein optimizing the perceptual metric comprises optimizing the perceptual metric as a function of values of the set of rendering parameters associated with each of the intermediary frames (page 3 left column, last sentence of section 3; page 4, left column lines 12-20, section “data augmentation”, equation 3); and
Niklaus describes a method of interpolating intermediate frames given input frames. This method includes optimizing a perceptual metric, using input video frames which include the parameters used in rendering. Niklaus fails to teach rendering animations of medical images; wherein the at least one input comprises a medical image dataset; and rendering an animation using the generated intermediary frames and the selected set of keyframes.
However, Teramura teaches rendering animations of medical images (paragraph [0037] - a moving image is analogous to an animation), and at least one input comprising a medical image dataset (paragraph [0024]). Teramura is considered analogous to the claimed invention as it is in the same field of medical imaging and image processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Teramura with Niklaus to implement the animation methodology of Niklaus in a medical imaging context to help improve the visual quality of intermediate frame interpolation.
Niklaus in view of Teramura fails to teach rendering an animation using the generated intermediary frames and the selected set of keyframes.
However, Li teaches rendering an animation using the generated intermediary frames and the selected set of keyframes (Figs 1 & 2, section 5.1, section 3.4 – “generated video” is a rendered animation).
Li is considered analogous to the claimed invention as it is in the same field of animation frame synthesis. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Li with Niklaus in view of Teramura to render a complete animation from the generated intermediary frames and the selected set of keyframes.
Regarding claim 2, Niklaus in view of Teramura and Li teaches the method according to claim 1. Niklaus further teaches wherein the perceptual metric is selected from the group consisting of: perceptual hash, pHash; structural similarity; visual entropy; and blind/referenceless image spatial quality evaluator, BRISQUE (page 4, col. 1, lines 18-20; SSIM is a structural similarity metric).
Regarding claim 3, Niklaus in view of Teramura and Li teaches the method according to claim 1. Niklaus further teaches wherein the at least one input comprises a two-dimensional image dataset (page 2, section 3, paragraph 2, first sentence - a video frame is two-dimensional). Niklaus fails to teach a two-dimensional medical image dataset.
However, Teramura teaches a two-dimensional medical image dataset (paragraph [0029]).
Regarding claim 4, Niklaus in view of Teramura and Li teaches the method according to claim 1. Teramura further teaches wherein the medical image dataset comprised in the at least one input is received from a medical scanner (paragraph [0037]).
Regarding claim 5, Niklaus in view of Teramura and Li teaches the method according to claim 1. Teramura further teaches wherein the medical image dataset comprises at least two different medical image datasets obtained from at least two different medical scanners (paragraph [0037]).
Regarding claim 10, Niklaus in view of Teramura and Li teaches the method according to claim 1. Niklaus further teaches wherein a length of a time interval between consecutive intermediary frames and/or an intermediary frame rate is constant between two consecutive keyframes (page 8, left-hand column, last paragraph - right-hand column first paragraph).
Regarding claim 13, Niklaus in view of Teramura and Li teaches the method according to claim 1. Niklaus further teaches wherein the input comprises one or more animations, and wherein the one or more animations are comprised in the rendered animation (page 2, section 3, second paragraph, first sentence & figure 2).
Regarding claim 15, Niklaus in view of Teramura and Li teaches the method according to claim 1. Niklaus further teaches wherein the method is performed by a neural network and/or using artificial intelligence (page 3, figure 2).
Apparatus claim(s) 16 is/are drawn to the method of using as claimed in claim(s) 1. Therefore, the apparatus claim(s) 16 correspond(s) to the method claim(s) 1, and is/are rejected for the same reasons of obviousness as used above.
CRM claim(s) 19 is/are drawn to the method of using as claimed in claim(s) 1. Therefore, the CRM claim(s) 19 correspond(s) to the method claim(s) 1, and is/are rejected for the same reasons of obviousness as used above.
Claim(s) 6, 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Niklaus in view of Teramura and Li and in further view of S. Weiss and R. Westermann, "Differentiable Direct Volume Rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 1, pp. 562-572, 2021 (hereinafter "Weiss").
Regarding claim 6, Niklaus in view of Teramura and Li teaches the method according to claim 1. Niklaus in view of Teramura and Li fails to teach wherein the set of rendering parameters comprises at least one rendering parameter selected from the group consisting of: camera parameter; clipping parameter; classification parameter; and lighting preset parameter.
However, Weiss teaches wherein the set of rendering parameters comprises at least one rendering parameter selected from the group consisting of: camera parameter; clipping parameter; classification parameter; and lighting preset parameter (section 1, paragraph 2 – “For surface rendering, one objective is on the optimization of scene parameters like material properties, lighting conditions, or even geometric shape”; section 5.1, paragraph 1 – “The camera is parameterized by longitude and latitude. AD is used to optimize the camera parameters to determine the viewpoint that maximized the selected cost function.”). Weiss describes surface rendering with parameters such as lighting parameters, which are analogous to lighting preset parameters. Weiss also suggests the use of camera parameters. Weiss is considered analogous to the claimed invention as it is in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the rendering parameter teachings of Weiss with the animation system of Niklaus in view of Teramura and Li to determine optimal parameters and improve the generation of synthetic images.
Regarding claim 14, Niklaus in view of Teramura and Li teaches the method according to claim 1. Niklaus in view of Teramura and Li fails to teach wherein rendering comprises differentiable rendering.
However, Weiss teaches wherein rendering comprises differentiable rendering (title, introduction, conclusion). The motivation to combine Weiss with Niklaus in view of Teramura and Li would have been the same as in claim 6.
Claim(s) 7, 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Niklaus in view of Teramura and Li and in further view of Grabli (US 2011/0205233 A1).
Regarding claim 7, Niklaus in view of Teramura and Li teaches the method according to claim 1. Niklaus in view of Teramura and Li fails to teach temporally ordering the keyframes within the selected set of keyframes.
However, Grabli teaches temporally ordering the keyframes within the selected set of keyframes (paragraph [0009]). Grabli describes ordering strokes of a stroke-based animation. This process partially orders strokes for each of the frames and then based on this selects a “temporally coherent” sequence of frames which is an ordered set of frames and analogous to the temporally ordered keyframes described in the limitation.
Grabli is considered analogous to the claimed invention as it is in the same field of image processing and animation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Grabli with Niklaus in view of Teramura and Li to implement a method of temporal ordering and improve the cohesion of the animation.
Regarding claim 8, Niklaus in view of Teramura and Li and in further view of Grabli teaches the method according to claim 7. Niklaus further teaches a perceptual metric (page 7, line 1 – “loss function that optimizes for perceptual quality”). Niklaus in view of Teramura and Li fails to teach that the temporal ordering is based on optimizing a perceptual metric.
However, Grabli further teaches wherein the temporal ordering is based on optimizing a metric (paragraphs [0005], [0006]). Grabli describes using geometric considerations in ordering. These geometric considerations can be considered metrics. The motivation to combine is the same as in claim 7.
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Niklaus in view of Teramura, Li and Grabli and in further view of Tasse (US 2023/0290036 A1) and Knoll (US 9,734,615 B1).
Regarding claim 9, Niklaus in view of Teramura, Li and Grabli teaches the method according to claim 8. Teramura further teaches a value that exceeds a predetermined threshold value (paragraph [0068]).
Niklaus in view of Teramura, Li and Grabli fails to teach wherein an initial temporal ordering is modified when a value of the perceptual metric for the temporal ordering is indicative of a perceptual dissimilarity.
However, Tasse teaches a value of the perceptual metric exceeding a predetermined threshold value indicative of a perceptual dissimilarity (paragraph [0058]). Tasse describes determining the differences present in a 3D mesh and whether or not the dissimilarity meets a threshold to determine which keyframes are to be added to a list of visible keyframes. This is then used to determine texture information about the mesh object. This process is analogous to determining whether a value of a perceptual metric exceeds a predetermined threshold indicative of perceptual dissimilarity. Tasse is considered analogous to the claimed invention as it is in the same field of computer graphics. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Tasse with Niklaus in view of Teramura, Li and Grabli in order to implement a determination of dissimilarity and ensure that the changes sufficiently warrant an update, which can be computationally expensive (paragraph [0059]).
Niklaus in view of Teramura, Li and Grabli and in further view of Tasse fails to teach that an initial temporal ordering is modified.
However, Knoll teaches an initial temporal ordering being modified (col. 1, lines 25-30). Knoll describes an editing program which can modify clips along a timeline to create a time-ordered sequence of clips. This is analogous to modifying an initial temporal ordering. Knoll is considered analogous to the claimed invention as it is in the same field of image processing and animation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Knoll with Niklaus in view of Teramura, Li and Grabli and in further view of Tasse to incorporate modification of the temporal ordering and reduce processing requirements (col. 8, lines 43-45).
Claim(s) 11, 12, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Niklaus in view of Teramura and Li and in further view of Tasse and ANONYMOUS, “Increase Space Between Keyframes – Artwork/Animations – Blender Artists Community”, 1 October 2012 (hereinafter "Blender").
Regarding claim 11, Niklaus in view of Teramura and Li teaches the method according to claim 1. Niklaus further teaches one or more intermediary frames associated with the at least one pair of consecutive keyframes (page 2, section 3, paragraph 2, first sentence: “interpolate a frame temporally in the middle”).
Niklaus in view of Teramura and Li fails to teach wherein generating the intermediary frames in the temporal sequence further comprises extending a length of a time interval between at least one pair of consecutive keyframes within the selected set of keyframes and expanding the selected set of keyframes by promoting one or more intermediary frames to keyframes, which are added to the selected set of keyframes to form an expanded set of keyframes, and wherein generating the intermediary frames in the temporal sequence is repeated for the expanded set of keyframes.
However, Tasse teaches expanding the selected set of keyframes by promoting one or more intermediary frames to keyframes, which are added to the selected set of keyframes to form an expanded set of keyframes (paragraphs [0053]-[0055]). Tasse describes adding keyframes to a keyframe queue (analogous to the set of keyframes) and in turn expanding the set of keyframes. While Tasse does not specifically describe “promoting” an intermediary frame, the process serves the same function of expanding the keyframe set and incorporating more keyframes to “cover as much of the physical scene depicted as possible”. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine Tasse with Niklaus in view of Teramura and Li to improve the keyframe animation's representation of a scene.
Niklaus in view of Teramura and Li and in further view of Tasse fails to teach extending a length of a time interval between at least one pair of consecutive keyframes within the selected set of keyframes and wherein generating the intermediary frames in the temporal sequence is repeated for the expanded set of keyframes.
However, Blender teaches extending a length of a time interval between at least one pair of consecutive keyframes within the selected set of keyframes and wherein generating the intermediary frames in the temporal sequence is repeated for the expanded set of keyframes (full page). The Blender reference is an internet article helping a user implement the functionality of increasing space between keyframes in the Blender application. The response from another user shows that in 2012 there existed teachings of extending a length of a time interval between keyframes and further interpolating if needed. Blender is considered analogous to the claimed invention as it is in the same field of computer graphics and keyframe editing and interpolation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Blender with Niklaus in view of Teramura and Li and in further view of Tasse to extend an interval of time and continue interpolation to help create smooth transitions.
Regarding claim 12, Niklaus in view of Teramura and Li and in further view of Tasse and Blender teaches the method according to claim 11. Tasse further teaches wherein the promoting of one or more intermediary frames to keyframes is performed when the perceptual metric between consecutive intermediary frames comprising the to-be-promoted one or more intermediary frames exceeds a predetermined threshold indicative of a perceptual dissimilarity (paragraph [0058]).
Claim(s) 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Niklaus in view of Teramura and Li and in further view of Weiss and Grabli.
Regarding claim 17, Niklaus in view of Teramura and Li teaches the system according to claim 16. Niklaus further teaches a perceptual metric (page 7, line 1 – “loss function that optimizes for perceptual quality”). Niklaus in view of Teramura and Li fails to teach wherein the set of rendering parameters comprises at least one rendering parameter selected from the group consisting of: camera parameter; clipping parameter; classification parameter; and lighting preset parameter; and wherein the computer is configured to temporally order the keyframes within the selected set of keyframes based on optimization of the perceptual metric.
However, Weiss teaches wherein the set of rendering parameters comprises at least one rendering parameter selected from the group consisting of: camera parameter; clipping parameter; classification parameter; and lighting preset parameter (section 1, paragraph 2 – “For surface rendering, one objective is on the optimization of scene parameters like material properties, lighting conditions, or even geometric shape”; section 5.1, paragraph 1 – “The camera is parameterized by longitude and latitude. AD is used to optimize the camera parameters to determine the viewpoint that maximized the selected cost function.”). Weiss describes surface rendering with parameters such as lighting parameters, which are analogous to lighting preset parameters. Weiss also suggests the use of camera parameters. Weiss is considered analogous to the claimed invention as it is in the same field of image processing. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the rendering parameter teachings of Weiss with the animation system of Niklaus in view of Teramura and Li to determine optimal parameters and improve the generation of synthetic images.
Niklaus in view of Teramura, Li and Weiss fails to teach wherein the computer is configured to temporally order the keyframes within the selected set of keyframes based on optimization of the perceptual metric.
However, Grabli teaches wherein the computer is configured to temporally order the keyframes within the selected set of keyframes based on optimization of the metric (paragraphs [0005], [0006], [0009]). Grabli describes ordering strokes of a stroke-based animation. This process partially orders strokes for each of the frames and then, based on this, selects a “temporally coherent” sequence of frames, which is an ordered set of frames and analogous to the temporally ordered keyframes described in the limitation. Grabli describes using geometric considerations in ordering. These geometric considerations can be considered metrics.
Grabli is considered analogous to the claimed invention as it is in the same field of image processing and animation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Grabli with Niklaus in view of Teramura and Li to implement a method of temporal ordering and improve the cohesion of the animation.
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Niklaus in view of Teramura, Li, Weiss and Grabli and in further view of Tasse and Blender.
Regarding claim 18, Niklaus in view of Teramura, Li, Weiss and Grabli teaches the system of claim 17. Niklaus further teaches wherein a length of a time interval between consecutive intermediary frames and/or an intermediary frame rate is constant between two consecutive keyframes (page 8, left-hand column, last paragraph - right-hand column first paragraph), and one or more intermediary frames associated with the at least one pair of consecutive keyframes (page 2, section 3, paragraph 2, first sentence: “interpolate a frame temporally in the middle”).
Niklaus in view of Teramura, Li, Weiss and Grabli fails to teach wherein the intermediary frames are generated in the temporal sequence by extension of a length of a time interval between at least one pair of consecutive keyframes within the selected set of keyframes and expansion of the selected set of keyframes by promotion of one or more intermediary frames to keyframes, which are added to the selected set of keyframes to form an expanded set of keyframes, and wherein generation of the intermediary frames in the temporal sequence is repeated for the expanded set of keyframes.
However, Tasse teaches expansion of the selected set of keyframes by promotion of one or more intermediary frames to keyframes, which are added to the selected set of keyframes to form an expanded set of keyframes (paragraphs [0053]-[0055]). Tasse describes adding keyframes to a keyframe queue (analogous to the set of keyframes) and in turn expanding the set of keyframes. While Tasse does not specifically describe “promoting” an intermediary frame, the process serves the same function of expanding the keyframe set and incorporating more keyframes to “cover as much of the physical scene depicted as possible”. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine Tasse with Niklaus in view of Teramura and Li to improve the keyframe animation's representation of a scene.
Niklaus in view of Teramura and Li and in further view of Tasse fails to teach extension of a length of a time interval between at least one pair of consecutive keyframes within the selected set of keyframes and wherein generation of the intermediary frames in the temporal sequence is repeated for the expanded set of keyframes.
However, Blender teaches extension of a length of a time interval between at least one pair of consecutive keyframes within the selected set of keyframes and wherein generation of the intermediary frames in the temporal sequence is repeated for the expanded set of keyframes (full page). The Blender reference is an internet article helping a user implement the functionality of increasing space between keyframes in the Blender application. The response from another user shows that in 2012 there existed teachings of extending a length of a time interval between keyframes and further interpolating if needed. Blender is considered analogous to the claimed invention as it is in the same field of computer graphics and keyframe editing and interpolation. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the teachings of Blender with Niklaus in view of Teramura and Li and in further view of Tasse to extend an interval of time and continue interpolation to help create smooth transitions.
Response to Arguments
Applicant’s arguments, see page 11, lines 11-14, filed 12/08/2025, with respect to the rejection(s) of claim(s) 1 under 35 U.S.C. 103 have been fully considered and are persuasive. Applicant’s assertion that Niklaus synthesizes a single missing frame and does not teach or suggest a rendered animation is persuasive. While the interpolated frame of Niklaus is part of a video (which can be considered an animation), there is no explicit disclosure of rendering an animation. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Li.
Applicant's arguments filed 12/08/2025 have been fully considered but they are not persuasive.
Applicant argues on pages 9 and 10 that a keyframe is not just an image frame. Applicant further specifies “Applicants’ specification explains, for example, that a keyframe stores the rendering-parameter configuration used to produce a rendered view”. Examiner respectfully disagrees with this argument. An image, under broadest reasonable interpretation, can be considered analogous to a keyframe.
Applicant additionally argues that keyframes are different from image frames based on their containing rendering parameters. Applicant argues this on page 10, stating “the Applicants’ specification explicitly defines rendering parameters as camera pose, clipping parameters, classification, lighting presets, transfer functions etc.”. Examiner agrees that the original rejection pointing to pixels as rendering parameters was not clear. However, Niklaus does teach other rendering parameters and their optimization. More specifically, on page 4 Niklaus describes optimizing the size of images and cropping them for training. This image size can be considered a rendering parameter, which is consistent with Applicant’s specification, and because 112(f) is not invoked for the “rendering parameters” limitations, Applicant’s argument relating to the specific rendering parameters described in the specification is irrelevant to the claim rejection.
Applicant further argues on pages 10 and 11 that Niklaus does not teach optimizing as claimed because “Nicklaus uses a L1 or perceptual loss only during neural-networking training, not when generating interpolation results”. Examiner respectfully disagrees; the claim states that generating the intermediary frames is “based on” optimizing perceptual loss. Niklaus generates the frame based on the trained model, which optimizes perceptual loss; this is analogous to being “based on” optimizing perceptual loss.
Applicant argues on pages 11-13 that the motivation to combine Nicklaus and Teramura is not sufficient.
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Niklaus’ frame interpolation to include medical images because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically, Niklaus’ video frame input and Teramura’s medical images producing an animation perform the same general and predictable function, the predictable function being animation. Since each individual element and its function are shown in the prior art, albeit in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the combination itself – that is, the substitution of the video image input with medical image input. Thus, the simple substitution of one known element for another producing a predictable result renders the claim obvious.
Applicant argues that the references “address distinct technologies that operate on different data structure, different input types, and different computational objectives”. Examiner does not find this argument persuasive. Nowhere in the claims is a specific data structure described. Therefore, an image in a medical context is a medical image, and without further specification as to the difference in data structure or input type, the Teramura reference addresses the same type of technology and would be obvious to combine, the motivation being to improve the visual quality of intermediate frame interpolation by substituting video input with medical images.
Applicant argues on page 13 that Niklaus uses “L1 loss, perceptual loss via VGG features and warping losses as part of training a neural network” which “are not pHash, not structural similarity (SSIM), not visual entropy and not BRISQUE” and “never applies any perceptual metric during the interpolation operation itself, nor does it identify or reference any of the metrics as claimed”. This argument is not persuasive. Niklaus directly suggests the use of an SSIM loss function on page 4, column 1, lines 18-20, stating “We tried various loss functions based on different feature extractors, such as SSIM loss”. Hence, under the broadest reasonable interpretation, the prior art reads upon the claims as currently stated.
Applicant argues on pages 13 and 14 that claim 11 describes an automatic lengthening of time between certain keyframes by inserting new keyframes and repeating the interpolation process using the expanded keyframes. However, this is not persuasive because Niklaus and Teramura alone are not being relied upon to reject the limitations of claim 11; rather, it is the combination of Niklaus in view of Teramura and Li and in further view of Tasse and Blender.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Duran (US 2013/0271472 A1).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aidan W McCoy whose telephone number is (571)272-5935. The examiner can normally be reached 8:00 AM-5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at (571)272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AIDAN W MCCOY/Examiner, Art Unit 2611
/TAMMY PAIGE GODDARD/Supervisory Patent Examiner, Art Unit 2611