DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's arguments, see Remarks at page 6, filed 28 January 2026, with respect to the restriction requirement have been fully considered and are persuasive. The requirement is withdrawn. As a result, claims 1-20 are pending and examined on the merits herein.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 11/07/2023 and 01/07/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc. In addition, the form and legal phraseology often used in patent claims, such as "means" and "said," should be avoided.

Claim Objections

Claim 1 is objected to because of the following informalities: Claim 1 recites, "generating an output image, reducing the noise from the input image, based on the one or more attention values." In view of the full disclosure, this limitation should describe that the output image is generated by reducing noise from the input image (or that the output image represents a denoised version of the input image). As currently written, the generating and the reducing appear to be discrete steps. Appropriate correction is required.

Claim 12 recites "claim 8,wherein", which is missing a space after the comma. Appropriate correction is required.

Claim 15 recites, "one or more output images depicting respective de-noised representations of the noisy image". In the event that there is only one output image, it would depict only one de-noised representation of the noisy image. Accordingly, the claim should not require that there are always plural "de-noised representations", and should include the option for one.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2 and 8-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 2 recites: "a first stage of the plurality of stages having a first number of pixels based, at least in part, on a first portion of the input image and a second stage of the plurality of stages having a second number of pixels". As written, it is unclear how "stages" can have a certain number of pixels. Images have pixels, but it does not appear that the 'stages' are images, rendering the claim indefinite. The above-quoted limitations are being interpreted similarly to the analogous limitations in claim 9, wherein a first and second resolution level have a first and second resolution size, respectively.

Claim 8 recites the limitation "the image depicting a de-noised representation of the noisy image". There is insufficient antecedent basis for this limitation in the claim because it is unclear which image is referred to by 'the image'. The claim has both a noisy image and an output image. In view of the full disclosure, 'the image' is being interpreted as the output image. Claims 9-14 are rejected as dependent on claim 8.

Claim 9 recites the limitation "wherein calculating the one or more attention values". There is insufficient antecedent basis for this limitation in the claim. This is being interpreted as "wherein determining the one or more attention scores", as claim 8 recites "determine one or more attention scores".

Claim 16 recites, "image data representing a noisy image", which has improper antecedence. This is being interpreted as the same as the "image data representing a noisy image" introduced in claim 15.

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 16 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 16 is fully embodied in claim 15, on which it depends, without providing a further meaningful limitation. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101.

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of generating a denoised image using mathematical processes, without significantly more. The claim recites: "A computer-implemented method comprising: accessing an input image, the input image having noise; determining one or more time embeddings associated with a resolution level of the input image; calculating one or more attention values using the one or more time embeddings; and generating an output image, reducing the noise from the input image, based on the one or more attention values." The limitations, as drafted, are processes that, under their broadest reasonable interpretation, amount to mathematical calculations. Time embeddings can be determined mathematically, calculating attention values using the time embeddings amounts to performing mathematical operations on vectors, and generating the output image amounts to a weighted sum. The accessing of an input image amounts to insignificant extra-solution activity (data gathering). This judicial exception is not integrated into a practical application. In particular, the claim describes a computer-implemented method. The computer is recited at a high level of generality such that it amounts to any generic computer. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are recited at a high level of generality. It is therefore a judicial exception that is not integrated into a practical application, and does not include additional elements that are sufficient to amount to significantly more than the judicial exception. This claim is not patent eligible.
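For context regarding the characterization of attention as mathematical operations on vectors and a weighted sum, the standard scaled dot-product attention of Vaswani et al. may be written as follows (the notation is illustrative and is not drawn from the claims):

    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( \frac{Q K^{\top}}{\sqrt{d_k}} \right) V

Here Q, K, and V are matrices of query, key, and value vectors and d_k is the key dimension. Each row of the softmax term is a set of attention weights, so the output is a weighted sum of the value vectors, consistent with the characterization above.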
Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of calculating attention values at a plurality of stages, where each stage has a pixel number based on a portion of the input image, which amounts to further mathematical processes. The claim is not patent eligible.

Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of calculating the attention values based on a machine learning model. The machine learning model is recited at a high level of generality such that it amounts to a generic machine learning model, which fails to meaningfully limit the performance of the abstract idea. The claim is not patent eligible.

Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of describing the time embeddings as time steps of a generic machine learning model. The machine learning model is recited at a high level of generality such that it amounts to a generic machine learning model, which fails to meaningfully limit the performance of the abstract idea. The claim is not patent eligible.

Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of outputting an image, which amounts to insignificant extra-solution activity. The claim is not patent eligible.

Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of performing a calculation using a GPU, which is recited at a high level of generality such that it amounts to no more than a generic GPU. The claim is not patent eligible.

Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of calculating attention values using time embeddings (a mathematical process) and self-attention blocks of a diffusion model. The self-attention blocks of a diffusion model are recited at a high level of generality such that they amount to blocks of a generic diffusion model. The claim is not patent eligible.

Claims 8-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a non-transitory computer readable storage medium storing instructions analogous to the method of claims 1-7. The non-transitory computer readable storage medium is recited at a level of generality such that it fails to meaningfully limit the performance of the abstract idea. The claims are not patent eligible.

Claims 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a system with a processor, with elements analogous to the method of claims 1, 3, 4, 6, and 7. The processor is recited at a level of generality such that it fails to meaningfully limit the performance of the abstract idea. The claims are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-7, 8, and 10-19 are rejected under 35 U.S.C. 103 as being unpatentable over Dhariwal (Diffusion Models Beat GANs on Image Synthesis) in view of Bao (All are Worth Words: a ViT Backbone for Score-based Diffusion Models).

Regarding claim 1, Dhariwal teaches "A computer-implemented method comprising: accessing an input image, the input image having noise; determining one or more time embeddings associated with a resolution level of the input image;" (Dhariwal, Algorithms 1 and 2 each show input of a noisy image x_T and output of a denoised image x_0. Also, alternatively, note that the algorithms disclose a looping process wherein each x_t, for a plurality of timesteps, represents an image that could be considered an input image; see the illustrative sketch following this citation. Tables 3 and 5 show FID (Fréchet Inception Distance), which indicates the quality of de-noising an initial noisy image in the context of the paper. Section 3.1 further describes, "We also experiment with a layer [43] that we refer to as adaptive group normalization (AdaGN), which incorporates the timestep and class embedding into each residual block after a group normalization operation [69], similar to adaptive instance norm [27] and FiLM [48]. We define this layer as AdaGN(h, y) = y_s GroupNorm(h) + y_b, where h is the intermediate activations of the residual block following the first convolution, and y = [y_s, y_b] is obtained from a linear projection of the timestep and class embedding. We had already seen AdaGN improve our earliest diffusion models, and so had it included by default in all our runs. In Table 3, we explicitly ablate this choice, and find that the adaptive group normalization layer indeed improved FID. Both models use 128 base channels and 2 residual blocks per resolution, multi-resolution attention with 64 channels per head, and BigGAN up/downsampling, and were trained for 700K iterations. In the rest of the paper, we use this final improved model architecture as our default: variable width with 2 residual blocks per resolution, multiple heads with 64 channels per head, attention at 32, 16 and 8 resolutions, BigGAN residual blocks for up and downsampling, and adaptive group normalization for injecting timestep and class embeddings into residual blocks." Note that timestep embeddings are injected into each residual block, and there are two residual blocks per resolution; therefore all timesteps are associated with a resolution.)
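For illustration of the looping denoising process described in the citation above, a minimal sketch of a DDPM-style reverse-diffusion sampling loop (in the manner of Ho et al.'s Algorithm 2, on which Dhariwal builds) follows in Python. This is a simplified sketch, not Dhariwal's released code; the noise-prediction network "model", the beta schedule, and all variable names are illustrative assumptions:

    import torch

    @torch.no_grad()
    def ddpm_sample(model, x_T, betas):
        """Reverse diffusion: start from a noisy image x_T and iteratively
        denoise toward x_0 (cf. Algorithm 2 of Ho et al., used by Dhariwal)."""
        alphas = 1.0 - betas
        alpha_bars = torch.cumprod(alphas, dim=0)
        x_t = x_T
        for t in reversed(range(len(betas))):
            # The integer timestep t is what gets embedded and injected into
            # the network (e.g., via AdaGN in Dhariwal).
            t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)
            eps = model(x_t, t_batch)  # predicted noise at step t
            mean = (x_t - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
            if t > 0:
                # Each intermediate x_t is itself a (progressively less) noisy image.
                x_t = mean + torch.sqrt(betas[t]) * torch.randn_like(x_t)
            else:
                x_t = mean  # final denoised output x_0
        return x_t

Note that the loop starts from the noisy input x_T and that each intermediate x_t is itself a noisy image, consistent with the alternative mapping above.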
While Dhariwal discloses calculating attention values at different resolutions (Dhariwal, Section 3.1, Paragraphs 2-3: "We had already seen AdaGN improve our earliest diffusion models, and so had it included by default in all our runs. In Table 3, we explicitly ablate this choice, and find that the adaptive group normalization layer indeed improved FID. Both models use 128 base channels and 2 residual blocks per resolution, multi-resolution attention with 64 channels per head, and BigGAN up/downsampling, and were trained for 700K iterations. In the rest of the paper, we use this final improved model architecture as our default: variable width with 2 residual blocks per resolution, multiple heads with 64 channels per head, attention at 32, 16 and 8 resolutions, BigGAN residual blocks for up and downsampling, and adaptive group normalization for injecting timestep and class embeddings into residual blocks."), it does not disclose "calculating one or more attention values using the one or more time embeddings;"

Bao teaches "calculating one or more attention values using the one or more time embeddings;" (Bao, Figure 1 and Section 2, Paragraph 1: "We first attempt to train a diffusion model using a vanilla ViT [3] on CIFAR10. For simplicity, we treat everything including the time embedding, label embedding and patches of the noisy image as tokens. With carefully tuned hyperparameters, a 13-layer ViT of size 41M achieves a FID 5.97, which is significantly better than 20.20 of the prior ViT-based diffusion models [18]. We conjecture that this is mainly because our model is larger." Figure 1, right, shows that embeddings (i.e., time embeddings) are used to calculate the multi-head attention; see the illustrative sketch below.)
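For illustration of the Bao teaching cited above, in which the time embedding is treated as a token so that multi-head attention values are calculated using it, a minimal sketch follows. This is a simplified sketch under assumed dimensions, not Bao's implementation; in particular, real implementations typically use sinusoidal timestep embeddings rather than the simple linear projection assumed here:

    import torch
    import torch.nn as nn

    class TimeTokenAttention(nn.Module):
        """Simplified illustration of Bao's approach: the time embedding is
        treated as one more token alongside the image patch tokens, so the
        multi-head self-attention values are calculated using it directly."""
        def __init__(self, dim=256, num_heads=4):
            super().__init__()
            self.time_mlp = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, patch_tokens, t):
            # patch_tokens: (batch, num_patches, dim); t: (batch,) timestep indices
            time_token = self.time_mlp(t.float().unsqueeze(-1)).unsqueeze(1)  # (batch, 1, dim)
            tokens = torch.cat([time_token, patch_tokens], dim=1)  # prepend the time token
            # Attention weights depend on the time embedding, since it is a token.
            out, attn_weights = self.attn(tokens, tokens, tokens)
            return out[:, 1:], attn_weights  # drop the time token from the output

Because the time token participates in the softmax over all tokens, the attention weights for every patch token depend on the time embedding, which corresponds to "calculating one or more attention values using the one or more time embeddings."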
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to incorporate the use of time embeddings for multi-head attention calculation, taught by Bao, into the attention calculation of Dhariwal. The motivation for doing so would have been to improve the overall denoising by calculating time-contextualized attention values. Diffusion models for denoising depend on the time step; for example, each time step is associated with a different level of noise. Computing attention weights based on the time step thus enables improved denoising by factoring in the respective stage. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Dhariwal with the above teaching of Bao to fully disclose "calculating one or more attention values using the one or more time embeddings;"

Dhariwal in view of Bao further discloses "and generating an output image, reducing the noise from the input image, based on the one or more attention values." (Dhariwal, Algorithms 1 and 2 each show input of a noisy image x_T and output of a denoised image x_0. Also, alternatively, note that the algorithms disclose a looping process wherein each iteration outputs an image that could also be mapped to output images. Tables 3 and 5 show FID (Fréchet Inception Distance), which indicates the quality of an output denoised image. Figure 1 and its caption show output denoised images generated using the diffusion model; similarly, see Figure 6.)

Regarding claim 3, Dhariwal in view of Bao teach "The computer-implemented method of claim 1," "wherein calculating the one or more attention values comprises using one or more machine learning models." (Dhariwal, Section 3.1, Paragraphs 2-3: "We had already seen AdaGN improve our earliest diffusion models, and so had it included by default in all our runs. In Table 3, we explicitly ablate this choice, and find that the adaptive group normalization layer indeed improved FID. Both models use 128 base channels and 2 residual blocks per resolution, multi-resolution attention with 64 channels per head, and BigGAN up/downsampling, and were trained for 700K iterations. In the rest of the paper, we use this final improved model architecture as our default: variable width with 2 residual blocks per resolution, multiple heads with 64 channels per head, attention at 32, 16 and 8 resolutions, BigGAN residual blocks for up and downsampling, and adaptive group normalization for injecting timestep and class embeddings into residual blocks.")

Regarding claim 4, Dhariwal in view of Bao teach "The computer-implemented method of claim 1," "wherein the one or more time embeddings represent a time step of one or more layers of a machine learning model." (Dhariwal, Section 3.1, Paragraphs 1-3: "We also experiment with a layer [43] that we refer to as adaptive group normalization (AdaGN), which incorporates the timestep and class embedding into each residual block after a group normalization operation [69], similar to adaptive instance norm [27] and FiLM [48]. We define this layer as AdaGN(h, y) = y_s GroupNorm(h) + y_b, where h is the intermediate activations of the residual block following the first convolution, and y = [y_s, y_b] is obtained from a linear projection of the timestep and class embedding.
We had already seen AdaGN improve our earliest diffusion models, and so had it included by default in all our runs. In Table 3, we explicitly ablate this choice, and find that the adaptive group normalization layer indeed improved FID. Both models use 128 base channels and 2 residual blocks per resolution, multi-resolution attention with 64 channels per head, and BigGAN up/downsampling, and were trained for 700K iterations. In the rest of the paper, we use this final improved model architecture as our default: variable width with 2 residual blocks per resolution, multiple heads with 64 channels per head, attention at 32, 16 and 8 resolutions, BigGAN residual blocks for up and downsampling, and adaptive group normalization for injecting timestep and class embeddings into residual blocks." Accordingly, the timestep embeddings represent a timestep of the residual block, which is a layer of a machine learning model.)

Regarding claim 5, Dhariwal in view of Bao teach "The computer-implemented method of claim 1," "further comprising, causing the output image to be presented." (Dhariwal, Figures 1 and 6 (middle).)

Regarding claim 6, Dhariwal in view of Bao teach "The computer-implemented method of claim 1," "wherein the one or more attention values are calculated using one or more graphics processing units (GPUs)." (Dhariwal, Section A.1 discloses the use of the NVIDIA Tesla V100 GPU.)

Regarding claim 7, Dhariwal in view of Bao teach "The computer-implemented method of claim 1," "wherein the one or more attention values are generated using one or more self-attention blocks of a diffusion model and the one or more time embeddings." (Dhariwal, Section 3.1, Paragraph 3: "In the rest of the paper, we use this final improved model architecture as our default: variable width with 2 residual blocks per resolution, multiple heads with 64 channels per head, attention at 32, 16 and 8 resolutions, BigGAN residual blocks for up and downsampling, and adaptive group normalization for injecting timestep and class embeddings into residual blocks."; Bao, Figure 1 and Section 2, Paragraph 1: "We first attempt to train a diffusion model using a vanilla ViT [3] on CIFAR10. For simplicity, we treat everything including the time embedding, label embedding and patches of the noisy image as tokens. With carefully tuned hyperparameters, a 13-layer ViT of size 41M achieves a FID 5.97, which is significantly better than 20.20 of the prior ViT-based diffusion models [18]. We conjecture that this is mainly because our model is larger." Figure 1, right, shows that embeddings (i.e., time embeddings) are used to calculate the multi-head attention. Note that this teaching was incorporated with rationale and motivation in the rejection of claim 1. As Dhariwal and Bao are combined in the rejection of claim 1, Dhariwal teaches self-attention blocks (at the 32, 16, and 8 resolutions) of a diffusion model where attention values are generated, and Bao teaches the use of time embeddings in this attention value generation.)

Regarding claims 8 and 10-14, these claims recite a non-transitory computer readable storage medium storing thereon executable instructions corresponding to the steps recited in claims 1 and 3-7. Therefore, the recited programming instructions of these claims are mapped to the analogous steps in the corresponding method claims. Additionally, the rationale and motivation to combine the Dhariwal and Bao references apply here.
Finally, Dhariwal in view of Bao discloses a non-transitory computer readable storage medium storing thereon executable instructions (Dhariwal, the abstract discloses that the programming instructions are released at GitHub. Storage on GitHub servers amounts to the storage of the instructions on a non-transitory computer readable storage medium.)

Regarding claims 15-20, these claims recite a system comprising a processor with elements corresponding to the steps recited in claims 1, 3, 4, 6, and 7. Therefore, the recited elements of these claims are mapped to the analogous steps in the corresponding method claims. Additionally, the rationale and motivation to combine the Dhariwal and Bao references apply here. Finally, Dhariwal in view of Bao discloses a system comprising a processor (Dhariwal, Section A.1 discloses the use of the NVIDIA Tesla V100 GPU.)

Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Dhariwal in view of Bao, further in view of Zhang (VSA: Learning Varied-Size Window Attention in Vision Transformers).

Regarding claim 2, Dhariwal in view of Bao teach "The computer-implemented method of claim 1,". While Dhariwal in view of Bao disclose "wherein calculating the one or more attention values comprise determining attention scores of a plurality of stages," wherein a first and second stage have a first and second number of pixels, respectively (Dhariwal, Section 3.1, Paragraphs 2-3: "We had already seen AdaGN improve our earliest diffusion models, and so had it included by default in all our runs. In Table 3, we explicitly ablate this choice, and find that the adaptive group normalization layer indeed improved FID. Both models use 128 base channels and 2 residual blocks per resolution, multi-resolution attention with 64 channels per head, and BigGAN up/downsampling, and were trained for 700K iterations. In the rest of the paper, we use this final improved model architecture as our default: variable width with 2 residual blocks per resolution, multiple heads with 64 channels per head, attention at 32, 16 and 8 resolutions, BigGAN residual blocks for up and downsampling, and adaptive group normalization for injecting timestep and class embeddings into residual blocks." Note that attention is calculated at multiple stages, each stage having a different number of pixels (the 32, 16, and 8 resolutions).), Dhariwal in view of Bao do not expressly disclose that the numbers of pixels of each stage are based in part on respective portions of the input image.

Zhang teaches the numbers of pixels at each stage of a self-attention model being based on respective portions of the input image (Zhang, Page 3, Paragraph 1, Figure 1b, and Figure 3b: "To this end, we propose a novel Varied-Size Window Attention (VSA) mechanism to learn adaptive window configurations from data. Different from the previous window-based transformers where query, key, and value tokens are all sampled from the same window as shown in Figure 1(a), VSA employs a window regression module to predict the size and location of the target window based on the tokens within each default window. Then, the key and values tokens are sampled from the target window. By adopting VSA independently for each attention head, it enables the attention layers to model long-term dependencies, capture rich context from diverse windows, and promote information exchange among overlapped windows, as illustrated in Figure 1(b).
VSA is an easy-to-implement module that can replace the window attention in state-of-the-art representative models with minor modifications and negligible extra computational cost while improving their performance by a large margin, e.g., 1.1% for Swin-T on ImageNet classification. In addition, the performance gain increases when using larger images for training and test, as shown in Figure 2. With the larger images as input, Swin-T with predefined window sizes cannot adapt to large objects well, and the improvement brought by enlarging image sizes is marginal, i.e., a gain of 0.3% from 224 × 224 to 480 × 480. In contrast, the performance gain of VSA over Swin-T increases significantly from 1.1% to 1.9%, owing to the varied-size window attention. Besides, as VSA can effectively promote information exchange across overlapped windows via token sampling, it does not need the shifted windows mechanism in Swin." As shown in Figure 3b, the window size (number of pixels) varies based on respective portions of the input image.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to incorporate the target-window-based attention calculation of Zhang into the self-attention stages of Dhariwal in view of Bao. The motivation for doing so would have been to generate more accurate attention values that are adaptive to image regional context. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Dhariwal in view of Bao with the above additional teaching of Zhang to fully disclose "wherein calculating the one or more attention values comprise determining attention scores of a plurality of stages, a first stage of the plurality of stages having a first number of pixels based, at least in part, on a first portion of the input image and a second stage of the plurality of stages having a second number of pixels based, at least in part, on a second portion of the input image." An illustrative sketch of such varied-size window attention appears after the discussion of claim 9 below.

Regarding claim 9, this claim recites a non-transitory computer readable storage medium storing thereon executable instructions corresponding to the steps recited in claim 2. Therefore, the recited programming instructions of this claim are mapped to the analogous steps in the corresponding method claim. Note that while there are minor differences in wording between claims 2 and 9, such as 'stages' vs. 'resolution levels', these differences do not change the meaning of the analogous limitations in any way that prevents the rejection of claim 2 from applying directly to the rejection of claim 9. Additionally, the rationale and motivation to combine the Dhariwal, Bao, and Zhang references apply here. Finally, Dhariwal in view of Bao, further in view of Zhang, discloses a non-transitory computer readable storage medium storing thereon executable instructions (Dhariwal, the abstract discloses that the programming instructions are released at GitHub. Storage on GitHub servers amounts to the storage of the instructions on a non-transitory computer readable storage medium.)
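For illustration of the Zhang teaching applied above, a greatly simplified, single-head sketch of varied-size window attention follows. This is not Zhang's implementation: the actual VSA predicts both the size and the location of a target window on the full feature map, whereas this sketch only resamples keys and values within each default window by a content-predicted scale, and all module and variable names are illustrative assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VariedSizeWindowAttention(nn.Module):
        """Simplified sketch of the VSA idea: attention is computed per window
        ("stage" portion of the image), and the effective key/value window
        size is predicted from the content of that portion."""
        def __init__(self, dim=32, win=8):
            super().__init__()
            self.win = win
            self.qkv = nn.Linear(dim, 3 * dim)
            # Window regression module: predicts a positive scale per window.
            self.scale_head = nn.Sequential(nn.Linear(dim, 1), nn.Softplus())

        def forward(self, x):
            # x: (B, H, W, C) feature map; H and W divisible by the window size.
            B, H, W, C = x.shape
            w = self.win
            # Partition into non-overlapping default windows: (N, w*w, C).
            xw = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
            xw = xw.reshape(-1, w * w, C)
            q, k, v = self.qkv(xw).chunk(3, dim=-1)
            # Predict a per-window scale (>= 1) from that portion of the image.
            scale = 1.0 + self.scale_head(xw.mean(dim=1))          # (N, 1)
            # Resample keys/values over a scaled window so the number of pixels
            # feeding each window's attention depends on its image portion.
            kv = torch.cat([k, v], dim=-1).transpose(1, 2).reshape(-1, 2 * C, w, w)
            theta = torch.cat([torch.diag_embed(scale.expand(-1, 2)),
                               torch.zeros(kv.shape[0], 2, 1, device=kv.device)], dim=-1)
            grid = F.affine_grid(theta, list(kv.shape), align_corners=False)
            kv = F.grid_sample(kv, grid, align_corners=False)
            k2, v2 = kv.reshape(-1, 2 * C, w * w).transpose(1, 2).chunk(2, dim=-1)
            attn = torch.softmax(q @ k2.transpose(1, 2) / C ** 0.5, dim=-1)
            return attn @ v2                                       # (N, w*w, C)

The point of the sketch is that each window (stage) attends over a number of pixels that depends, through the predicted scale, on the corresponding portion of the input image.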
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Wu (CN 115222630 A) discloses an image denoising process using a diffusion model. Gong (US 20230121890 A1) teaches a method for real-time image denoising that is performed using a U-Net model. Liu (Swin Transformer: Hierarchical Vision Transformer using Shifted Windows) teaches performing self-attention on non-overlapping windows of an input image while allowing cross-window connection.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON JOSEPH SORRIN, whose telephone number is (703) 756-1565. The examiner can normally be reached Monday - Friday, 9am - 5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sumati Lefkowitz, can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON JOSEPH SORRIN/
Examiner, Art Unit 2672

/SUMATI LEFKOWITZ/
Supervisory Patent Examiner, Art Unit 2672