DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “the device being configured to” in claim 14, lines 1-2.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6, 8, and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Munoz Delgado (US 20210089895 A1) in view of Li et al. (arXiv:2010.05981, 2021), hereinafter Li.
-Regarding claim 1, Munoz Delgado discloses a computer-implemented method for generating training data for machine learning (Abstract; [0040], “providing training sensor data samples of a training dataset, training the neural network”; [0041], “augmented training dataset”, FIG. 1, neural network 106, system 106; [0075]), including self-monitored learning, the method comprising the following steps (FIGS. 1-6): providing input image data including at least two input images different from one another (FIG. 1, image sources 108; [0067]; FIG. 4, input data samples 402); generating counterfactual image data including at least one counterfactual image based on the input image data ([0029]; FIG. 1, system 106; [0064], “a counterfactual generation system”; [0066]; FIG. 2, system 205; FIG. 4, process 403; [0121]);
Munoz Delgado does not disclose generating labeled image details by labeling at least one image detail of a counterfactual image of the at least one counterfactual image and at least one further image detail of another image different therefrom including another counterfactual image of the at least one counterfactual image or an input image of the at least two images; and providing the labeled image details as training data for the machine learning.
In the same field of endeavor, Li teaches a method to train neural networks for object recognition (Li: Abstract; Figures 1-5). Li further teaches generating counterfactual image data including at least one counterfactual image based on the input image data (Li: Figure 2, training images); generating labeled image details by labeling at least one image detail of a counterfactual image of the at least one counterfactual image and at least one further image detail of another image different therefrom including another counterfactual image of the at least one counterfactual image or an input image of the at least two images (Li: Figure 2, Label Assignment & Results; Page 3, 2nd paragraph, “Label assignment”); and providing the labeled image details as training data for the machine learning (Li: Abstract; Figure 2, caption; Page 5, Sec. 3, 1st paragraph; Note: it is known that labels are used for training purposes).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Munoz Delgado with the teaching of Li by labeling counterfactual images in order to provide training data for self-monitored learning.
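For illustration only, and not as a characterization of either reference's actual implementation, the mapped combination — composing a counterfactual image from two different input images and providing labeled image details from two distinct images as training data — may be sketched as follows (all function and variable names are hypothetical placeholders):

```python
import numpy as np

def make_counterfactual(shape_img, texture_img, mask):
    """Compose a counterfactual image: the object region takes its pixels
    from one input image (the texture source), the rest from the other
    (cf. Li, Figure 2: chimpanzee shape with lemon texture)."""
    return np.where(mask[..., None] > 0, texture_img, shape_img)

def label_details(counterfactual, other_image, label_a, label_b, size=64):
    """Label one image detail (a crop) of the counterfactual and one
    further detail of another, different image; the labeled pairs are
    then provided as training data."""
    return [(counterfactual[:size, :size], label_a),
            (other_image[:size, :size], label_b)]

rng = np.random.default_rng(0)
img_a = rng.random((128, 128, 3))   # first input image (shape source)
img_b = rng.random((128, 128, 3))   # second, different input image (texture source)
mask = np.zeros((128, 128))
mask[32:96, 32:96] = 1              # foreground region associated with the object

cf = make_counterfactual(img_a, img_b, mask)
training_data = label_details(cf, img_a, label_a="class_of_a", label_b="class_of_a")
print(len(training_data), training_data[0][0].shape)  # 2 (64, 64, 3)
```

The blend rule and the crop-based "image detail" are assumptions introduced solely to make the claimed sequence of steps concrete.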
-Regarding claim 2, Munoz Delgado in view of Li teaches the method of claim 1.
Munoz Delgado does not disclose wherein the generating of the counterfactual image data includes: extracting at least one image component from a particular input image of the input image data.
In the same field of endeavor, Li teaches a method to train neural networks for object recognition (Li: Abstract; Figures 1-5). Li further teaches generating counterfactual image data including at least one counterfactual image based on the input image data (Li: Figure 2, training images) and extracting at least one image component from a particular input image of the input image data (Li: Figure 2, Chimpanzee shape, lemon texture; Page 4, 1st paragraph, “extracts the shape”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Munoz Delgado with the teaching of Li by labeling counterfactual images in order to provide training data for self-monitored learning.
-Regarding claim 3, Munoz Delgado in view of Li teaches the method of claim 2.
Munoz Delgado does not disclose wherein each of the at least one image component includes at least one of the following elements and/or is associated with one of the following elements: an object shape of an object represented in an input image of the at least two input images and/or a texture of an object represented in an input image of the at least two input images, and/or a background of an input image of the at least two input images.
In the same field of endeavor, Li teaches a method to train neural networks for object recognition (Li: Abstract; Figures 1-5). Li further teaches generating counterfactual image data including at least one counterfactual image based on the input image data (Li: Figure 2, training images) wherein each of the at least one image component includes at least one of the following elements and/or is associated with one of the following elements: an object shape of an object represented in an input image of the at least two input images and/or a texture of an object represented in an input image of the at least two input images, and/or a background of an input image of the at least two input images (Li: Figure 2).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Munoz Delgado with the teaching of Li by labeling counterfactual images in order to provide training data for self-monitored learning.
-Regarding claim 6, Munoz Delgado in view of Li teaches the method of claim 2.
Munoz Delgado does not disclose wherein the extracting of the at least one image component from an input image of the at least two input images includes merging areas of a segmented foreground which are associated with an object of the input image to form a texture map, the at least one image component including a texture.
In the same field of endeavor, Li teaches a method to train neural networks for object recognition (Li: Abstract; Figures 1-5). Li further teaches generating counterfactual image data including at least one counterfactual image based on the input image data (Li: Figure 2, training images) wherein the extracting of the at least one image component from an input image of the at least two input images includes merging areas of a segmented foreground which are associated with an object of the input image to form a texture map, the at least one image component including a texture (Li: Figure 2).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Munoz Delgado with the teaching of Li by labeling counterfactual images in order to provide training data for self-monitored learning.
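A schematic of the recited texture-map formation — merging the areas of a segmented foreground that are associated with one object — is given below; the segmentation labels and region layout are assumed for illustration and do not reproduce either reference:

```python
import numpy as np

def texture_map_from_foreground(image, segmentation, object_ids):
    """Merge the segmented-foreground areas associated with one object
    into a single texture map: pixels in any of the object's areas keep
    their values; everything else is zeroed out."""
    merged = np.isin(segmentation, list(object_ids))  # union of the object's areas
    texture_map = np.zeros_like(image)
    texture_map[merged] = image[merged]               # the texture component
    return texture_map, merged

rng = np.random.default_rng(1)
image = rng.random((8, 8, 3))
segmentation = np.zeros((8, 8), dtype=int)   # 0 = background
segmentation[1:4, 1:4] = 1                   # one foreground area of the object
segmentation[5:7, 5:7] = 2                   # a second area of the same object
texture_map, merged = texture_map_from_foreground(image, segmentation, {1, 2})
print(merged.sum())   # 9 + 4 = 13 merged foreground pixels
```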
-Regarding claim 8, Munoz Delgado in view of Li teaches the method of claim 2. The modification further teaches wherein the generating of the counterfactual image data includes: merging image components, at least two of the image components originating from input image data different from one another, to form the at least one counterfactual image (Munoz Delgado: [0124]; equation (1); See also Li: Figure 2).
-Regarding claim 11, Munoz Delgado discloses a device for generating training data for machine learning, (Abstract; [0040], “providing training sensor data samples of a training dataset, training the neural network”; [0041], “augmented training dataset”, FIG. 1, neural network 106, system 106; [0075]), comprising: at least one processor; at least one memory; and at least one interface (FIG. 1, processor 104, controller 103; [0049]; Note: an interface has to be included in order to take input samples or training data to the controller as shown in FIGS. 1-2, 4 of Munoz Delgado); wherein the device is configured to (FIGS. 1-6): provide input image data including at least two input images different from one another (FIG. 1, image sources 108; [0067]; FIG. 4, input data samples 402); generate counterfactual image data including at least one counterfactual image based on the input image data ([0029]; FIG. 1, system 106; [0064], “a counterfactual generation system”; [0066]; FIG. 2, system 205; FIG. 4, process 403; [0121]);
Munoz Delgado does not disclose generating labeled image details by labeling at least one image detail of a counterfactual image of the at least one counterfactual image and at least one further image detail of another image different therefrom including another counterfactual image of the at least one counterfactual image or an input image of the at least two images; and providing the labeled image details as training data for the machine learning.
In the same field of endeavor, Li teaches a method to train neural networks for object recognition (Li: Abstract; Figures 1-5). Li further teaches generating counterfactual image data including at least one counterfactual image based on the input image data (Li: Figure 2, training images); generating labeled image details by labeling at least one image detail of a counterfactual image of the at least one counterfactual image and at least one further image detail of another image different therefrom including another counterfactual image of the at least one counterfactual image or an input image of the at least two images (Li: Figure 2, Label Assignment & Results; Page 3, 2nd paragraph, “Label assignment”); and providing the labeled image details as training data for the machine learning (Li: Abstract; Figure 2, caption; Page 5, Sec. 3, 1st paragraph; Note: it is known that labels are used for training purposes).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Munoz Delgado with the teaching of Li by labeling counterfactual images in order to provide training data for self-monitored learning.
-Regarding claim 12, Munoz Delgado discloses a non-transitory computer-readable medium on which is stored a computer program including computer-readable instructions (FIG. 1: processor 103, memory 105) for generating training data for machine learning, including self-monitored learning (Abstract; [0040], “providing training sensor data samples of a training dataset, training the neural network”; [0041], “augmented training dataset”, FIG. 1, neural network 106, system 106; [0075]), the instructions, when executed by a computer, causing the computer ([0049]) to perform the following steps (FIGS. 1-6): providing input image data including at least two input images different from one another (FIG. 1, image sources 108; [0067]; FIG. 4, input data samples 402); generating counterfactual image data including at least one counterfactual image based on the input image data ([0029]; FIG. 1, system 106; [0064], “a counterfactual generation system”; [0066]; FIG. 2, system 205; FIG. 4, process 403; [0121]);
Munoz Delgado does not disclose generating labeled image details by labeling at least one image detail of a counterfactual image of the at least one counterfactual image and at least one further image detail of another image different therefrom including another counterfactual image of the at least one counterfactual image or an input image of the at least two images; and providing the labeled image details as training data for the machine learning.
In the same field of endeavor, Li teaches a method to train neural networks for object recognition (Li: Abstract; Figures 1-5). Li further teaches generating counterfactual image data including at least one counterfactual image based on the input image data (Li: Figure 2, training images); generating labeled image details by labeling at least one image detail of a counterfactual image of the at least one counterfactual image and at least one further image detail of another image different therefrom including another counterfactual image of the at least one counterfactual image or an input image of the at least two images (Li: Figure 2, Label Assignment & Results; Page 3, 2nd paragraph, “Label assignment”); and providing the labeled image details as training data for the machine learning (Li: Abstract; Figure 2, caption; Page 5, Sec. 3, 1st paragraph; Note: it is known that labels are used for training purposes).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Munoz Delgado with the teaching of Li by labeling counterfactual images in order to provide training data for self-monitored learning.
-Regarding claim 13, Munoz Delgado discloses a self-monitored learning method, the method comprising: training a neural network using training data, the training data being generated by (Abstract; [0040], “providing training sensor data samples of a training dataset, training the neural network”; [0041], “augmented training dataset”, FIG. 1, neural network 106, system 106; [0075]; FIGS. 1-6): providing input image data including at least two input images different from one another (FIG. 1, image sources 108; [0067]; FIG. 4, input data samples 402); generating counterfactual image data including at least one counterfactual image based on the input image data ([0029]; FIG. 1, system 106; [0064], “a counterfactual generation system”; [0066]; FIG. 2, system 205; FIG. 4, process 403; [0121]);
Munoz Delgado does not disclose generating labeled image details by labeling at least one image detail of a counterfactual image of the at least one counterfactual image and at least one further image detail of another image different therefrom including another counterfactual image of the at least one counterfactual image or an input image of the at least two images; and providing the labeled image details as training data.
In the same field of endeavor, Li teaches a method to train neural networks for object recognition (Li: Abstract; Figures 1-5). Li further teaches generating counterfactual image data including at least one counterfactual image based on the input image data (Li: Figure 2, training images); generating labeled image details by labeling at least one image detail of a counterfactual image of the at least one counterfactual image and at least one further image detail of another image different therefrom including another counterfactual image of the at least one counterfactual image or an input image of the at least two images (Li: Figure 2, Label Assignment & Results; Page 3, 2nd paragraph, “Label assignment”); and providing the labeled image details as training data (Li: Abstract; Figure 2, caption; Page 5, Sec. 3, 1st paragraph; Note: it is known that labels are used for training purposes).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Munoz Delgado with the teaching of Li by labeling counterfactual images in order to provide training data for self-monitored learning.
-Regarding claim 14, Munoz Delgado discloses a device configured to train a neural network, the device being configured to: train the neural network using training data, the training data being generated by (Abstract; [0040], “providing training sensor data samples of a training dataset, training the neural network”; [0041], “augmented training dataset”, FIG. 1, neural network 106, system 106; [0075]; FIGS. 1-6): providing input image data including at least two input images different from one another (FIG. 1, image sources 108; [0067]; FIG. 4, input data samples 402); generating counterfactual image data including at least one counterfactual image based on the input image data ([0029]; FIG. 1, system 106; [0064], “a counterfactual generation system”; [0066]; FIG. 2, system 205; FIG. 4, process 403; [0121]);
Munoz Delgado does not disclose generating labeled image details by labeling at least one image detail of a counterfactual image of the at least one counterfactual image and at least one further image detail of another image different therefrom including another counterfactual image of the at least one counterfactual image or an input image of the at least two images; and providing the labeled image details as training data.
In the same field of endeavor, Li teaches a method to train neural networks for object recognition (Li: Abstract; Figures 1-5). Li further teaches generating counterfactual image data including at least one counterfactual image based on the input image data (Li: Figure 2, training images); generating labeled image details by labeling at least one image detail of a counterfactual image of the at least one counterfactual image and at least one further image detail of another image different therefrom including another counterfactual image of the at least one counterfactual image or an input image of the at least two images (Li: Figure 2, Label Assignment & Results; Page 3, 2nd paragraph, “Label assignment”); and providing the labeled image details as training data (Li: Abstract; Figure 2, caption; Page 5, Sec. 3, 1st paragraph; Note: it is known that labels are used for training purposes).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Munoz Delgado with the teaching of Li by labeling counterfactual images in order to provide training data for self-monitored learning.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Munoz Delgado (US 20210089895 A1) in view of Li et al. (arXiv:2010.05981, 2021), hereinafter Li, and further in view of Qin et al. (Pattern Recognition, Vol. 106, pages 1-15, 2020), hereinafter Qin.
-Regarding claim 4, Munoz Delgado in view of Li teaches the method of claim 2.
Munoz Delgado in view of Li does teach wherein the extracting of the at least one image component from the input image includes extracting an object shape of an object in the input image (Li: Figure 2). Munoz Delgado in view of Li does not teach the extracting being carried out using at least one binary mask having a salience detector, for segmenting a foreground represented in the input image, which is associated with the object represented in the input image.
However, Qin is an analogous art pertinent to the problem to be solved in this application and teaches a method for salient object detection (Qin: Abstract; Figures 1-7). Qin further teaches the extracting being carried out using at least one binary mask having a salience detector, for segmenting a foreground represented in the input image, which is associated with the object represented in the input image (Qin: Figures 5-7; Page 10, 1st Col., 2nd paragraph, “segments all the regions … shape of the target”, 2nd Col., “backgrounds … foreground appearance”; Page 6, 1st Col., Sec 4.2., 1st paragraph, 2nd Col., paragraph (6), “binary mask”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Munoz Delgado in view of Li with the teaching of Qin by using a salience detector in order to more accurately detect object shapes with a faster and smaller model.
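The mask-based extraction for which Qin is relied upon can be illustrated schematically; a thresholded saliency map stands in for the salience detector's output, and the threshold value is an assumption made only for this sketch:

```python
import numpy as np

def binary_mask_from_saliency(saliency, threshold=0.5):
    """Binarize a salience detector's output map, segmenting the
    foreground associated with the object represented in the image."""
    return (saliency >= threshold).astype(np.uint8)

def extract_object_shape(image, mask):
    """Apply the binary mask so that only the object's shape region
    survives; background pixels are suppressed."""
    return image * mask[..., None]

rng = np.random.default_rng(2)
image = rng.random((16, 16, 3))
saliency = np.zeros((16, 16))
saliency[4:12, 4:12] = 0.9        # high saliency over the salient object
mask = binary_mask_from_saliency(saliency)
shape_only = extract_object_shape(image, mask)
print(mask.sum())   # 8 * 8 = 64 foreground pixels
```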
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Munoz Delgado (US 20210089895 A1) in view of Li et al. (arXiv:2010.05981, 2021), hereinafter Li, further in view of Qin et al. (Pattern Recognition, Vol. 106, pages 1-15, 2020), hereinafter Qin, and further in view of Wang et al. (IEEE Transactions on Image Processing, Vol. 28, Issue 6, 2019), hereinafter Wang.
-Regarding claim 5, Munoz Delgado in view of Li, and further in view of Qin, teaches the method of claim 2.
Munoz Delgado in view of Li, and further in view of Qin does teach wherein the at least one binary mask includes a binary shape mask (Qin: Figure 7). Munoz Delgado in view of Li, and further in view of Qin does not teach wherein the at least one binary mask includes a binary edge mask.
However, Wang is an analogous art pertinent to the problem to be solved in this application and teaches a method for salient object detection by jointly learning to segment salient object masks and detect salient object boundaries. Wang further teaches wherein the at least one binary mask includes a binary edge mask (Wang: Figs. 1-2).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Munoz Delgado in view of Li, and further in view of Qin, with the teaching of Wang by using both a binary edge mask and a binary shape mask in order to achieve more accurate salient object detection.
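The relationship between the two masks can be sketched as follows: a binary edge mask derived as the one-pixel boundary of a binary shape mask. The simple 4-neighborhood derivation below is only an illustrative stand-in; Wang detects boundaries jointly with masks rather than computing them morphologically:

```python
import numpy as np

def edge_mask_from_shape_mask(shape_mask):
    """Binary edge mask: shape-mask pixels having at least one background
    4-neighbor, i.e. the one-pixel boundary of the object."""
    m = shape_mask.astype(bool)
    p = np.pad(m, 1, constant_values=False)   # pad so image borders count as background
    interior = m & p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return (m & ~interior).astype(np.uint8)

shape_mask = np.zeros((10, 10), dtype=np.uint8)
shape_mask[2:8, 2:8] = 1                      # 6x6 filled shape
edge = edge_mask_from_shape_mask(shape_mask)
print(edge.sum())   # boundary of a 6x6 square: 36 - 16 = 20 pixels
```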
Claims 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Munoz Delgado (US 20210089895 A1) in view of Li et al. (arXiv:2010.05981, 2021), hereinafter Li, and further in view of Sauer et al. (arXiv:2101.06046v1, 2021), hereinafter Sauer.
-Regarding claim 7, Munoz Delgado in view of Li teaches the method of claim 2.
Munoz Delgado in view of Li does teach at least one first image component which includes an object shape (foreground) (Li: Figure 2).
Munoz Delgado in view of Li does not teach a component including a background.
However, Sauer is an analogous art pertinent to the problem to be solved in this application and teaches a method for generating counterfactual images (Sauer: Abstract; Figures 1-5). Sauer further teaches one image component including a background (Sauer: Page 1, last paragraph, “decompose this process into separate IMs (independent mechanisms) … one generates the object’s shape, the second generates the object’s texture, and the third generates the background. With access to these IMs, we can produce counterfactual images, i.e., images of unseen combinations of FoVs”; Figures 1-2; equation (2)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Munoz Delgado in view of Li with the teaching of Sauer by using a background component from an image in order to generate counterfactual images with improved performance for machine learning training.
-Regarding claim 9, Munoz Delgado in view of Li teaches the method of claim 8.
Munoz Delgado in view of Li does teach that at least one first image component, which includes an object shape, and another image component, which includes a texture, are merged (Li: Figure 2).
Munoz Delgado in view of Li does not teach merging another image component, which includes a background and/or is associated with the background.
However, Sauer is an analogous art pertinent to the problem to be solved in this application and teaches a method for generating counterfactual images (Sauer: Abstract; Figures 1-5). Sauer further teaches merging another image component, which includes a background and/or is associated with the background (Sauer: Page 1, last paragraph, “consider three IMs: one generates the object’s shape, the second generates the object’s texture, and the third generates the background. With access to these IMs, we can produce counterfactual images, i.e., images of unseen combinations of FoVs”; Figure 2; equation (2)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Munoz Delgado in view of Li with the teaching of Sauer by merging shape, texture, and background components from different images in order to generate counterfactual images with improved performance for machine learning training.
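Sauer's composition of three independent mechanisms — object shape, object texture, and background — into a counterfactual image can be rendered, schematically, as a mask-gated blend of the three components. The arrays below are stand-ins for the mechanisms' outputs, and the blend rule is an assumed simplification, not a reproduction of Sauer's equation (2):

```python
import numpy as np

def compose_counterfactual(shape_mask, texture, background):
    """Merge three image components originating from different sources:
    the object shape (a mask) gates the texture component; the background
    component fills the remaining pixels."""
    m = shape_mask[..., None].astype(float)
    return m * texture + (1.0 - m) * background

rng = np.random.default_rng(3)
shape_mask = np.zeros((32, 32))
shape_mask[8:24, 8:24] = 1              # shape component (from one source)
texture = rng.random((32, 32, 3))       # texture component (from another source)
background = rng.random((32, 32, 3))    # background component (from a third source)
cf = compose_counterfactual(shape_mask, texture, background)
print(cf.shape)   # (32, 32, 3): an unseen combination of the three components
```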
Allowable Subject Matter
Claim 10 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant's arguments filed on 10/23/2025 have been fully considered but they are not persuasive. Applicant argues “the labeling of the claimed invention is performed on counterfactual images, and the Patent Office simply assumes, without establishing, that the blended image produced by style transfer is a counterfactual image” and “the labeling of Li is performed only on a single image, namely, the blended image that is produced by the style transfer performed on input images, but the claim recites labeling of two distinct images” (Remarks: page 11, 2nd paragraph).
Regarding claim 1, and in response to applicant’s argument that the labeling of the claimed invention is performed on counterfactual images and that the Patent Office simply assumes, without establishing, that the blended image produced by style transfer is a counterfactual image: there is no standard definition of a counterfactual image, and neither the claim nor the specification provides one. The plain meaning of a counterfactual image is an image that preserves anatomical shape and foreign objects from an input image. In this case, Li’s Figure 2(a) shows a generated image having the shape of a chimpanzee but the texture of a lemon, and the image is labeled as chimpanzee (Li: page 3, 2nd paragraph; Figure 2). Thus, the labeled images based on Li’s three models are counterfactual images according to the plain meaning of the term.
In response to applicant’s argument that the labeling of Li is performed only on a single image, namely, the blended image that is produced by the style transfer performed on the input images, whereas the claim recites labeling of two distinct images, Li shows label assignments based on three different models that generate three distinct images (Li: Figure 2; page 3, 2nd paragraph, “Label assignment” section).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO LIU whose telephone number is (571)272-4539. The examiner can normally be reached Monday-Thursday and Alternate Fridays 8:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAO LIU/Primary Examiner, Art Unit 2664