DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that use the word “means,” and therefore are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “means for obtaining,” “means for generating,” “means for processing,” and “means for … fine-tuning” in claim 30.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-8, 11-21, and 24-30 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Prabhu et al. (“AUGCO: Augmentation Consistency-guided Self-training for Source-free Domain Adaptive Semantic Segmentation,” arXiv:2107.10140v2 [cs.CV] 6 Jan 2022), hereinafter referred to as Prabhu.
Regarding claims 1, 14, and 27, Prabhu teaches a processor-implemented method, apparatus, and non-transitory computer-readable medium for processing one or more images, the method, apparatus, and medium comprising:
at least one memory (Prabhu §7.1 & pg. 12 right column teaches that the method was implemented and run as code, which requires a computer with at least one processor and memory); and
at least one processor coupled to the at least one memory (Prabhu §7.1 & pg. 12 right column discussed above);
obtaining an unlabeled image (Prabhu pg. 1 left column: “our goal is to adapt a trained semantic segmentation model to a new target domain given only its trained parameters and unlabeled target data”);
generating at least one transformed image based on the unlabeled image (Prabhu pg. 2 left column: “we generate two views of each target image that vary in scale, spatial context, and color statistics via a simple crop, resize, and color jitter strategy”);
processing the unlabeled image using a pre-trained semantic segmentation model to generate a first segmentation output (Prabhu pg. 2 left column: “Augmented Consistency-guided Self-training (AUGCO), a simple source-free adaptation algorithm for semantic segmentation”; Prabhu Fig. 2; Prabhu pg. 3 right column: “we pass the original image, Xτ, through the current model, h, to produce an output probabilistic prediction”);
processing the at least one transformed image using the pre-trained semantic segmentation model to generate at least a second segmentation output (Prabhu pg. 2 & Fig. 2 discussed above; Prabhu pg. 3 right column: “This jittered, cropped, and resized image is then passed through the model to produce a probabilistic output”); and
based on the first segmentation output and at least the second segmentation output, fine-tuning one or more parameters of the pre-trained semantic segmentation model (Prabhu pg. 3 right column: “We thus obtain aligned predictive views … further refine predictions in both views via flip ensembling”; Prabhu pg. 4 right column: “we update model parameters via self-training … optimizing all model parameters causes the model to rapidly diverge from its original task. To address this, we update only the model’s batch-norm parameters”).
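For illustration, the following is a minimal sketch, in PyTorch-style Python, of the kind of augmentation-consistency self-training loop the passages cited above describe. The function names, confidence threshold, and hyperparameters are assumptions made for this sketch and are not taken from Prabhu's released implementation.

    import torch
    import torch.nn.functional as F

    def adapt_step(model, bn_optimizer, image, make_view, threshold=0.9):
        # First segmentation output: the unlabeled image through the pre-trained model.
        logits_orig = model(image)                      # shape (N, C, H, W)
        probs_orig = torch.softmax(logits_orig, dim=1)

        # Second segmentation output: a transformed view of the same image through the
        # same model; 'align' maps the view's prediction back to the original geometry.
        view, align = make_view(image)
        probs_view = align(torch.softmax(model(view), dim=1))

        # Reliable pixels: class predictions that agree across the two views and
        # exceed a confidence threshold.
        pseudo = probs_orig.argmax(dim=1)               # pseudo-labels, shape (N, H, W)
        agree = pseudo == probs_view.argmax(dim=1)
        confident = probs_orig.max(dim=1).values > threshold
        reliable = (agree & confident).float()

        # Self-training: pixel-wise cross-entropy against the pseudo-labels, masked
        # to the reliable pixels; the optimizer holds only batch-norm parameters.
        loss = F.cross_entropy(logits_orig, pseudo, reduction="none")
        loss = (loss * reliable).sum() / reliable.sum().clamp(min=1.0)

        bn_optimizer.zero_grad()
        loss.backward()
        bn_optimizer.step()
        return loss.item()

Consistent with the “update only the model’s batch-norm parameters” passage, the optimizer in this sketch would be built over batch-norm parameters only, for example:

    bn_params = [p for m in model.modules()
                 if isinstance(m, torch.nn.BatchNorm2d) for p in m.parameters()]
    bn_optimizer = torch.optim.SGD(bn_params, lr=1e-4)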
Regarding claims 2, 15, and 28, Prabhu teaches the processor-implemented method, apparatus, and non-transitory computer-readable medium of claims 1, 14, and 27, wherein the at least one transformed image is generated by applying one or more photometric transformations to the unlabeled image (Prabhu pg. 2 left column discussed above teaches color jitter).
Regarding claims 3 and 16, Prabhu teaches the processor-implemented method and apparatus of claims 2 and 15, wherein the one or more photometric transformations comprises at least one of a grayscale adjustment, a color adjustment, a color jitter, or a blur effect (Prabhu pg. 2 left column discussed above teaches color jitter; also see Prabhu pg. 3 right column: “we first modify image appearance by applying a pixel-level color jitter”).
Regarding claims 4 and 17, Prabhu teaches the processor-implemented method and apparatus of claims 1 and 14, wherein the at least one transformed image is generated by applying one or more geometric transformations to the unlabeled image (Prabhu pg. 2 left column discussed above teaches cropping and resizing).
Regarding claims 5 and 18, Prabhu teaches the processor-implemented method and apparatus of claims 4 and 17, wherein the one or more geometric transformations comprise at least one of a rotation, a crop, or a shuffling of pixels (Prabhu pg. 2 left column discussed above teaches cropping and resizing; also see Prabhu pg. 3 right column: “cropped using the random bounding box coordinates and resized to the original output image size … use the same bounding box coordinates to extract a cropped image region and resize that region to the original image size to produce a rescaled image”).
Claim 29 is rejected using the same rationale as applied to claims 3-5 discussed above.
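For illustration, the following sketch shows photometric (color jitter) and geometric (crop and resize) transformations of the kind cited for claims 2-5, 15-18, and 28-29, using standard torchvision operations; the helper name, jitter ranges, and crop scale are assumptions made for this sketch rather than values taken from Prabhu.

    import random
    import torchvision.transforms.functional as TF

    def make_transformed_view(image, min_scale=0.5):
        # image: float tensor of shape (C, H, W).
        # Photometric transformation: pixel-level color jitter (brightness, contrast, saturation).
        view = TF.adjust_brightness(image, 1.0 + random.uniform(-0.4, 0.4))
        view = TF.adjust_contrast(view, 1.0 + random.uniform(-0.4, 0.4))
        view = TF.adjust_saturation(view, 1.0 + random.uniform(-0.4, 0.4))

        # Geometric transformation: random crop, then resize back to the original size.
        _, h, w = view.shape
        scale = random.uniform(min_scale, 1.0)
        ch, cw = int(h * scale), int(w * scale)
        top, left = random.randint(0, h - ch), random.randint(0, w - cw)
        view = TF.resized_crop(view, top, left, ch, cw, size=[h, w])

        # The same box (top, left, ch, cw) can later be used to crop and resize the
        # prediction on the original image so the two predictive views are aligned.
        return view, (top, left, ch, cw)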
Regarding claims 6 and 19, Prabhu teaches the processor-implemented method and apparatus of claims 1 and 14, wherein fine-tuning of the one or more parameters of the pre-trained semantic segmentation model enforces the pre-trained semantic segmentation model to be at least one of invariant to photometric transformations or equivariant to geometric transformations (Prabhu pg. 3 right column-pg. 4 left column: “Pixel-level predictive consistency … We note that while invariance across augmented views has been used extensively in prior work … we instead propose using such predictive consistency to detect reliable predictions on which to self-train”; Prabhu Fig. 2: “reliable pixel predictions for self-training are identified based on pixel-level consistency across aligned predictions and class-conditioned confidence thresholding”).
Regarding claims 7 and 20, Prabhu teaches the processor-implemented method and apparatus of claims 1 and 14, further comprising:
determining at least one loss between features of the pre-trained semantic segmentation model based on generation of the first segmentation output and features of the pre-trained semantic segmentation model based on generation of at least the second segmentation output (Prabhu pg. 4 right column: “we train using a standard self-training cross-entropy loss on the predicted pseudolabel … employ log-inverse frequency loss-weighting … then minimize a cross-entropy loss”; Prabhu Eq. (2));
wherein the one or more parameters of the pre-trained semantic segmentation model are fine-tuned based on the at least one loss (Prabhu pg. 4 right column discussed above teaches loss-weighting; Prabhu pg. 7 right column: “loss weighting further improves to 45.89”).
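For illustration, the following sketch shows a self-training cross-entropy loss with per-class log-inverse-frequency weights of the kind cited for claims 7 and 20; it is an assumed form written for this sketch, not a verbatim reproduction of Prabhu Eq. (2), and the normalization and clamping choices are assumptions.

    import torch
    import torch.nn.functional as F

    def weighted_self_training_loss(logits, pseudo_labels, num_classes, ignore_index=255):
        # Per-class weight from the normalized log inverse of the pseudo-label frequency.
        valid = pseudo_labels != ignore_index
        counts = torch.bincount(pseudo_labels[valid].flatten(),
                                minlength=num_classes).float()
        freq = counts / counts.sum().clamp(min=1.0)
        weights = torch.log(1.0 / freq.clamp(min=1e-8))
        weights = weights / weights.sum()

        # Standard pixel-wise cross-entropy between the model logits and the pseudo-labels,
        # weighted per class; parameters are fine-tuned by backpropagating this loss.
        return F.cross_entropy(logits, pseudo_labels,
                               weight=weights.to(logits.device),
                               ignore_index=ignore_index)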
Regarding claims 8 and 21, Prabhu teaches the processor-implemented method and apparatus of claims 7 and 20, wherein the features include probability values output by the pre-trained semantic segmentation model (Prabhu pg. 3 right column discussed above teaches producing an output probabilistic prediction).
Regarding claims 11 and 24, Prabhu teaches the processor-implemented method and apparatus of claims 1 and 14, further comprising:
generating a trained model based on fine-tuning of the one or more parameters of the pre-trained semantic segmentation model (Prabhu pg. 3 right column, pg. 4 right column, & pg. 7 right column discussed above).
Regarding claims 12 and 25, Prabhu teaches the processor-implemented method and apparatus of claims 1 and 14, wherein obtaining the unlabeled image comprises:
receiving a plurality of images (Prabhu Figs. 2-3).
Regarding claims 13 and 26, Prabhu teaches the processor-implemented method and apparatus of claims 1 and 14, wherein the unlabeled image does not include a label or a pseudo-label (Prabhu Abstract: “given only unlabeled target data”).
Regarding claim 30, Prabhu teaches an apparatus comprising means for performing the processes described in claims 1 and 14. Therefore, claim 30 is rejected using the same rationale as applied to claims 1 and 14 discussed above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 9, 10, 22, and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Prabhu et al. (“AUGCO: Augmentation Consistency-guided Self-training for Source-free Domain Adaptive Semantic Segmentation,” arXiv:2107.10140v2 [cs.CV] 6 Jan 2022), in view of Yang et al. (US 2023/0044969 A1), hereinafter referred to as Prabhu and Yang, respectively.
Regarding claims 9 and 22, Prabhu teaches the processor-implemented method and apparatus of claims 7 and 20, wherein the features include normalized log inverse output by the pre-trained semantic segmentation model (Prabhu pg. 4 right column: “We compute a per category loss weight λc based on its normalized log inverse frequency”).
However, Prabhu does not appear to explicitly teach that the features include logits output by the pre-trained semantic segmentation model.
Pertaining to the same field of endeavor, Yang teaches that the features include logits output by the pre-trained semantic segmentation model (Yang ¶0063: “The segmentation prediction output is sigmoid logits”).
Prabhu and Yang are considered to be analogous art because they are directed to semantic segmentation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the adaptive semantic segmentation model (as taught by Prabhu) to use logits (as taught by Yang) because the combination allows the features to be refined in a coarse-to-fine process and allows extraction/incorporation of temporal information from different resolutions (Yang ¶0064).
Regarding claims 10 and 23, Prabhu teaches the processor-implemented method and apparatus of claims 7 and 20, but does not appear to explicitly teach the at least one loss includes at least one of an L1 loss or an L2 loss.
Pertaining to the same field of endeavor, Yang teaches that the at least one loss includes at least one of an L1 loss or an L2 loss (Yang ¶0088: “Moreover, a variety of loss terms may be used to train the video matting model. L1 loss and Laplacian loss may be used on the matting mask”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the adaptive semantic segmentation model (as taught by Prabhu) to use L1 or L2 loss (as taught by Yang) because the combination provides more consistency and improves accuracy (Yang ¶0088).
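For illustration, the following sketch shows an L1 or L2 consistency term computed between the aligned probability outputs of the two views, of the kind addressed for claims 10 and 23; it is an assumed combination written for this sketch and is not asserted to be the implementation of either cited reference.

    import torch.nn.functional as F

    def consistency_loss(probs_orig, probs_view_aligned, kind="l1"):
        # L1 (mean absolute error) or L2 (mean squared error) between the two
        # spatially aligned probability maps.
        if kind == "l1":
            return F.l1_loss(probs_orig, probs_view_aligned)
        return F.mse_loss(probs_orig, probs_view_aligned)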
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SOO J SHIN whose telephone number is (571)272-9753. The examiner can normally be reached M-F, 10-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella can be reached at (571)272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Soo Shin/Primary Examiner, Art Unit 2667