Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see page 8, lines 1-8, filed 03/11/2026, with respect to the terms “reflecting a night feature” in claim 1 and “predetermined reference value” in claims 8 and 9 have been fully considered and are persuasive. The rejections of claims 1-20 under 35 U.S.C. 112(a) and 112(b) premised on those terms have been withdrawn.
Applicant's arguments filed on 03/11/2026 have been fully considered but they are not persuasive. Although applicant asserts that “strength” may correspond to brightness or luminance, the specification does not provide a clear definition or consistent usage of the term “strength” that would inform a person of ordinary skill in the art of its scope. The term “strength” is used in a broad and ambiguous manner and could refer to various image attributes (e.g., brightness, contrast, intensity, or other visual characteristics), and the specification does not clearly limit the term to any particular parameter. Therefore, the rejections of claims 1, 4-11, and 14-20 under 35 U.S.C. 112(a) and 112(b) are maintained.
Additionally, applicant argues that the cited references fail to disclose the following limitations: discriminating whether the night composite image is real or fake, and correcting a parameter so that the generated image is discriminated as real. However, Lee discloses a generative adversarial network (GAN) framework including both a generator and a discriminator trained competitively, stating that “the structure consists of a network for generating the image and a network for determining the authenticity of the image” (Introduction, pg. 2). Further, Lee discloses adversarial training, stating that “discriminator D and generator G are then trained competitively,” (CycleGAN-Based Day-to-Nighttime Image Translation, pg. 5) and defines an adversarial loss function (CycleGAN-Based Day-to-Nighttime Image Translation, Equation 1, pg. 5). Such adversarial training includes discriminating whether generated data is real or fake via the discriminator, and updating the generator parameters based on the discriminator output to improve realism. Thus, in GAN training, parameters are iteratively updated via backpropagation to minimize adversarial loss and increase the likelihood that generated outputs are classified as real. Therefore, the rejections for claims 1, 4-11, and 14-20 under 35 USC § 103 are maintained.
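For illustration of the adversarial training principle discussed above, the adversarial loss in the form commonly used in CycleGAN-style frameworks can be stated as follows; this is the standard formulation rather than a verbatim reproduction of Lee’s Equation 1:

\[
\mathcal{L}_{GAN}(G, D_Y, X, Y) \;=\; \mathbb{E}_{y \sim p_{data}(y)}\big[\log D_Y(y)\big] \;+\; \mathbb{E}_{x \sim p_{data}(x)}\big[\log\big(1 - D_Y(G(x))\big)\big],
\]

where the generator G (the day-to-night mapping) is trained to minimize this objective while the discriminator D_Y is trained to maximize it; minimizing the second term is what corrects the generator parameters so that generated outputs are increasingly classified as real.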
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 4-11, and 14-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventors, at the time the application was filed, had possession of the claimed invention. Claim 1 recites limitations involving “strength” of a daytime road image and “degrading” that strength; however, the specification does not describe what the term “strength” represents or how its value is determined. The specification does not convey whether strength refers to brightness, intensity, contrast, exposure, or another measurable attribute of the image. Likewise, “degrading strength” is not supported by a clear algorithm, parameter definition, or specific operation tied to image processing. Accordingly, claim 11 is rejected for containing identical subject matter to claim 1. Furthermore, claims 4-10 and 14-20 depend from claims 1 and 11, respectively, and are rejected for the same reasons set forth for claims 1 and 11.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 4-11, and 14-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “strength” in claim 1 is a relative term which renders the claim indefinite. The term “strength” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. The claim and specification do not identify what property constitutes the image’s “strength,” how it is measured, or what degree of change represents “degrading” that strength. Accordingly, claim 11 is rejected for containing identical subject matter to claim 1. Furthermore, claims 4-10 and 14-20 depend from claims 1 and 11, respectively, and are rejected for the same reasons set forth for claims 1 and 11.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5-11, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (“Nighttime Data Augmentation Using GAN for Improving Blind-Spot Detection”) in view of Yi et al. (“DualGAN: Unsupervised Dual Learning for Image-to-Image Translation”).
Regarding Claim 1, Lee teaches an apparatus for generating training data (Fig. 2 (shown below)), the apparatus comprising:
[Lee, Fig. 2 reproduced here (grayscale image)]
Abstract: “Therefore, we propose a framework that converts daytime images into synthetic nighttime images using a generative adversarial network and that augments the synthetic images for the training process of the vehicle detector.”
a communication device configured to receive a daytime road image;
B. CycleGAN-Based Day-to-Nighttime Image Translation, pg. 5: “The daytime side-rectilinear images were fed to the trained generator to generate synthetic nighttime images augmented by the vehicle detector, as shown in Phase 2 in Fig. 2.”
a memory storing a training data generation model for generating a night composite image from the daytime road image;
A. Overview, pg. 4: “Synthetic nighttime side-rectilinear images were generated by using daytime side-rectilinear images with the trained CycleGAN generator.”
A. Qualitative Evaluation of Day-to-Night Image Translation, pg. 7: “Training was performed on an NVIDIA GTX 1080Ti with 11 GB memory.”
and a processor operatively connected to the communication device and the memory and configured to degrade strength of the daytime road image based on that the daytime road image is input as an input value to the training data generation model and to reflect a night feature to generate the night composite image (Fig. 7 (shown below));
[Lee, Fig. 7 reproduced here (grayscale image)]
B. CycleGAN-Based Day-to-Nighttime Image Translation, pg. 5: “Even though the images had been acquired from cameras installed at different locations on the vehicle, the style changes were similar to that of the daytime and nighttime images consisting of roads, vehicles, and background.”
Explanation: The GAN night-translation process darkens daytime images and adds nighttime characteristics (headlight reflections, glare, low illumination), as shown in Figure 7 above.
a generator configured to degrade the strength of the daytime road image and reflect the night feature to generate the night composite image (see Lee: Figs. 2 and 7 (shown above));
and a discriminator configured to learn a difference between the night composite image and a night reference image to discriminate the night composite image (see Lee: Fig. 2 (shown above));
wherein the processor is configured to transform the daytime road image received to generate the training data for machine learning into the night composite image to generate the training data,
A. Overview, pg. 4: “Synthetic nighttime side-rectilinear images were generated by using daytime side-rectilinear images with the trained CycleGAN generator.”
discriminate whether the night composite image is real data or fake data by the discriminator (CycleGAN-Based Day-to-Nighttime Image Translation, Equation 1, pg. 5),
Explanation: The GAN objective of Equation 1 directly performs the real-versus-fake classification through the discriminator term.
and wherein the generator is configured to: correct a parameter so that the night composite image is discriminated as the real data by the discriminator, to generate the night composite image, when the night composite image is discriminated as the fake data (CycleGAN-Based Day-to-Nighttime Image Translation, Equation 1, pg. 5).
CycleGAN-Based Day-to-Nighttime Image Translation, pg. 5: “Discriminator D and generator G are then trained competitively. Our objective function consists of two losses: adversarial and cycle consistency. The adversarial loss is expressed as...”
Explanation: Adversarial training amounts to parameter correction performed so that the generator’s output is discriminated as real by (i.e., fools) the discriminator.
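For illustration only, a minimal sketch of this adversarial parameter correction is shown below; the networks, data, and hyperparameters are hypothetical placeholders and do not reproduce Lee’s implementation:

```python
# Illustrative sketch only (not Lee's code): the discriminator classifies the generated
# night composite as real or fake, and the generator's parameters are then corrected via
# backpropagation so that later composites are discriminated as real.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())   # stand-in generator
disc = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid())  # stand-in discriminator
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCELoss()

day = torch.rand(1, 3, 64, 64)        # placeholder daytime road image
night_ref = torch.rand(1, 3, 64, 64)  # placeholder real nighttime reference image

# Discriminator step: learn to score the reference as real and the composite as fake.
night_fake = gen(day)
d_loss = bce(disc(night_ref), torch.ones(1, 1)) + bce(disc(night_fake.detach()), torch.zeros(1, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: correct parameters so the night composite is discriminated as real.
g_loss = bce(disc(gen(day)), torch.ones(1, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```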
Lee fails to teach a first generator configured to degrade the strength of the daytime road image and a second generator configured to reflect the night feature in the daytime road image.
However, Yi teaches a two-generator architecture, stating that “the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task” (Yi: Abstract). In the context of the day-to-night translation task demonstrated (Yi: Figs. 2 and 11), domain U corresponds to daytime images and domain V corresponds to nighttime images, and generator GA: U → V performs the translation from day to night. Because GA converts high-illumination daytime images into low-illumination nighttime images (see Yi Figs. 2 and 11), where the daytime brightness is reduced and nighttime tone is applied, GA functions as the first generator configured to degrade the strength of the daytime road image. Meanwhile, the second generator GB: V → U processes the output of GA and enforces nighttime feature consistency through Yi’s adversarial and reconstruction losses, stating that “image u ∈ U is translated to domain V using GA…GA(u, z) is then translated back to domain U using GB, which outputs GB(GA(u, z), z0) as the reconstructed version of u” (Yi: 3. Method, pg. 3). Yi explains that “the translated samples obey the domain distribution” (Yi: 3.1. Objective, pg. 3), and that the dual-generator cycle “preserves content structures in the inputs and capture features (e.g., texture, color, and/or style) of the target domain” (Yi: 5. Qualitative evaluation, pg. 5), meaning GB ensures that GA’s output contains the proper night features by applying the inverse domain mapping and cycle consistency loss.
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Lee’s apparatus to incorporate Yi’s two-generator architecture. Yi explains that using paired generators “enables low-level information to be shared between input and output, which is beneficial since many image translation problems implicitly assume alignment between image structures in the input and output (e.g., object shapes, textures, clutter, etc.)” (Yi: 3.2. Network configuration, pg. 3), and “strengthens feedback signals that encodes the targeted distribution” (Yi: 5. Qualitative evaluation, pg. 5). A person of ordinary skill in the art would have been motivated to incorporate this two-generator architecture because Yi demonstrates that using two complementary generators improves realism, strengthens domain translation, and preserves content, especially for day to night translation.
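For illustration only, the two-generator arrangement mapped above may be sketched as follows; the network definitions are hypothetical stand-ins, and the sketch omits the adversarial terms for brevity:

```python
# Illustrative sketch only (not Yi's released code): G_A maps the daytime domain U to the
# nighttime domain V (the "first generator" role mapped above), G_B maps V back to U (the
# "second generator" role), and a reconstruction loss ties the pair together.
import torch
import torch.nn as nn

g_a = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())   # G_A: day -> night
g_b = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())   # G_B: night -> day
l1 = nn.L1Loss()

day = torch.rand(1, 3, 64, 64)     # u in domain U (daytime road image)
night_fake = g_a(day)              # G_A(u): synthetic nighttime image
day_recon = g_b(night_fake)        # G_B(G_A(u)): reconstruction of the daytime input
recon_loss = l1(day_recon, day)    # cycle/reconstruction term enforcing domain consistency
```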
Regarding Claim 5, Lee in view of Yi teaches the apparatus of claim 1, and Yi further teaches that the processor is further configured to input the daytime road image to the first generator to degrade the strength and to input the daytime road image, the strength of which is degraded, to the second generator to generate the night composite image.
Yi performs sequential routing of an image through two generators (see Yi: 3. Method (shown above)), which is a first generator-then-second generator pipeline.
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the apparatus of claim 1 to incorporate Yi’s sequential pipeline. Yi explains that this dual-stage generator improves translation quality, stating that “we attribute the improvements to the reconstruction loss, which forces the inputs to be reconstructable from outputs through the dual generator and strengthens feedback signals that encodes the targeted distribution” (Yi: 5. Qualitative evaluation, pg. 5). A person of ordinary skill in the art would have been motivated to incorporate this first generator-then-second generator pipeline into the apparatus of claim 1 to improve domain translation, thereby enhancing night composite realism by incorporating dual-stage refinement.
Regarding Claim 6, Lee in view of Yi teaches the apparatus of claim 1, and Yi further teaches that the processor is further configured to input the daytime road image to the second generator to reflect the night feature and to input the daytime road image in which the night feature is reflected to the first generator to generate the night composite image.
Yi teaches that the two generators operate as bidirectional translators whose order reverses depending on the desired mapping direction, stating that “the dual GAN learns to invert the task” (Yi: Abstract), and that the “DualGAN simultaneously learns two reliable image translators from one domain to the other” (Yi: Introduction, pg. 2). Yi demonstrates the reverse generator order where “v ∈ V is translated to U as GB(v, z0) and then reconstructed as GA(GB(v, z0), z)” (Yi: 3. Method, pg. 3), showcasing the architecture’s ability to apply the second generator first and the first generator second.
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the apparatus of claim 1 to incorporate this second generator-then-first generator pipeline. Yi explains that training both directions is necessary to maintain consistency and produce realistic target outputs, stating that “DualGAN trains both primal and dual GANs at the same time, allowing a reconstruction error term to be used to generate informative feedback signals” (Yi: 2. Related work, pg. 2). A person of ordinary skill in the art would have been motivated to adopt the reversed generator sequence because Yi teaches that applying the generators in the opposite order improves training stability, improves domain alignment, and provides stronger supervisory signals to better match the target domain, thereby ensuring consistency and improving night-domain fidelity.
Regarding Claim 7, Lee in view of Yi teaches the apparatus of claim 1, and Yi further teaches that the processor is further configured to input the daytime road image to the first generator and the second generator at a same time to generate the night composite image.
Yi teaches simultaneous operation of both generators, stating that “DualGAN trains both primal and dual GANs at the same time” (Yi: 2. Related work, pg. 2). Additionally, Algorithm 1 shows that the system updates both generators in the same iteration: “update θA, θB to minimize….” (Yi: 3.3. Training procedure, pg. 4).
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the apparatus of claim 1 to incorporate this simultaneous multi-generator processing. Yi explains the benefit of this simultaneous training, noting that it “enables the discriminators to provide more reliable gradient information” (Yi: 3.3. Training procedure, pg. 4). A person of ordinary skill in the art would have been motivated to adopt this simultaneous generator operation to improve training stability and night image realism.
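For illustration only, updating both generator parameter sets in the same iteration, as discussed for claim 7, may be sketched as follows; the networks and the objective shown are hypothetical stand-ins and do not reproduce Yi’s Algorithm 1:

```python
# Illustrative sketch only: the parameters of both generators (theta_A and theta_B) are
# held by a single optimizer and corrected in the same training iteration, mirroring at a
# high level the joint "update θA, θB" step described for Yi's Algorithm 1.
import torch
import torch.nn as nn

g_a = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())   # day -> night generator
g_b = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())   # night -> day generator
opt = torch.optim.Adam(list(g_a.parameters()) + list(g_b.parameters()), lr=2e-4)

day = torch.rand(1, 3, 64, 64)                    # placeholder daytime road image
loss = nn.L1Loss()(g_b(g_a(day)), day)            # joint objective (reconstruction term only)
opt.zero_grad(); loss.backward(); opt.step()      # both generators updated simultaneously
```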
Regarding Claim 8, Lee in view of Yi teaches the apparatus of claim 1, but fails to teach that the processor is further configured to transform the daytime road image so that the generator reduces an error between the night composite image and the night reference image, to generate the night composite image, upon concluding that the error is greater than a predetermined reference value.
However, Yi teaches both adversarial and reconstruction errors. Specifically, Yi states that the discriminator “is trained with v as positive samples and GA (u, z) as negative examples, whereas DB takes u as positive and GB (v, z0) as negative” (Yi: 3. Method, pg. 3), making real night images act as reference images. Yi further teaches minimizing reconstruction error, stating that “Generators GA and GB are optimized to emulate “fake” outputs to blind the corresponding discriminators DA and DB, as well as to minimize the two reconstruction losses” (Yi: 3. Method, pg. 3).
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the apparatus of claim 1 to incorporate error reduction between generated night images and real night images. Yi explains that this loss “strengthens feedback signals that encodes the targeted distribution” (Yi: 5. Qualitative evaluation, pg. 5) and “forces the translated samples to obey the domain distribution” (Yi: 3.1. Objective, pg. 3). A person of ordinary skill in the art would have been motivated to adopt this error reduction to improve nighttime realism and domain distribution.
Regarding Claim 9, Lee in view of Yi teaches the apparatus of claim 1, but fails to teach that the processor is further configured to determine the night composite image as the training data, upon concluding that an error between the night composite image and the night reference image is less than or equal to a predetermined reference value.
However, Yi teaches iterative optimization until convergence: Algorithm 1 repeats training until convergence is reached (see Step 11, pg. 4). Yi also states that reconstruction and adversarial errors must be reduced “to minimize the two reconstruction losses” (Yi: 3. Method, pg. 3). Convergence in GAN training corresponds to the errors dropping below an acceptable threshold.
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the apparatus of claim 1 to incorporate this threshold-based acceptance. Yi motivates this behavior by stating that minimizing reconstruction errors produces outputs that “better preserve content structures in the inputs and capture features (e.g., texture, color, and/or style) of the target domain” (Yi: 5. Qualitative evaluation, pg. 5). A person of ordinary skill in the art would have been motivated to adopt threshold-based acceptance in Lee to ensure high-quality night composite images.
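For illustration only, the threshold-based behavior discussed for claims 8 and 9 may be sketched as follows; the error metric, the reference value, and the networks are hypothetical placeholders rather than teachings of Lee or Yi:

```python
# Illustrative sketch only: training continues while the error between the night composite
# and the night reference image exceeds a predetermined reference value (claim 8), and the
# composite is accepted as training data once the error is at or below that value (claim 9).
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
l1 = nn.L1Loss()
REFERENCE_VALUE = 0.05                        # hypothetical predetermined reference value

day = torch.rand(1, 3, 64, 64)                # placeholder daytime road image
night_ref = torch.rand(1, 3, 64, 64)          # placeholder night reference image

training_data = []
for _ in range(100):                          # bounded loop standing in for "until convergence"
    composite = gen(day)
    error = l1(composite, night_ref)
    if error.item() <= REFERENCE_VALUE:       # claim 9: error at or below the reference value
        training_data.append(composite.detach())
        break
    opt.zero_grad(); error.backward(); opt.step()   # claim 8: transform to reduce the error
```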
Regarding Claim 10, Lee in view of Yi teaches the apparatus of claim 9, and Yi further teaches that the processor is further configured to reflect label information included in the daytime road image in the night composite image to generate the training data.
Yi teaches preserving structural and semantic information, stating that DualGAN “enables low-level information to be shared between input and output, which is beneficial since many image translation problems implicitly assume alignment between image structures in the input and output (e.g., object shapes, textures, clutter, etc.)” (Yi: 3.2. Network configuration, pg. 3). Yi further states that “DualGAN faithfully preserves the structures in the label images” (Yi: Fig. 3).
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the apparatus of claim 9 to incorporate label information. A person of ordinary skill in the art would be motivated to preserve semantic and label information in the apparatus of claim 9 to maintain structural fidelity during translation, as Yi shows that such preservation improves realism and reduces artifacts.
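For illustration only, carrying the daytime annotations over to the night composite may be sketched as follows; the function and field names are hypothetical placeholders, not elements taught verbatim by Yi:

```python
# Illustrative sketch only: because the translation preserves object geometry, the labels
# attached to the daytime road image (e.g., vehicle bounding boxes) are reflected unchanged
# in the night composite to form a labeled training sample.
def build_training_sample(day_image, day_labels, translate_day_to_night):
    night_composite = translate_day_to_night(day_image)       # e.g., the trained generator
    return {"image": night_composite, "labels": day_labels}   # label information reflected
```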
Regarding Claim 11, Lee in view of Yi teaches all of the limitations of Claim 1 above because Claim 11 recites a method comprising steps that correspond in substance to the functions of the apparatus of Claim 1.
Regarding Claim 15, Lee in view of Yi teaches the method of claim 11, and additional limitations are met as in the consideration of Claim 5 above.
Regarding Claim 16, Lee in view of Yi teaches the method of claim 11, and additional limitations are met as in the consideration of Claim 6 above.
Regarding Claim 17, Lee in view of Yi teaches the method of claim 11, and additional limitations are met as in the consideration of Claim 7 above.
Regarding Claim 18, Lee in view of Yi teaches the method of claim 11, and additional limitations are met as in the consideration of Claim 8 above.
Regarding Claim 19, Lee in view of Yi teaches the method of claim 11, and additional limitations are met as in the consideration of Claim 9 above.
Regarding Claim 20, Lee in view of Yi teaches the method of claim 19, and additional limitations are met as in the consideration of Claim 10 above.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. in view of Yi et al., further in view of Tan and Isa (“Exposure Based Multi-Histogram Equalization Contrast Enhancement for Non-Uniform Illumination Images”).
Regarding Claim 4, Lee in view of Yi teaches the apparatus of claim 1, but fails to teach that the first generator is further configured to convert the daytime road image into a grayscale image and extract a histogram vector from the grayscale image, to extract a strength degradation parameter from the histogram vector, and to reflect the strength degradation parameter in the daytime road image to degrade the strength of the daytime road image.
However, Tan and Isa teach illumination-modification techniques that include converting an image to a luminance/grayscale representation, extracting histogram-based vectors, and computing parameters from those histogram vectors to adjust brightness. Tan and Isa explain that histogram-based luminance processing focuses on grayscale intensity, stating that “Histogram Equalization (HE) is one of the methods that is developed to satisfy human visual system which focuses on luminance rather than color information,” (Tan and Isa: Introduction, pg. 2) and uses “dynamic gray level allocation” (Tan and Isa: Fig. 4) to analyze illumination distribution. Tan and Isa further teach extracting histogram-based parameters, stating that “ERMHE uses exposure region-based histogram segmentation thresholds to segment the original histogram into sub-histograms” (Tan and Isa: Abstract). Tan and Isa further teach that the method derives an entropy-controlled parameter from those histogram vectors to determine how much gray-level allocation should shift, stating that “ERMHE introduces the use of an entropy-controlled parameter, α in factor computation to balance the gray level allocation between low frequency bins with large sub-histogram span” (Tan and Isa: ENTROPY-CONTROLLED DYNAMIC GRAY LEVEL RANGE ALLOCATION SCHEME, pg. 9). Finally, Tan and Isa teach reflecting the histogram-derived parameter back into the image to modify (including reduce) strength by remapping gray levels, stating that “With the range computed for each sub-histogram, ERMHE then performs Histogram Segmentation Threshold Redefinition and Equalization…the total gray level (L−1) is to be multiplied with an average factor…and in order to improve the contrast, the spike must be flattened across a larger interval of gray levels” (Tan and Isa: ENTROPY-CONTROLLED DYNAMIC GRAY LEVEL RANGE ALLOCATION SCHEME, pg. 9, and HISTOGRAM SEGMENTATION THRESHOLD REDEFINITION AND EQUALIZATION, pg. 9).
Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the apparatus of claim 1 to incorporate the histogram-based luminance-adjustment techniques of Tan and Isa because all three references address the same general problem of illumination transformation in digital images. Lee and Yi seek to convert daytime road scenes into nighttime scenes by degrading brightness and modifying illumination characteristics, while Tan and Isa provide a well-known, computationally efficient technique for extracting grayscale intensity distributions and histogram-derived parameters for controlled strength (brightness) modification in images with non-uniform illumination. A person of ordinary skill in the art would have recognized that integrating Tan and Isa’s histogram-based luminance parameter extraction into Lee and Yi’s generator would provide more precise control over brightness degradation, yielding improved realism and image consistency. Applying Tan and Isa’s histogram-derived degradation parameter to Lee and Yi’s daytime input image would have been a predictable use of prior-art elements according to their established functions, with a reasonable expectation of success.
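For illustration only, the claimed grayscale-conversion, histogram-vector, and strength-degradation steps may be sketched as follows; the mapping from histogram to degradation parameter is a simple placeholder and does not reproduce the entropy-controlled ERMHE scheme of Tan and Isa:

```python
# Illustrative sketch only: convert the daytime image to grayscale, extract a histogram
# vector, derive a strength-degradation parameter from that vector, and reflect the
# parameter in the daytime image to degrade its strength (brightness).
import numpy as np

def degrade_strength(day_rgb: np.ndarray) -> np.ndarray:
    gray = day_rgb @ np.array([0.299, 0.587, 0.114])           # grayscale (luminance) image
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))     # histogram vector
    hist = hist / hist.sum()
    mean_level = float(np.dot(hist, np.arange(256)))           # summary of gray-level distribution
    degradation = min(1.0, 60.0 / max(mean_level, 1.0))        # hypothetical dark target of ~60/255
    return np.clip(day_rgb * degradation, 0, 255).astype(np.uint8)

dark = degrade_strength(np.random.randint(0, 256, (64, 64, 3)).astype(np.float64))
```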
Regarding Claim 14, Lee in view of Yi teaches the method of claim 11, and additional limitations are met as in the consideration of claim 4 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Arruda et al. (“Cross-Domain Car Detection Using Unsupervised Image-to-Image Translation: From Day to Night”) teaches an unsupervised image-to-image translation framework in which a GAN receives a daytime road image and produces a corresponding night-domain image. The fake dataset, which comprises annotated images of only the target domain (night images), is then used to train the car detector model.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM ADU-JAMFI whose telephone number is (571)272-9298. The examiner can normally be reached M-T 8:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM ADU-JAMFI/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677