Election/Restrictions
The restriction requirement is withdrawn in light of the amendment filed after the interview.
DETAILED ACTION
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-4, 6-7, 19, and 21 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Varslot (US 20150104078, cited in an IDS).
Regarding claim 1, Varslot teaches a method for image processing by one or more processors comprising:
detecting, at one or more central processing units (CPUs) ([0018], microprocessors), an overlap pattern for a set of slice images ([0011], registering a first and a second images of at least partially overlapping spatial regions of the sample) of a porous media sample ([0081], a sample of porous material);
based on the overlap pattern, determining a set of overlap distances for the set of slice images of the porous media sample ([0016], acquiring a first and a second images of at least partially overlapping spatial regions of the sample);
registering, based on at least the set of overlap distances, a composite image comprising any of the set of slice images of the porous media sample ([0011], registering a first and a second images of at least partially overlapping spatial regions of the sample);
receiving, at one or more graphics processing units (GPUs) from the one or more CPUs, the composite image based at least in part on a set of overlap distances for a set of slice images of a porous media sample ([0211], a distance-metric which indicates the quality of the registration between the two interpolated-images);
determining pixel values for each pixel of the composite image ([0235], image of core sample, is pre-processed by processor 3605A, by way of filtering, masking etc., to obtain an image 1704 that is ready for registration); and
based on the pixel values, generating a blended image corresponding to the set of slice images of the porous media sample ([0235], stitching together subimages to obtain the single image 1710).
Regarding claim 2, Varslot teaches the method of claim 1, wherein the set of overlap distances comprise orientation information for the set of slice images of the porous media sample ([0227], The spatial-transforms … was chosen to be the 3D similarity transform (… 7 degrees of freedom comprising 3 translation, 3 rotation and an isotropic scaling parameter)).
Regarding claim 3, Varslot teaches the method of claim 1, further comprising:
obtaining the set of slice images of the porous media sample (Fig. 2A, 2B, 3); and
sending the composite image to the one or more GPUs ([0018], microprocessors).
Regarding claim 4, Varslot teaches the method of claim 1, wherein the composite image comprises at least a first slice image of the set of slice images and a second slice image of the set of slice images, wherein the first slice image overlaps with the second slice image (Fig. 7A, 7B, 7C).
Regarding claim 6, Varslot teaches the method of claim 1, further comprising:
determining one or more common points shared by any of the set of slice images of the porous media sample ([0011], registering a first and a second images of at least partially overlapping spatial regions);
determining a center point shared by each of the set of slice images of the porous media sample ([0119], Five correlation peaks closest to the center of the image and their relationship to the grid principal axes u and v, shown in FIG. 10, are determined); and
extracting coordinates for one or more lens regions from the set of slice images of the porous media sample based on the center point and the one or more common points ([0119], center of the image and their relationship to the grid principal axes u and v, shown in FIG. 10).
Regarding claim 7, Varslot teaches the method of claim 6, further comprising:
computing a similarity index ([0227], similarity transform); and
based on the similarity index, registering the one or more lens regions based on the coordinates ([0227], define how coordinates … are registered/overlayed); or updating the set of overlap distances.
Claims 19 and 21 recite the apparatus and the medium corresponding to the method of claim 1. Since Varslot teaches an apparatus and a medium ([0058]), those claims are also rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 5, 15-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Varslot in view of MARTIN (US 20200043134).
Regarding claim 5, Varslot teaches the method of claim 1, further comprising:
receiving the blended image based on the composite image from the one or more graphics processing units (GPUs) ([0235], stitching together subimages to obtain the single image 1710);
Varslot does not expressly teach
generating an overlap plot based on the blended image; and sending the overlap plot to the one or more GPUs.
However, MARTIN teaches
generating an overlap plot based on the blended image; and sending the overlap plot to the one or more GPUs (FIG. 6).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Varslot and MARTIN, by plotting the overlap of composite images in Varslot with the cross correlation plot taught by MARTIN, with motivation that “A global placement can be performed to position all scanned tile images relative to one another” ( MARTIN, Abstract ).
Regarding claim 15, Varslot teaches the method of claim 1, further comprising:
receiving a coordinate plot from the one or more CPUs based on the blended image ([0235], stitching together subimages to obtain the single image 1710);
Varslot does not expressly teach generating a normalized image based on at least the coordinate plot; and sending the normalized image to the one or more CPUs.
However, MARTIN teaches
generating a normalized image based on at least the coordinate plot (FIG. 6); and sending the normalized image to the one or more CPUs.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Varslot and MARTIN, by normalizing the composite images in Varslot as taught by MARTIN, with motivation that “A global placement can be performed to position all scanned tile images relative to one another” (MARTIN, Abstract).
Regarding claim 16, Varslot teaches the method of claim 1.
Varslot does not expressly teach wherein generating a blended image comprises:
determining a lower stack, an upper stack, and at least one domain size from a set of sliced images of a porous media sample;
decomposing each of the set of slice images based on the at least one domain size; and
generating a map of an overlap of the lower stack and the upper stack based on the decomposing.
However, MARTIN teaches
determining a lower stack (Y1 in FIG. 5A), an upper stack (Y4 in FIG. 5A), and at least one domain size from a set of sliced images ([0042], “tile” refers to a region of a whole slide);
decomposing each of the set of slice images based on the at least one domain size (FIG. 5A); and
generating a map of an overlap of the lower stack and the upper stack based on the decomposing (FIG. 5A).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Varslot and MARTIN, by arranging the composite images in Varslot with the tiling method taught by MARTIN, with motivation that “A global placement can be performed to position all scanned tile images relative to one another” ( MARTIN, Abstract ).
Regarding claim 17, Varslot in view of MARTIN teaches the method of claim 16, wherein generating a map further comprises:
determining a first buffer region for the lower stack corresponding to a second buffer region for the upper stack (MARTIN, [0109], With M rows and N cols of tiles, there are N(M−1) such pairs of top-bottom images. Thus, we have M(N−1)+M(N−1) linear equations considering left-right and bottom-top pairs);
determining a first registered region for the lower stack corresponding to a second registered region for the upper stack (MARTIN, [0109], number of unknowns is M*N tile locations (top left tile location can be used as reference));
determining a first isolated region for the lower stack corresponding to a second isolated region for the upper stack (MARTIN, [0109], there are redundant equations); and
discarding the first and second isolated regions (MARTIN, [0109], we can afford to discard equations with very low standard deviation if possible).
Regarding claim 18, Varslot in view of MARTIN teaches the method of claim 16, further comprising:
determining a first registered region for the lower stack corresponding to a second registered region for the upper stack (MARTIN, Fig. 5B); and
generating a stitched vertical image based on at least the first registered region and the second registered region (MARTIN, [0051], an image stitching and/or image blending module to seamlessly combine the various images using the placement data).
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Varslot in view of Teo (US 6128108).
Regarding claim 8, Varslot teaches the method of claim 1.
Varslot does not expressly teach wherein generating the blended image comprises:
determining a weighted average of the pixel values of the composite images;
stacking the composite image based on the weighted average of the pixel values; and
based on the stacking, blending the composite image.
However, Teo teaches
determining a weighted average of the pixel values of the composite images (Abstract, digital images in the overlapping pixel region by taking weighted averages of their pixel color values);
stacking the composite image based on the weighted average of the pixel values (Abstract, taking a weighted average of the pixel color values, in such a way that the weights used are a value above 50% of image A and a value below 50% of image B to the left of the leftmost curve, 50% of image A and 50% of image B along the middle curve, and a value below 50% of image A and a value above 50% of image B to the right of the rightmost curve); and
based on the stacking, blending the composite image (Abstract, including aligning the digital images so as to approximately register them in the overlapping pixel region).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Varslot and Teo, by combining the images in Varslot with the averaging method taught by Teo, with motivation of “composition of two digital images which overlap in an overlapping pixel region” (Teo, Abstract).
Claim(s) 10-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Varslot in view of SHI (WO 2018053340).
Regarding claim 10, Varslot teaches the method of claim 1.
Varslot does not expressly teach
obtaining a first data set that includes at least one high-resolution image and at least one low-resolution image generated from the high-resolution image;
training a generator network using the first data set to generate a second data set that includes at least one super resolution image and a plurality of weights based, at least in part, on one or more low-resolution images; and
training a discriminator network using the second data set and reference information, wherein the discriminator network updates the reference information by minimizing perceptual loss.
However, SHI teaches
obtaining a first data set that includes at least one high-resolution image (Fig. 1A) and at least one low-resolution image generated from the high-resolution image (Fig. 1B);
training a generator network using the first data set to generate a second data set that includes at least one super resolution image and a plurality of weights based, at least in part, on one or more low-resolution images (406 in Fig. 4); and
training a discriminator network using the second data set and reference information, wherein the discriminator network updates the reference information by minimizing perceptual loss (408 in Fig. 4).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Varslot and SHI, by processing the stitched image generated in Varslot with the super-resolution network taught by SHI, with motivation “to estimate a high resolution version of the low resolution visual data” ( SHI, [0011]).
Regarding claim 11, Varslot in view of SHI teaches the method of claim 10, wherein the at least one high-resolution image is the blended image (processing the stitched image generated in Varslot with the super-resolution network taught by SHI).
Regarding claim 12, Varslot in view of SHI teaches the method of claim 10, wherein the low-resolution image is based on down-sampling of the high-resolution image (SHI, FIG. 1B).
Regarding claim 13, Varslot in view of SHI teaches the method of claim 10, further comprising, at each epoch, training the generator network by determining losses through a pixel-wise loss function and updating weights accordingly through backwards propagation (SHI, [0024], a perceptual loss function that consists of an adversarial loss function and a content loss function is proposed).
Regarding claim 14, Varslot in view of SHI teaches the method of claim 1, further comprising:
generating a down-sampled image based on the blended image (SHI, Fig. 1B);
applying a trained neural network to the down-sampled image to produce a super-resolution image (SHI, 406 in Fig. 4; Fig. 5);
validating the super-resolution image using a trained discriminator network (SHI, 408 in Fig. 4; Fig. 6); and
outputting the super-resolution image (SHI, 770 in Fig. 7).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANGENG SUN whose telephone number is (571) 272-3712. The examiner can normally be reached Monday through Friday, 8:00 am to 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Randolph Vincent, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JIANGENG SUN
Examiner
Art Unit 2661
/Jiangeng Sun/Examiner, Art Unit 2671