DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-20 are pending.
Priority
This application is a continuation of International Application No. PCT/CN2022/117817, filed on September 8, 2022, which claims priority to Chinese Patent Application No. 202210208033.X, filed on March 4, 2022, and Chinese Patent Application No. 202111100885.9, filed on September 18, 2021.
Information Disclosure Statement
The information disclosure statements (IDS) filed 01/03/2025 and 01/14/2025 have been considered.
Claim Interpretation
Claim 10 recites “wherein the third loss function comprises an L1 loss function and/or an L2 loss function, and the third loss function further comprises at least one of a multi-scale structural similarity index measure (MS-SSIM) loss function, a perceptual loss function, and a generative adversarial loss function”. Note that according to the Federal Circuit’s decision in SuperGuide Corp. v. DirecTV Enterprises, Inc., 358 F.3d 870 (Fed. Cir. 2004), the phrase “at least one of … and …” requires at least one instance of each and every item listed. Claim 10 recites such a limitation. In SuperGuide, the Federal Circuit held that the plain meaning of “at least one of A, B, and C” is: at least one of A, at least one of B, and at least one of C. The Court held that if the applicant had intended “at least one of A, B, and C” to mean A, B, or C, the claim should have recited “or”. If Applicant intends for only one of these items to be required, Applicant can amend the claim language to instead recite “at least one of … or …”. Hence, claim 10 will be interpreted as requiring all three of the listed loss functions (the multi-scale structural similarity index measure (MS-SSIM) loss function, the perceptual loss function, and the generative adversarial loss function).
Claim 13 is a “process” claim that includes a claimed condition with two possible outcomes, and that thus forms two distinct methods within a single claim (“when the loss values of the different areas are determined according to one loss function, the at least two weights are different” and “when the loss values of the different areas are determined according to at least two loss functions, the at least two weights are different or the same”).
A contingent step, when present in a “process” claim only, creates two or more process pathways within the claim based on a condition, where one pathway/step may be traversed and the process terminates, and the other pathway/step is no longer required of the prior art, or vice versa. Quoting Ex parte Schulhauser (PTAB, precedential decision of April 28, 2016): “If the condition for performing a contingent step is not satisfied, the performance recited by the step need not be carried out in order for the claimed method to be performed” (decision page 10), and “the broadest reasonable interpretation of claim 1 includes an instance in which the step of ‘determining the current activity level of the subject’ and the remaining steps based thereon do not take place. Thus, under the broadest reasonable interpretation, the step of ‘comparing the respiration data with a threshold respiration criteria for indicating a strong likelihood of a cardiac event if the current activity level is below a threshold activity level’ recited in claim 8 is not necessarily performed” (decision page 11).
Therefore, in the case of claim 13, the prior art need only teach one of the two distinct claimed methods for obviousness.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5, 7, 11, and 16-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ma et al. (“Variable Rate ROI Image Compression Optimized for Visual Quality” Hereinafter “Ma”).
Regarding claim 1, Ma teaches a method for determining an image loss value, comprising:
compressing and decompressing a first image by using an image encoding and decoding network, to obtain a second image, wherein the second image is a reconstructed image of the first image (Page 1937, Fig. 1: The first image can be seen on the left entering the image compression network, and the second image can be seen being decompressed on the right which is output from the compression network);
determining a partition indication map of the first image (Page 1937, section 2.2.1: “Therefore, we adapt a convolution layer (the filter size is 51, and weights are all set to 1) to generate a 2D ROI mask RM2D to smooth the saliency map”. This section describes the ROI mask generated which acts functionally similar to a partition indication map in separating parts of the image from each other (the ROI from the background));
determining, based on the partition indication map and according to at least one loss function, loss values of different areas in the second image relative to corresponding areas in the first image (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. This section teaches the loss which differs for the ROI region and the background region); and
determining, based on the loss values of the different areas, a total loss value of the second image relative to the first image (Page 1939, section 2.5: “With a ROI loss that protects key information of contents and reduce substantial redundancy in backgrounds, we further introduce a conditional GAN in the rate-distortion trade-off to maintain high perceptual fidelity of reconstructed images at low bit-rate, as that in [13], where the information used in conditional GAN is ROI latents, as is defined in Eq.[3,4]”. Loss functions are used for optimizing the models. Training a compression model with a loss between images is how compression models are trained; by separating the losses for each of the image areas, the model can optimize certain areas of the image: “we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. So both losses would be used to train the models, and if both losses are used, they are part of a “total” loss that trains the model).
Regarding claim 2, Ma teaches the method according to claim 1, wherein the determining, based on the partition indication map and according to at least one loss function, loss values of different areas in the second image relative to the first image comprises:
determining, based on the partition indication map and according to a first loss function, a loss value of a first-type area in the second image relative to a first-type area in the first image, to obtain a first loss value, wherein the loss values of the different areas comprise the first loss value and a second loss value (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. The ROI mask separates the ROI and background regions. They use a loss for the ROI region (first) and a loss for the background region (second), which differ in their formulation, as can be seen in Equations 3 and 4. The loss for these regions is based on the first-type and second-type (ROI and background) areas in the second image relative to the first image, which can be seen with the lines in Fig. 1 going from the second image into the ROI and background loss, and lines from the first image into the ROI and background loss); and
determining, based on the partition indication map and according to a second loss function, a loss value of a second-type area in the second image relative to a second-type area in the first image, to obtain the second loss value (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. The ROI mask separates the ROI and background regions. They use a loss for the ROI region and a loss for the background region, which differ in their formulation, as can be seen in Equations 3 and 4. The loss for these regions is based on the first-type and second-type (ROI and background) areas in the second image relative to the first-type and second-type (ROI and background) areas in the first image, which can be seen with the lines in Fig. 1 going from the second image into the ROI and background loss, and lines from the first image into the ROI and background loss).
Regarding claim 5, Ma teaches the method according to claim 2, wherein the second loss function comprises at least one of a multi-scale structural similarity index measure (MS-SSIM) loss function, a perceptual loss function, or a generative adversarial loss function (Page 1937, section 2.2.2: “While, dBG includes MS SSIM and a perceptual loss LPIPS as dP , which proves to be closer to human visual evaluation standards. The default λp is 0.5”. The dBG is the background loss function which acts as the second loss function. The “or” means the list is in the disjunctive, so only one of the listed items need be taught to meet the claim).
Regarding claim 7, Ma teaches the method according to claim 1, wherein the determining, based on the partition indication map and according to at least one loss function, loss values of different areas in the second image relative to the first image comprises:
determining, based on the partition indication map and according to a first loss function, a loss value of a first-type area in the second image relative to a first-type area in the first image, to obtain a first loss value, wherein the loss values of the different areas comprise the first loss value and a third loss value (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. The ROI mask separates the ROI and background regions. They use a loss for the ROI region (first) and a loss for the background region (third), which differ in their formulation, as can be seen in Equations 3 and 4. The loss for these regions is based on the first-type and second-type (ROI and background) areas in the second image relative to the first image, which can be seen with the lines in Fig. 1 going from the second image into the ROI and background loss, and lines from the first image into the ROI and background loss); and
determining, according to a third loss function, a loss value of the second image relative to the first image, to obtain the third loss value (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. The ROI mask separates the ROI and background regions. They use a loss for the ROI region (first) and a loss for the background region (third), which differ in their formulation, as can be seen in Equations 3 and 4. The loss for these regions is based on the first-type and second-type (ROI and background) areas in the second image relative to the first-type and second-type (ROI and background) areas in the first image, which can be seen with the lines in Fig. 1 going from the second image into the ROI and background loss, and lines from the first image into the ROI and background loss).
Regarding claim 11, Ma teaches the method according to claim 2, wherein the first loss function comprises an L1 loss function and/or an L2 loss function (Page 1937, section 2.2.2: “dROI uses MSE as a measurement, and it only takes effect in the ROI”. MSE is an L2 loss function. The “or” means the list is in the disjunctive, so only one of the listed items need be taught to meet the claim).
Regarding claim 16, Ma teaches the method according to claim 2, wherein the partition indication map is an image segmentation mask map, the first-type area comprises an area in which a target object is located, and the second-type area comprises an area in which a non-target object is located (Page 1937, section 2.2.1: “Therefore, we adapt a convolution layer (the filter size is 51, and weights are all set to 1) to generate a 2D ROI mask RM2D to smooth the saliency map”. This section describes the ROI mask generated which acts functionally similar to a partition indication map in separating parts of the image from each other (the ROI from the background). The ROI acts as the target and the background acts as the non-target).
Regarding claim 17, Ma teaches the method according to claim 16, wherein the first-type area comprises a face area of the target object (Page 1937, Fig. 1: A face area can be seen as the target object for the ROI mask).
Regarding claim 18, Ma teaches an apparatus for determining an image loss value, wherein the apparatus comprises:
an encoding and decoding module, configured to compress and decompress a first image by using an image encoding and decoding network, to obtain a second image, wherein the second image is a reconstructed image of the first image (Page 1937, Fig. 1: The encoding and decoding module can be seen. The encoding model compresses the first image and the decoding model decompresses the compressed representation the generate a second image);
a first determining module, configured to determine a partition indication map of the first image (Page 1937, Fig. 1: The ROI network acts as the determining module which determines the partition indication map (ROI mask));
a second determining module, configured to determine, based on the partition indication map and according to at least one loss function, loss values of different areas in the second image relative to corresponding areas in the first image (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. A processor or model must be present to calculate the loss of the ROI and background. That processor or model acts as the second determining module); and
a third determining module, configured to determine, based on the loss values of the different areas, a total loss value of the second image relative to the first image (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. The ROI mask separates the ROI and background regions. They use a loss for the ROI region and a loss for the background region, which differ in their formulation, as can be seen in Equations 3 and 4. The loss for these regions is based on the first-type and second-type (ROI and background) areas in the second image relative to the first-type and second-type (ROI and background) areas in the first image, which can be seen with the lines in Fig. 1 going from the second image into the ROI and background loss, and lines from the first image into the ROI and background loss. A processor or model must be present to calculate the total loss, and would act as the third determining module).
Regarding claim 19, Ma teaches the apparatus according to claim 18, wherein the loss values of the different areas comprise a first loss value and a second loss value; and
the second determining module comprises:
a first determining submodule, configured to determine, based on the partition indication map and according to a first loss function, a loss value of a first-type area in the second image relative to a first-type area in the first image, to obtain the first loss value (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. A processor or model must be present to calculate the loss of the ROI. That processor or model acts as the first determining submodule); and
a second determining submodule, configured to determine, based on the partition indication map and according to a second loss function, a loss value of a second-type area in the second image relative to a second-type area in the first image, to obtain the second loss value (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. A processor or model must be present to calculate the loss of the background. That processor or model acts as the second determining submodule).
Regarding claim 20, the content of claim 20 is similar to the content of claim 1, with the additional teaching of a non-transitory computer-readable storage medium. Ma also discloses this information (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. A processor or model must be present to calculate the loss of the background. The instructions for either the processor or model must be stored on a non-transitory computer-readable storage medium). Therefore, claim 20 is rejected for the same reasons of anticipation as claim 1, along with the additional teachings above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (“Variable Rate ROI Image Compression Optimized for Visual Quality” Hereinafter “Ma”) in view of LI et al. (US 20230033458 A1 Hereinafter “LI”).
Regarding claim 6, Ma teaches the method according to claim 2, further comprising:
after the determining, based on the loss values of the different areas, a total loss value of the second image relative to the first image, (Page 1939, section 2.5: “With a ROI loss that protects key information of contents and reduce substantial redundancy in backgrounds, we further introduce a conditional GAN in the rate-distortion trade-off to maintain high perceptual fidelity of reconstructed images at low bit-rate, as that in [13], where the information used in conditional GAN is ROI latents, as is defined in Eq.[3,4]”. Loss functions are used for optimizing the models. Training a compression model with a loss between images is how compression models are trained; by separating the losses for each of the image areas, the model can optimize certain areas of the image: “we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. So both losses would be used to train the models, and if both losses are used, they are part of a “total” loss that trains the model); and
updating, (Page 1939, section 3.1: “Models are trained in two stages. Firstly, it’s trained without GAN to initialize parameters stably, then the model with GAN are trained to improve subjective quality”).
Ma does not expressly disclose using an optimization map for updating the models.
However, LI teaches using an optimization map for training models ([0059]: “Different from the normal training procedure, the loss L.sub.train (f.sub.θ (T.sub.L, T.sub.H) is weighted by the weight map 204 and becomes L′.sub.train (f.sub.θ(T.sub.L, T.sub.H), as illustrated in equation (3) and in FIG. 4A where the loss data 402 is combined 404 with the weight map 204 to produce weighted loss data 406”. The weighted map tells of certain areas that require more attention when determining the loss for the image, by applying loss to the weighted map you get a map which defines the loss for different areas of the image leading to describing which areas need to be optimized).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ma’s loss function for training models to include LI’s loss map for training models because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, LI’s loss map for training models permits using a map for training the model to know what areas of the image require optimization, allowing targeted areas to carry more importance than other areas. This known benefit in LI is applicable to Ma’s loss function for training models as they both share characteristics and capabilities, namely, they are directed to determining the loss between two images for training models, where the loss for different areas of the image is different. Therefore, it would have been recognized that modifying Ma’s loss function for training models to include LI’s loss map for training models would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate LI’s loss map for training models in determining the loss between two images for training models, where the loss for different areas of the image is different and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.
Regarding claim 13, Ma teaches the method according to claim 1, wherein the determining, based on the loss values of the different areas, a total loss value of the second image relative to the first image comprises:
(Page 1939, section 2.5: “With a ROI loss that protects key information of contents and reduce substantial redundancy in backgrounds, we further introduce a conditional GAN in the rate-distortion trade-off to maintain high perceptual fidelity of reconstructed images at low bit-rate, as that in [13], where the information used in conditional GAN is ROI latents, as is defined in Eq.[3,4]”. Loss functions are used for optimizing the models. Training a compression model with a loss between images is how compression models are trained; by separating the losses for each of the image areas, the model can optimize certain areas of the image: “we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. So both losses would be used to train the models, and if both losses are used, they are part of a “total” loss that trains the model. Each of the losses would have a weight associated with it if used to calculate the total loss, even if the weights are one);
when the loss values of the different areas are determined according to at least two loss functions, the at least two weights are different or the same (The loss values for each area are different. They have inherent weights associated with them if used in a total loss calculation (even if the weights are one). The language “the at least two weights are different or the same” encompasses any relationship between two weights: if two weights are not different, they must be the same, and if they are not the same, they must be different, so any pair of weights satisfies this limitation. Therefore the inherent weights satisfy this limitation. Additionally, as discussed under Schulhauser in the Claim Interpretation section above, only one of the two contingent limitations need be met due to their divergent nature).
Ma does not expressly disclose using a weighted summation with a weighted loss function.
However, LI teaches using a weighted summation with a weighted loss function ([0059]: “Different from the normal training procedure, the loss L.sub.train (f.sub.θ (T.sub.L, T.sub.H) is weighted by the weight map 204 and becomes L′.sub.train (f.sub.θ(T.sub.L, T.sub.H), as illustrated in equation (3) and in FIG. 4A where the loss data 402 is combined 404 with the weight map 204 to produce weighted loss data 406”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ma’s loss function for training models to include LI’s weighted summation for training models because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, LI’s weighted summation for training models permits using a loss function that weights different areas of the image differently, allowing areas of different importance to be optimized differently. This known benefit in LI is applicable to Ma’s loss function for training models as they both share characteristics and capabilities, namely, they are directed to determining the loss between two images for training models, where the loss for different areas of the image is different. Therefore, it would have been recognized that modifying Ma’s loss function for training models to include LI’s weighted summation for training models would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate LI’s weighted summation for training models in determining the loss between two images for training models, where the loss for different areas of the image is different and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (“Variable Rate ROI Image Compression Optimized for Visual Quality” Hereinafter “Ma”) in view of ZHANG et al. (US 20240296624 A1 Hereinafter “ZHANG”).
Regarding claim 12, Ma teaches the method according to claim 2, wherein the determining, based on the partition indication map and according to a first loss function, a loss value of a first-type area in the second image relative to a first-type area in the first image, to obtain the first loss value comprises:
determining, based on the partition indication map, (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. This section teaches the distortion loss which differs for the ROI region and the background region); and
determining the first loss value (Page 1937, section 2.2.2: “Under the guidance of RM2D, we use differentiated loss functions to optimize the ROI and the background area, dROI and dBG”. This section teaches the distortion loss which differs for the ROI region and the background region).
Ma does not expressly disclose finding errors of pixels in regions to determine loss values.
However, ZHANG teaches using errors of pixels between image to calculate loss values ([0078]: “In S303, a corresponding image pixel loss function is acquired by calculating, based on the skin masks, pixel errors of the same pixel points in facial skin regions in the three-dimensional faces and the training samples”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ma’s loss function to include ZHANG’s pixel loss calculation from pixel error because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, ZHANG’s pixel loss calculation from pixel error permits a method for accurately determining the loss between pixels in images by finding the error first. This known benefit in ZHANG is applicable to Ma’s loss function as they both share characteristics and capabilities, namely, they are directed to determining the loss between two images for training models. Therefore, it would have been recognized that modifying Ma’s loss function to include ZHANG’s pixel loss calculation from pixel error would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate ZHANG’s pixel loss calculation from pixel error in determining the loss between two images for training models and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (“Variable Rate ROI Image Compression Optimized for Visual Quality” Hereinafter “Ma”) in view of ZHANG et al. (US 20170024920 A1 Hereinafter “ZHANG2”).
Regarding claim 14, Ma teaches the method according to claim 2, wherein the partition indication map
Ma does not expressly disclose the partition map being an image gradient map.
However, ZHANG2 teaches using a gradient map to generate mask images ([0018]: “generating a mask image by collecting statistics of gradient information of each region”. Gradient information collected from each region is functionally similar to a map of gradient information. If a map of gradient information were used to obtain the mask of Ma, it would result in the partition indication map being an image gradient map (a mask produced from one). This would result in the first area being the mask region containing the face (which has more structure) and the second area being the background (which is unstructured)).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Ma’s mask generation to include ZHANG2’s mask generation using gradient information because such a modification is the result of applying a known technique to a known device ready for improvement to yield predictable results. More specifically, ZHANG2’s mask generation using gradient information permits a method for accurately generating a mask using gradient information in the image. This known benefit in ZHANG2 is applicable to Ma’s mask generation as they both share characteristics and capabilities, namely, they are directed to generating masks to separate areas in images for further processing. Therefore, it would have been recognized that modifying Ma’s mask generation to include ZHANG2’s mask generation using gradient information would have yielded predictable results because (i) the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate ZHANG2’s mask generation using gradient information and (ii) the benefits of such a combination would have been recognized by those of ordinary skill in the art.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (“Variable Rate ROI Image Compression Optimized for Visual Quality” Hereinafter “Ma”) in view of ZHANG et al. (US 20170024920 A1 Hereinafter “ZHANG2”) in further view of ZUO et al. (KR20090006079A Hereinafter “ZUO”).
Regarding claim 15, the combination of Ma and ZHANG2 teaches the method according to claim 14. In addition, ZHANG2 further teaches wherein the image gradient map is a gradient map represented by gradient masks ([0018]: “generating a mask image by collecting statistics of gradient information of each region”. Maps generated from gradient information are gradient masks).
The rationale for this combination is similar to the rationale for the combination in the rejection of claim 14 due to similar methods of combination (the gradient mask is the mask that is generated by ZHANG2) and benefits (accurate mask generation).
The combination of Ma and ZHANG2 does not expressly disclose the structured area corresponds to an area, in the image gradient map, in which a gradient mask is 1.
However, ZUO teaches binary and gradient masks being alternatives in mask generation (Page 24, paragraph 3: “A determination for each region is provided to the mask generator 2450, which generates a gradient gradient mask and / or a binary gradient gradient determination mask 2495 that indicates a gradient gradient determination for the regions in the image”. By listing the binary gradient mask as an alternative to the gradient mask, ZUO teaches that one could substitute a gradient mask for a binary gradient mask to reliably separate mask areas. By using a binary gradient mask, the structured area mask would be 1).
At the time the invention was effectively filed, it would have been obvious to one of ordinary skill in the art to substitute the combination of Ma and ZHANG2’s gradient mask with ZUO’s binary gradient mask because such a modification is the result of simple substitution of one known element for another producing a predictable result. More specifically ZUO’s binary gradient mask teaches that one could substitute a gradient mask for a binary gradient mask to reliably separate mask areas, and one of ordinary skill in the art would expect similar effects if substituted for the combination of Ma and ZHANG2’s gradient mask.
Allowable Subject Matter
Claims 3-4 and 8-10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
BESENBRUCH et al. (US 20240107022 A1) teaches finding loss between encoded and decoded images.
Coene et al. (US 20130016097 A1) teaches using a gradient mask to substitute for a binary mask.
Zuo et al. (US 20110255595 A1) teaches quantization based on texture level.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEFANO A DARDANO whose telephone number is (703)756-4543. The examiner can normally be reached Monday - Friday 11:00 - 7:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Greg Morse can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEFANO ANTHONY DARDANO/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698