DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on December 30, 2025 and January 12, 2026 comply with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner. Any citations that have not been considered were not considered because they do not comply with 37 CFR 1.98(b), which states: “The date of publication supplied must include at least the month and year of publication, except that the year of publication (without the month) will be accepted if the applicant points out in the information disclosure statement that the year of publication is sufficiently earlier than the effective U.S. filing date and any foreign priority date so that the particular month of publication is not in issue.”
Interpretation under 35 U.S.C. §112(f)
Applicant’s arguments, see page 2, line 9 through line 19, filed February 5, 2026, with respect to the interpretation of claims 14-18 under 35 U.S.C. §112(f), have been fully considered but are not persuasive. The examiner respectfully disagrees, because the generic placeholders are not modified by sufficient structure, material, or acts for performing the claimed function. The interpretation of claims 14-18 under 35 U.S.C. §112(f) is proper and is hereby maintained.
Double Patenting Rejection
Applicant’s arguments, see page 6, line 20 through line 23, filed February 5, 2026, with respect to the rejection of claims 1-19 on the ground of nonstatutory double patenting as being unpatentable over claims 1-5, 7-8 and 10-20 of U.S. Patent Application No. 18/373,814, have been fully considered but are not persuasive.
Applicant argues on page 6, line 20 through line 23 that “A terminal disclaimer was filed in copending Application Serial No. 18/373,814 on December 29, 2025 with respect to the present application. Accordingly, no additional disclaimer is needed in this application, and withdrawal of this ground of rejection is requested.” The examiner respectfully disagrees.
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent. As such, a terminal disclaimer is required in the present application because copending Application Serial No. 18/373,814 has been allowed. The rejection of claims 1-19 on the ground of nonstatutory double patenting as being unpatentable over claims 1-5, 7-8 and 10-20 of U.S. Patent Application No. 18/373,814 is therefore proper and is hereby maintained.
Rejection under 35 U.S.C. §102
Applicant’s arguments, see page 2, line 21 through page 6, line 12, filed February 5, 2026, with respect to the rejection of claims 1-8 and 11-19 under 35 U.S.C. §102(a)(1) as being anticipated by Watanabe et al. (U.S. Patent Application Publication No. US 2005/0104974 A1), have been fully considered and are persuasive. The rejection of claims 1-8 and 11-19 under 35 U.S.C. §102(a)(1) as being anticipated by Watanabe et al. (U.S. Patent Application Publication No. US 2005/0104974 A1) has been withdrawn.
Rejection under 35 U.S.C. §103
Applicant’s arguments, see page 6, line 14 through line 18, filed February 5, 2026, with respect to the rejection of claim 10 under 35 U.S.C. §103 as being unpatentable over Watanabe et al. (U.S. Patent Application Publication No. US 2005/0104974 A1) in view of Hrytzak et al. (U.S. Patent No. 5,327,257), have been fully considered and are persuasive. The rejection of claim 10 under 35 U.S.C. §103 as being unpatentable over Watanabe et al. (U.S. Patent Application Publication No. US 2005/0104974 A1) in view of Hrytzak et al. (U.S. Patent No. 5,327,257) has been withdrawn.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to:
http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp
Claims 1-19 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-5, 7-8 and 10-20 of U.S. Patent Application No. 18/373,814. Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims of the instant application are broader in every aspect than the claims in the above-listed reference application and are therefore obvious variants thereof.
This is a provisional nonstatutory obviousness-type double patenting rejection because the patentably indistinct claims have not in fact been patented.
For example, compare representative claim 9 of the present application with representative claim 1 of copending U.S. Patent Application No. 18/373,814. Claim 9 of the present application recites: A method of applying adaptive sharpening, for a block of input pixels, to determine a block of output pixels, the method comprising (Claim 1 of copending U.S. Patent Application No. 18/373,814 recites: A method of applying adaptive sharpening, for a block of input pixels for which upsampling is performed, to determine a block of output pixels, the method comprising); obtaining a block of sharp pixels based on the block of input pixels, the block of sharp pixels being for representing a sharp version of the block of output pixels (Claim 1 of copending U.S. Patent Application No. 18/373,814 recites: obtaining a block of sharp pixels based on the block of input pixels, the block of sharp pixels being for representing a sharp version of the block of output pixels); determining one or more indications of contrast for the block of input pixels (Claim 1 of copending U.S. Patent Application No. 18/373,814 recites: determining one or more indications of contrast for the block of input pixels); determining each of the output pixels of the block of output pixels by performing a respective weighted sum of (i) a corresponding input pixel in the block of input pixels and (ii) a corresponding sharp pixel in the block of sharp pixels (Claim 1 of copending U.S. Patent Application No. 18/373,814 recites: and determining each of the output pixels of the block of output pixels by performing a respective weighted sum of (i) a corresponding non-sharp upsampled pixel in the block of non-sharp upsampled pixels and (ii) a corresponding sharp upsampled pixel in the block of sharp upsampled pixels); wherein the weights of the weighted sums are based on the determined one or more indications of contrast for the block of input pixels, determining two weights, a first weight, Winput, and a second weight, Wsharp, and wherein the input pixels are multiplied by the first weight, Winput, in the weighted sums and wherein the sharp pixels are multiplied by the second weight, Wsharp, in the weighted sums, and wherein for a majority of the range of possible indications of contrast (i) the first weight, Winput, is larger than the second weight, Wsharp, when the indicated contrast is relatively high, and (ii) the first weight, Winput, is smaller than the second weight, Wsharp, when the indicated contrast is relatively low (Claim 1 of copending U.S. Patent Application No. 18/373,814 recites: wherein the method further comprises determining the weights of the weighted sums based on the determined one or more indications of contrast for the block of input pixels, wherein said determining weights comprises determining two weights, a first weight, Wnon-sharp, and a second weight, Wsharp, and wherein the non-sharp upsampled pixels in the block of non-sharp upsampled pixels are multiplied by the first weight, Wnon-sharp, in the weighted sums and wherein the sharp upsampled pixels in the block of sharp upsampled pixels are multiplied by the second weight, Wsharp, in the weighted sums, and wherein for a majority of the range of possible indications of contrast the first weight, Wnon-sharp, is larger than the second weight, Wsharp, when the indicated contrast is relatively high, and the first weight, Wnon-sharp, is smaller than the second weight, Wsharp, when the indicated contrast is relatively low).
As the comparison shows, the claims recite common subject matter, and the differences relate to variations of the claimed limitations; the manner in which the processing is carried out on the data and/or elements in no way affects how the data would be received from an input, processed, and output within the context of the claims. Therefore, the substitution of the different variations would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention. While claim 1 of copending U.S. Patent Application No. 18/373,814 includes additional limitations that are not set forth in the instant claim 9, the use of the transitional term “comprising” in the instant claim 9 fails to preclude the possibility of additional elements, so that instant claim 9 fails to define an invention that is patentably distinct from claim 1 of copending U.S. Patent Application No. 18/373,814. Furthermore, the elements of instant claim 9 are fully anticipated by the reference claim, and anticipation is “the ultimate or epitome of obviousness” (In re Kalm, 154 USPQ 10 (CCPA 1967); see also In re Dailey, 178 USPQ 293 (CCPA 1973) and In re Pearson, 181 USPQ 641 (CCPA 1974)).
Claims 1-8 and 10-19 of the present application recite limitations which are in most cases word-for-word the same limitations as found in claims 2-5, 7-8 and 10-20, respectively, of copending U.S. Patent Application No. 18/373,814.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a processing module” configured to “apply adaptive sharpening, for a block of input pixels, to determine a block of output pixels” in claim 14; “contrast determination logic” configured to “determine one or more indications of contrast for the block of input pixels” in claim 14; “output pixel determination logic” configured to “receive a block of sharp pixels based on the block of input pixels, the block of sharp pixels being for representing a sharp version of the block of output pixels and determine each of the output pixels of the block of output pixels by performing a respective weighted sum of a corresponding input pixel in the block of input pixels and a corresponding sharp pixel in the block of sharp pixels, wherein the weights of the weighted sums are based on the determined one or more indications of contrast for the block of input pixels” in claim 14; “the contrast determination logic” configured to “identify a minimum pixel value and a maximum pixel value within a window of input pixels, wherein the window of input pixels covers at least a region represented by the block of output pixels and determine a difference between the identified minimum and maximum pixel values within the window of input pixels” in claim 15; “weight determination logic” configured to “determine the weights of the weighted sums based on the determined one or more indications of contrast for the block of input pixels” in claim 16; and “pixel determining logic” configured to “determine the block of sharp pixels based on the block of input pixels and to provide the block of sharp pixels to the output pixel determination logic” in claim 17.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
New Grounds of Rejection
Applicant’s arguments with respect to claims 1-8, 10-14 and 17-19 have been considered but are moot because of the new ground of rejection.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. §102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 4-7, 10-14 and 16-19 are rejected under 35 U.S.C. §102(a)(1) as being anticipated by Hrytzak et al. (U.S. Patent No. 5,327,257) (hereafter referred to as “Hrytzak”).
The examiner would like to point out that the various “units” identified in section 11 hereinabove are being interpreted under 35 U.S.C. 112(f) as described in FIG. 5.
FIG. 15 is a schematic diagram showing the hardware configuration of the super resolution processing module. The above-mentioned configuration of the super resolution processing module is a functional configuration achieved by cooperation of the hardware configuration shown in FIG. 15 and a program. As shown in FIG. 15, the super resolution processing module includes a Digital Signal Processor, a memory, a storage, and an input/output IF as its hardware configuration. These are connected to each other by a bus. The Digital Signal Processor controls the other components in accordance with a program stored in the memory, performs data processing in accordance with the program, and stores the processing results in the memory. The Digital Signal Processor can be a microprocessor. The memory stores the program executed by the Digital Signal Processor and associated data. The memory can be a ROM (Read Only Memory).
With regard to claim 1, Hrytzak describes obtaining a block of sharp pixels based on the block of input pixels, the block of sharp pixels being for representing a sharp version of the block of output pixels (refer for example to column 4, lines 20-23); determining one or more indications of contrast for the block of input pixels (refer for example to column 5, paragraphs two through six, “The image contrast is derived by the use of contrast detector mask coefficients, in the present example, which are stored in memory and are fixed values to detect edges or contrast of an image”); and determining each of the output pixels of the block of output pixels by performing a respective weighted sum of a corresponding input pixel in the block of input pixels and a corresponding sharp pixel in the block of sharp pixels (refer for example to column 5, lines 40-45); wherein the weights of the weighted sums are based on the determined one or more indications of contrast for the block of input pixels (refer for example to column 5, paragraphs two through six, “The image contrast is derived by the use of contrast detector mask coefficients, in the present example, which are stored in memory and are fixed values to detect edges or contrast of an image”, and refer to column 5, lines 55-58, “derivation of S is locally adaptive, i.e. by generating image content variables at the local position where interpolation is taking place”).
As to claim 2, Hrytzak describes the one or more indications of contrast for the block of input pixels is a single indication of contrast for the block of input pixels (refer for example to column 5, lines 3-6, “The image contrast is derived by the use of contrast detector mask coefficients, in the present example, which are stored in memory and are fixed values to detect edges or contrast of an image”), wherein the weights of the weighted sums for determining the output pixels of the block of output pixels are based on the single indication of contrast (refer for example to column 5, lines 15-18, “This gives zero in a flat image area, a large positive value when centered at the bottom of an edge, and a large negative value when centered at the top of an edge”); or the one or more indications of contrast for the block of input pixels comprises a plurality of indications of contrast for the block of input pixels (refer for example to column 8, lines 62-62, “Weighting or scale factors are derived to represent a measure of local image contrast for each channel: Sc1Sm1Sy1”), wherein for each of the output pixels of the block of output pixels, the weights of the weighted sum for determining that output pixel are based on a respective one of the plurality of indications of contrast (refer for example to column 9, lines 11-13, “The net scale factors, ScSmSy, are calculated. The scale factors are derived from the contrast and density weighting coefficients”).
With regard to claim 4, Hrytzak describes wherein the determining one or more indications of contrast for the block of input pixels comprises determining a standard deviation or a variance of the input pixel values within a window of input pixels (refer for example to column 5, lines 14-17, “This gives zero in a flat image area, a large positive value when centered at the bottom of an edge, and a large negative value when centered at the top of an edge”), wherein the window of input pixels covers at least a region represented by the block of output pixels (refer for example to column 4, lines 34-35, “The above method is then repeated by moving to the next local position in the input image for interpolation”).
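Purely as an illustrative sketch by the editor (not drawn from the present application or from Hrytzak; the function name and plain-Python formulation are assumptions), the variance-based contrast indication discussed for claim 4 can be expressed as follows. A flat window yields a value near zero, while a window containing an edge yields a large value.

```python
def contrast_variance(window):
    """Population variance of pixel values in a window of input pixels.

    Illustrative sketch of a variance-based contrast indication:
    near zero for a flat image area, large where an edge is present.
    """
    n = len(window)
    mean = sum(window) / n
    return sum((p - mean) ** 2 for p in window) / n
```

The standard deviation mentioned in the same claim limitation would simply be the square root of this value.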
As to claim 5, Hrytzak describes determining the weights of the weighted sums based on the determined one or more indications of contrast for the block of input pixels (refer for example to column 5, paragraphs two through six, “The image contrast is derived by the use of contrast detector mask coefficients, in the present example, which are stored in memory and are fixed values to detect edges or contrast of an image”).
In regard to claim 6, Hrytzak describes wherein said determining the weights comprises determining two weights, a first weight, Winput, and a second weight, Wsharp, and wherein the input pixels are multiplied by the first weight, Winput, in the weighted sums and wherein the sharp pixels are multiplied by the second weight, Wsharp, in the weighted sums (refer for example to column 5, lines 39-45, “The algorithm by which the interpolated output pixel P is derived adaptively is given by: P = S X + (1 - S) Y where S is the scale factor, and X and Y represent relatively sharp and relatively soft output image pixels”).
With regard to claim 7, Hrytzak describes wherein both Winput and Wsharp are in a range from 0 to 1 (refer for example to column 5, lines 36-37, “weighing factors obtained, each with a value from 0 to 1”), and wherein Winput + Wsharp = 1 (refer for example to column 5, lines 39-45, “The algorithm by which the interpolated output pixel P is derived adaptively is given by: P = S X + (1 - S) Y where S is the scale factor, and X and Y represent relatively sharp and relatively soft output image pixels”).
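As an illustrative sketch only (the function name and range check are the editor's assumptions, not taken from Hrytzak), the quoted formula P = S X + (1 - S) Y can be written out directly; treating the scale factor S as the sharp-pixel weight and (1 - S) as the input-pixel weight makes both claim 7 conditions hold by construction: each weight lies in [0, 1] and the two weights sum to 1.

```python
def adaptive_blend(input_pixel, sharp_pixel, s):
    """Blend a (soft) input pixel with its sharpened counterpart.

    Sketch of the adaptive formula quoted from Hrytzak,
    P = S*X + (1 - S)*Y, where X is the sharp pixel and Y the soft one.
    Here s plays the role of Wsharp and (1 - s) the role of Winput,
    so Winput + Wsharp = 1 holds automatically.
    """
    if not 0.0 <= s <= 1.0:
        raise ValueError("scale factor must be in [0, 1]")
    w_sharp = s
    w_input = 1.0 - s  # complementary weights: Winput + Wsharp = 1
    return w_input * input_pixel + w_sharp * sharp_pixel
```

With s = 0 the output equals the input pixel, with s = 1 it equals the sharp pixel, and intermediate values of s interpolate between the two.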
With regard to claim 10, Hrytzak describes wherein if the indication of contrast is below a threshold indicating that the block of input pixels is substantially flat, then the first weight, Winput, is determined to be greater than zero and the second weight, Wsharp, is determined to be zero (refer for example to column 5, lines 3-6, which describe “This gives zero in a flat image area, a large positive value when centered at the bottom of an edge, and a large negative value when centered at the top of an edge”).
As to claim 11, Hrytzak describes wherein said obtaining a block of sharp pixels comprises determining the block of sharp pixels by implementing a sharpening technique on the block of input pixels (refer for example to column 2, lines 45-47, “The first interpolated output image pixel may be obtained by the application of a first interpolation algorithm producing a relatively sharp result”).
In regard to claim 12, Hrytzak describes the block of input pixels is a 4x4 block of input pixels, the block of output pixels is a 2x2 block of output pixels, and the block of sharp pixels is a 2x2 block of sharp pixels (refer for example to column 4, lines 20-35, “using the input pixel data to generate an interpolated pixel data set comprising a sharp and a soft interpolated output image pixel (X,Y), respectively” … “The above method is then repeated by moving to the next local position in the input image for interpolation”).
With regard to claim 13, Hrytzak describes outputting the block of output pixels for storage in a memory, for display or for transmission (see Figure 12 and refer for example to column 10, line 55 through column 11, line 4, which describe that the output pixels are viewed on a display and transmitted to the ink jet printer for printing).
As to claim 14, Hrytzak describes a processing module comprising contrast determination logic (see Figure 5 and refer for example to column 8, lines 24-36) configured to determine one or more indications of contrast for the block of input pixels (refer for example to column 5, paragraphs two through six, “The image contrast is derived by the use of contrast detector mask coefficients, in the present example, which are stored in memory and are fixed values to detect edges or contrast of an image”); and output pixel determination logic (see Figure 5 and refer for example to column 8, lines 24-36) configured to receive a block of sharp pixels based on the block of input pixels, the block of sharp pixels being for representing a sharp version of the block of output pixels (refer for example to column 4, lines 20-23); and determine each of the output pixels of the block of output pixels by performing a respective weighted sum of a corresponding input pixel in the block of input pixels and a corresponding sharp pixel in the block of sharp pixels (refer for example to column 5, lines 40-45), wherein the weights of the weighted sums are based on the determined one or more indications of contrast for the block of input pixels (refer for example to column 5, paragraphs two through six, “The image contrast is derived by the use of contrast detector mask coefficients, in the present example, which are stored in memory and are fixed values to detect edges or contrast of an image”, and refer to column 5, lines 55-58, “derivation of S is locally adaptive, i.e. by generating image content variables at the local position where interpolation is taking place”).
With regard to claim 16, Hrytzak describes weight determination logic (refer for example column 8, lines 27-36) configured to determine the weights of the weighted sums based on the determined one or more indications of contrast for the block of input pixels (refer to column 5, paragraphs two through six, “The image contrast is derived by the use of contrast detector mask coefficients, in the present example, which are stored in memory and are fixed values to detect edges or contrast of an image”, and refer to column 5, lines 55-58, “derivation of S is locally adaptive, i.e. by generating image content variables at the local position where interpolation is taking place”).
As to claim 17, Hrytzak describes pixel determining logic (refer for example to column 8, lines 27-36) configured to determine the block of sharp pixels based on the block of input pixels (refer for example to column 4, lines 20-23, “using the input pixel data to generate an interpolated data set comprising a sharp and a soft interpolated output image pixel (X,Y), respectively”); and to provide the block of sharp pixels to the output pixel determination logic (refer for example to column 5, lines 39-40, “The algorithm by which the interpolated output pixel P”).
In regard to claim 18, Hrytzak describes wherein the processing module is embodied in hardware on an integrated circuit (refer for example column 8, lines 27-36).
With regard to claim 19, Hrytzak describes a non-transitory computer readable storage medium having stored thereon an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, configures the integrated circuit manufacturing system to manufacture a processing module which is configured to apply adaptive sharpening, for a block of input pixels, to determine a block of output pixels (refer for example to column 4, lines 20-23); the processing module comprising contrast determination logic (see Figure 5 and refer for example to column 8, lines 24-36) configured to determine one or more indications of contrast for the block of input pixels (refer for example to column 5, paragraphs two through six, “The image contrast is derived by the use of contrast detector mask coefficients, in the present example, which are stored in memory and are fixed values to detect edges or contrast of an image”); and output pixel determination logic (see Figure 5 and refer for example to column 8, lines 24-36) configured to receive a block of sharp pixels based on the block of input pixels, the block of sharp pixels being for representing a sharp version of the block of output pixels (refer for example to column 4, lines 20-23); and determine each of the output pixels of the block of output pixels by performing a respective weighted sum of a corresponding input pixel in the block of input pixels and a corresponding sharp pixel in the block of sharp pixels (refer for example to column 5, lines 40-45), wherein the weights of the weighted sums are based on the determined one or more indications of contrast for the block of input pixels (refer to column 5, paragraphs two through six, “The image contrast is derived by the use of contrast detector mask coefficients, in the present example, which are stored in memory and are fixed values to detect edges or contrast of an image”, and refer to column 5, lines 55-58, “derivation of S is locally adaptive, i.e. by generating image content variables at the local position where interpolation is taking place”).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 3 and 15 are rejected under 35 U.S.C. §103 as being unpatentable over Hrytzak et al. (U.S. Patent No. 5,327,257) in view of Sharda ("Understanding Image Contrast Algorithms" – cited on the IDS filed on 12/30/2025) (hereafter referred to as "Sharda").
The arguments advanced in section 15 above, as to the applicability of Hrytzak, are incorporated herein.
In regard to claims 3 and 15, Hrytzak describes wherein said determining one or more indications of contrast for the block of input pixels comprises identifying a minimum pixel value and a maximum pixel value within a window of input pixels, wherein the window of input pixels covers at least a region represented by the block of output pixels (refer for example to column 4, lines 34-35, "The above method is then repeated by moving to the next local position in the input image"); however, Hrytzak does not explicitly describe determining a difference between the identified minimum and maximum pixel values within the window of input pixels, although such a technique is well known and widely utilized in the prior art.
Sharda describes determining a difference between the identified minimum and maximum pixel values within the window of input pixels (refer for example to page 7, paragraph 2 “Contrast is really just a measure of the difference between the maximum and minimum pixel intensities in an image”).
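For illustration of the measure Sharda describes, the max−min contrast indication may be sketched as follows (a hypothetical Python sketch by the editor; the function name is the editor's assumption):

```python
def window_contrast(window):
    # Indication of contrast for a window of input pixels:
    # the difference between the identified maximum and minimum
    # pixel values within the window.
    return max(window) - min(window)
```

A uniform window yields zero contrast; a window spanning the full pixel range yields the maximum indication.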
Given the teachings of the two references and the same environment of operation, namely that of systems that provide for obtaining contrast in an image, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Hrytzak system in the manner described by Sharda according to known methods to yield predictable results, and one of ordinary skill would have been motivated to do so with a reasonable expectation of success in order to provide for increased processing efficiency and higher accuracy, as suggested by Sharda. Such a modification fails to patentably distinguish over the prior art absent some novel and unexpected result.
Claim 8 is rejected under 35 U.S.C. §103 as being unpatentable over Hrytzak et al. (U.S. Patent No. 5,327,257) in view of Image Effects ("Unsharp Mask" – cited on the IDS filed on 12/30/2025) (hereafter referred to as "Image Effects").
The arguments advanced in section 15 above, as to the applicability of Hrytzak, are incorporated herein.
As to claim 8, Hrytzak describes wherein Winput + Wsharp = 1 (refer for example to column 5, lines 39-45, "The algorithm by which the interpolated output pixel P is derived adaptively is given by: P = S X + (1 - S) Y where S is the scale factor, and X and Y represent relatively sharp and relatively soft output image pixels"); however, Hrytzak does not describe wherein "a sharpness boost is applied by setting Wsharp greater than 1", although such a technique is well known and widely utilized in the prior art.
Image Effects describes wherein "a sharpness boost is applied by setting Wsharp greater than 1" (see the second bullet point, which describes "The level of enhancement to be applied to the fine detail. Values greater than 100% will super-enhance any complex areas").
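For illustration of the claimed boost only, relaxing the constraint Winput + Wsharp = 1 so that Wsharp may exceed 1 may be sketched as follows (a hypothetical Python sketch by the editor; the function name is the editor's assumption, not taken from either reference):

```python
def boosted_sharpen(input_px, sharp_px, w_sharp):
    # Weighted sum with Winput = 1 - Wsharp, so Winput + Wsharp = 1.
    # Setting w_sharp > 1 makes Winput negative, applying a sharpness
    # boost that pushes the output beyond the sharp pixel value,
    # super-enhancing fine detail.
    return (1.0 - w_sharp) * input_px + w_sharp * sharp_px
```

With w_sharp = 1.5, for example, an input value of 10 and a sharp value of 20 yield an output of 25, beyond the sharp pixel itself.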
Given the teachings of the two references and the same environment of operation, namely that of systems that provide for obtaining contrast in an image, it would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Hrytzak system in the manner described by Image Effects according to known methods to yield predictable results, and one of ordinary skill would have been motivated to do so with a reasonable expectation of success in order to provide for increased processing efficiency and higher accuracy, as suggested by Image Effects. Such a modification fails to patentably distinguish over the prior art absent some novel and unexpected result.
Allowable Subject Matter
Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Song, Adams Jr., and Takemoto all disclose systems similar to applicant's claimed invention.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jose L. Couso whose telephone number is (571) 272-7388. The examiner can normally be reached on Monday through Friday from 5:30am to 1:30pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached on 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Center information webpage on the USPTO website. For more information about the Patent Center, see https://www.uspto.gov/patents/apply/patent-center. Should you have questions about access to the Patent Center, contact the Patent Electronic Business Center (EBC) at 571-272-4100 or via email at: ebc@uspto.gov.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
/JOSE L COUSO/Primary Examiner, Art Unit 2667
February 17, 2026