Prosecution Insights
Last updated: April 19, 2026
Application No. 18/021,219

IMAGE RESTORATION SYSTEM AND IMAGE RESTORATION METHOD

Final Rejection — §103, §112
Filed: Feb 14, 2023
Examiner: ZHAO, CHRISTINE NMN
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Hitachi High-Tech Corporation
OA Round: 2 (Final)
Grant Probability: 61% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% (11 granted / 18 resolved; -0.9% vs TC avg)
Interview Lift: +58.3% (strong), among resolved cases with interview
Avg Prosecution: 3y 0m (typical timeline)
Currently Pending: 19
Total Applications: 37 (across all art units)
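The headline allow rate above is a simple ratio of the counts shown on this page. As an illustration (using only the figures reported here, not USPTO data), it can be reproduced as:

```python
# Reproduce the dashboard's career allow rate from the counts shown above.
# Illustrative arithmetic only; 11 granted of 18 resolved cases.
granted, resolved = 11, 18
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # 11/18 rounds to 61%
```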

Statute-Specific Performance

§101: 11.5% (-28.5% vs TC avg)
§103: 58.2% (+18.2% vs TC avg)
§102: 8.2% (-31.8% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 18 resolved cases

Office Action

Rejection bases: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed August 21, 2025 has been entered. Claims 1-8 and 14-21 remain pending in the application. Applicant's amendments to the Claims have overcome the 112(a) and 112(b) rejections previously set forth in the Non-Final Office Action mailed June 5, 2025. In addition, the claim interpretation under 112(f) is withdrawn.

Claim Objections

Claims 2, 6 and 20 are objected to because of the following informalities:
- In claim 2, line 3, "configured to predicts" should read "configured to predict".
- In claim 6, lines 4-5, "a modified prediction image corrected by the deformation correction unit" should read "a corrected prediction image".
- In claim 20, line 4, "of the processor" should be deleted.
Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 2 and 15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

The specification discloses an embodiment 1 of the disclosure in paragraph 0053 (also see FIG. 6), where the deformation prediction unit predicts a deformation amount of a prediction image on the basis of the deformation amount data stored in advance in the deformation amount database 19. The specification discloses a separate embodiment 2 of the disclosure in paragraph 0065 (also see FIG. 10), where the deformation prediction unit is configured as a CNN without using a deformation amount database 19. The specification fails to have support for an embodiment in which the deformation prediction unit predicts a deformation amount based on both a database and a CNN, which is being claimed by claims 2 and 15, since claim 2 includes the limitations of claim 1 and claim 15 includes the limitations of claim 14, respectively.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1, 3, 5-7, 14, 16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (NPL "First image then video: A two-stage network for spatiotemporal video denoising").
Regarding claim 1, Wang discloses an image restoration system that restores image quality of a low quality image (page 2, left hand column [LHC], first full paragraph: "we propose our neural model, called FITVNet…an end-to-end network for video denoising"), the image restoration system comprising: a processor configured to: restore the image quality of the low quality image (Figure 3: Spatial image denoising module; page 2, LHC, first full paragraph: "the first module with an image-to-image network architecture, which reduces the noise within each single image/frame via spatial processing"); predict a deformation amount that has occurred between a first low quality image and a different second low quality image (page 1, right hand column [RHC], first full paragraph; page 5, LHC, first paragraph under 3.3 The spatiotemporal video denoising stage: the spatiotemporal video denoising network captures temporal deformation information between neighboring frames resulting from object motion among these frames), these being included in a series of input low quality images (page 4, LHC, first paragraph under 3.2 The spatial image denoising stage: "an input sequence of 2K + 1 = 5 frames" shown in Figure 3 as the images Ît+k input into the Spatial image denoising module); and correct one of a first prediction image (Figure 3: Spatiotemporal video denoising blocks; page 5, LHC, first paragraph under 3.3 The spatiotemporal video denoising stage: "the module takes 2K + 1 consecutive prior denoised frames Ψ(Ît+k)"), the second low quality image, or a second prediction image, the first prediction image (Figure 3: an image Ψ(Ît+k) output from the Spatial image denoising module) being obtained by applying processing of the processor to the first low quality image (Figure 3: the corresponding image Ît+k input into the Spatial image denoising module), and the second prediction image (Figure 3: another image Ψ(Ît+k) output from the Spatial image denoising module) being obtained by applying the processing of the processor to the second low quality image (Figure 3: the corresponding image Ît+k input into the Spatial image denoising module),

wherein training is performed (page 2, LHC, first full paragraph: "These two modules are jointly supervised by a proposed loss function") to reduce an evaluation of a loss function (Equation 11: Lst) between the first prediction image corrected by the processor (Equation 11: ϕ(Ψ(Ît+k))), and the second low quality image or the second prediction image (Equation 11: Ψ(Ît)), or an evaluation of a loss function between the first prediction image, and the second prediction image or the second low quality image corrected by the processor (this limitation is recited in an alternative clause and thus, the rejection reads only on the first alternative),

wherein the processor is configured to receive an input of the first low quality image or the first prediction image (Figure 3: Ψ(Ît+k)) and of the second low quality image or the second prediction image (Figure 3: Ψ(Ît)), and predict the deformation amount to reduce an evaluation of a loss function between the two inputs after deformation correction (Equation 11: the loss function is between the first prediction image after deformation correction and the second prediction image).

However, Wang, in the main embodiment (FITVNet), fails to explicitly disclose correcting one of a first prediction image, the second low quality image, or a second prediction image on a basis of the deformation amount. Wang, in another embodiment (DVDNet), discloses correcting an image (Equation 4; page 3, RHC, second paragraph under 3.1 Theoretical background: "utilize these intermediate variables to predict clean reference frame It") on a basis of the deformation amount (page 3, RHC, second paragraph under 3.1 Theoretical background: "estimate optical flow δkx between observed neighbouring frames and reference frame").
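For readers unfamiliar with the cited loss structure: the mapping above reads the claimed deformation correction onto a loss computed between a deformation-corrected neighbor prediction and a reference prediction. A minimal NumPy sketch of that structure, using a hypothetical rigid integer shift in place of Wang's learned optical flow (illustrative only, not the cited network):

```python
import numpy as np

def warp(img, shift):
    """Hypothetical stand-in for deformation correction: a rigid integer
    translation along each axis (a real system would use learned optical flow)."""
    return np.roll(img, shift, axis=(0, 1))

def spatiotemporal_loss(pred_k, pred_t, shift):
    """L2 loss between the deformation-corrected neighbor prediction and the
    reference prediction, mirroring the Lst structure described above."""
    return float(np.mean((warp(pred_k, shift) - pred_t) ** 2))

# A neighbor prediction that is the reference shifted by (1, 2): correcting
# with the true inverse shift (-1, -2) drives the loss to zero, so minimizing
# the loss over candidate shifts recovers the deformation amount.
rng = np.random.default_rng(0)
ref = rng.standard_normal((8, 8))
neighbor = np.roll(ref, (1, 2), axis=(0, 1))
assert spatiotemporal_loss(neighbor, ref, (-1, -2)) < 1e-12
assert spatiotemporal_loss(neighbor, ref, (0, 0)) > 0.0
```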
The main embodiment FITVNet realizes the second module as a regular spatiotemporal video denoising network. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the main embodiment of Wang to incorporate the teachings of the other embodiment of Wang as an example of a regular video denoising network to obtain state-of-the-art results (page 3, LHC, first paragraph under 2.2 Video denoising).

Regarding claim 3, Wang discloses the image restoration system claimed in claim 1, wherein the series of low quality images is a series of images obtained by imaging a same position of a same sample twice or more (Figure 3: the series of images Ît+k is clearly obtained by imaging a same position of a same scene multiple times).

Regarding claim 5, Wang discloses the image restoration system claimed in claim 1, wherein the processor is configured to obtain a prediction image of each low quality image by machine learning that uses a Convolution Neural Network (CNN) (page 4, LHC, first paragraph under 3.2 The spatial image denoising stage: "For the first image denoising stage, we use a modified architecture of Noise2Noise to approximate the above mapping function Ψ, which tries to spatially remove intra-frame noise in an input sequence of 2K + 1 = 5 frames" where it is known in the art that Noise2Noise is a CNN, as evidenced by supporting NPL document "Noise2Noise: Learning Image Restoration without Clean Data" cited by Wang – see Lehtinen page 11, A.1. Network architecture; Table 2).
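The Noise2Noise property relied on for claim 5 (training a denoiser against noisy targets rather than clean ones) works because the minimizer of an L2 loss is a conditional mean, so independent zero-mean noise in the target averages out. A toy illustration of that statistical fact, not Wang's or Lehtinen's actual network:

```python
import numpy as np

# Toy illustration of the Noise2Noise principle: under an L2 loss, training
# against independently noisy targets recovers the same answer as training
# against clean targets, because the L2 minimizer is the (conditional) mean.
rng = np.random.default_rng(1)
clean = 0.7                                            # true pixel value
noisy_targets = clean + rng.normal(0.0, 0.1, 10_000)   # zero-mean target noise

# The constant c minimizing mean((c - targets)**2) is the target mean,
# which converges to the clean value; no clean data is required.
estimate = noisy_targets.mean()
assert abs(estimate - clean) < 0.01
```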
Regarding claim 6, Wang discloses the image restoration system claimed in claim 1, wherein the processor is further configured to: evaluate an error of image restoration (Equation 10: the final loss function for the model includes Lst) using a corrected prediction image (Equation 11: ϕ(Ψ(Ît+k))) and a correction-target low quality image (Equation 11: Ψ(Ît)); and evaluate the error of the image restoration (page 2, LHC, first full paragraph: "These two modules are jointly supervised by a proposed loss function"), using an absolute error, a square error (pages 4-5: the modules are trained with L2 loss which uses squared error), or a likelihood function on a basis of one of a Gaussian distribution, a Poisson distribution, and a gamma distribution.

Regarding claim 7, Wang discloses the image restoration system claimed in claim 6, wherein the processor is further configured to: update a parameter of an image restoration model on a basis of an evaluation result of the processor (page 5, RHC, third paragraph under 4.1 Setup: "We optimize the final loss function via ADAM optimizer with default hyperparameters"), wherein the parameter of the image restoration model is updated to reduce the error of the image restoration at the processor (it is known in the art that the algorithm ADAM involves minimizing an objective function with respect to its parameters, returning the resulting parameters, as evidenced by supporting NPL document "ADAM: A method for stochastic optimization" cited by Wang – see Kingma page 2, 2 Algorithm; Algorithm 1).

Regarding claim 14, it is the corresponding method whose limitations are encompassed by the system claimed in claim 1. Therefore, Wang discloses the limitations of claim 14 as it does the limitations of claim 1.

Regarding claim 16, it is the corresponding method whose limitations are encompassed by the system claimed in claim 3. Therefore, Wang discloses the limitations of claim 16 as it does the limitations of claim 3.
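As the claim 7 mapping notes, ADAM iteratively updates model parameters to minimize an objective function. A compact NumPy sketch of the standard update rule from Kingma's Algorithm 1, applied here to a simple quadratic objective (an illustrative toy, not the training setup used by Wang):

```python
import numpy as np

def adam_minimize(grad_fn, theta, steps=5000, lr=0.01,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimize an objective via its gradient using the ADAM update rule:
    exponential moving averages of the gradient and its square, bias
    correction, then a scaled parameter step."""
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g        # first moment (mean of grads)
        v = beta2 * v + (1 - beta2) * g * g    # second moment (uncentered)
        m_hat = m / (1 - beta1 ** t)           # bias-corrected estimates
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Minimize f(theta) = ||theta - target||^2; its gradient is 2 * (theta - target).
target = np.array([3.0, -1.5])
theta = adam_minimize(lambda th: 2 * (th - target), np.zeros(2))
assert np.allclose(theta, target, atol=0.05)
```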
Regarding claim 18, it is the corresponding method whose limitations are encompassed by the system claimed in claim 5. Therefore, Wang discloses the limitations of claim 18 as it does the limitations of claim 5.

Regarding claim 19, it is the corresponding method whose limitations are encompassed by the system claimed in claim 6. Therefore, Wang discloses the limitations of claim 19 as it does the limitations of claim 6.

Regarding claim 20, it is the corresponding method whose limitations are encompassed by the system claimed in claim 7. Therefore, Wang discloses the limitations of claim 20 as it does the limitations of claim 7.

Claim(s) 2, 4, 8, 15, 17 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Sekiguchi et al. (US 2015/0036914 A1).

Regarding claim 2, Wang discloses the image restoration system claimed in claim 1. However, Wang fails to disclose predicting a deformation amount occurring in the first low quality image using a deformation amount database designed in advance. In the related art of deformation correction from electron beam irradiation, Sekiguchi discloses predicting a deformation amount occurring in the first low quality image using a deformation amount database designed in advance (Sekiguchi paragraphs 0015, 0059: "a shrink database which includes…a shrink model" where "The shrink model (317) is generated by modeling the relationship between the electron beam irradiation amount and the amount of change in the shape owing to shrink"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang to incorporate the teachings of Sekiguchi to observe the shape at the point observed with the CD-SEM without causing any damage and with high accuracy (Sekiguchi paragraph 0097).

Regarding claim 4, Wang discloses the image restoration system claimed in claim 1.
However, Wang fails to disclose predicting a deformation amount between respective prediction images on a basis of deformation amount data stored in advance in a deformation amount database. In related art, Sekiguchi discloses predicting a deformation amount between respective prediction images on a basis of deformation amount data stored in advance in a deformation amount database (Sekiguchi paragraphs 0015, 0059: "a shrink database which includes…a shrink model" where "The shrink model (317) is generated by modeling the relationship between the electron beam irradiation amount and the amount of change in the shape owing to shrink"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang to incorporate the teachings of Sekiguchi to observe the shape at the point observed with the CD-SEM without causing any damage and with high accuracy (Sekiguchi paragraph 0097).

Regarding claim 8, Wang discloses the image restoration system claimed in claim 1, wherein the processor is further configured to: perform training processing of an image restoration model (Wang page 5, RHC, third paragraph under 4.1 Setup: "The whole network with two models is implemented in PyTorch with a mini-batch of size 16" where it is known in the art that PyTorch is mainly used to train machine learning models on a processor, such as a GPU, as evidenced by supporting NPL document "Automatic differentiation in PyTorch" cited by Wang – see Paszke page 3, first paragraph under Memory management), predict deformation occurring between the series of low quality images, and correct the first prediction image on a basis of the predicted deformation amount (as claimed in claim 1). However, Wang fails to explicitly disclose an image database that stores the series of low quality images and an imaging condition.
In related art, Sekiguchi discloses an image database that stores the series of low quality images and an imaging condition (Sekiguchi paragraph 0039: "The measurement conditions and measurement point are stored in the memory 232 of the control process unit 230 together with the CD-SEM images of the measured secondary electron 220"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Wang to incorporate the teachings of Sekiguchi to allow estimation of the cross-sectional shape and the dimension before shrink, which cannot be observed by the CD-SEM (Sekiguchi paragraph 0066).

Regarding claim 15, it is the corresponding method whose limitations are encompassed by the system claimed in claim 2. Therefore, Wang, modified by Sekiguchi, discloses the limitations of claim 15 as it does the limitations of claim 2.

Regarding claim 17, it is the corresponding method whose limitations are encompassed by the system claimed in claim 4. Therefore, Wang, modified by Sekiguchi, discloses the limitations of claim 17 as it does the limitations of claim 4.

Regarding claim 21, it is the corresponding method whose limitations are encompassed by the system claimed in claim 8. Therefore, Wang, modified by Sekiguchi, discloses the limitations of claim 21 as it does the limitations of claim 8.

Response to Arguments

Applicant's arguments have been fully considered but they are not persuasive. Regarding the argument that "None of the cited references disclose the subject matter of claim 1, as amended", Wang discloses the limitations of amended claim 1, as delineated above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lehtinen et al. (NPL "Noise2Noise: Learning Image Restoration without Clean Data") discloses Noise2Noise is a convolution neural network (Lehtinen page 11, A.1. Network architecture; Table 2). Kingma et al. (NPL "ADAM: A method for stochastic optimization") discloses the algorithm ADAM minimizes a stochastic objective function by iteratively updating parameters, returning the resulting parameters (Kingma page 2, 2 Algorithm; Algorithm 1). Paszke et al. (NPL "Automatic differentiation in PyTorch") discloses the main use case for PyTorch is training machine learning models on GPU (Paszke page 3, first paragraph under Memory management).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE ZHAO whose telephone number is (703)756-5986. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Bee, can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.Z./
Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Feb 14, 2023
Application Filed
Jun 02, 2025
Non-Final Rejection — §103, §112
Aug 21, 2025
Response Filed
Nov 12, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536695
TRENCH PROFILE DETERMINATION BY MOTION
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12524883
Systems and Methods for Assessing Cell Growth Rates
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12518391
SYSTEM AND METHOD FOR IMPROVING IMAGE SEGMENTATION
Granted Jan 06, 2026 (2y 5m to grant)

Patent 12511900
System and Method for Impact Detection and Analysis
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12493946
APPARATUS AND METHOD FOR VERIFYING OPTICAL FIBER WORK USING ARTIFICIAL INTELLIGENCE
Granted Dec 09, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 61%
With Interview: 99% (+58.3%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
