Prosecution Insights
Last updated: April 19, 2026
Application No. 19/106,776

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

Non-Final OA §103
Filed
Feb 26, 2025
Examiner
TRAN, TRANG U
Art Unit
2422
Tech Center
2400 — Computer Networks
Assignee
Hamamatsu Photonics K.K.
OA Round
1 (Non-Final)
Grant Probability: 79% (Favorable)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 79% (above average; 719 granted / 915 resolved; +20.6% vs TC avg)
Interview Lift: +15.9% (strong), based on resolved cases with interview
Avg Prosecution: 2y 10m (typical timeline); 20 applications currently pending
Total Applications: 935 across all art units (career history)
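As a sanity check on the card's arithmetic, the displayed figures can be reproduced in a few lines of Python. Treating the +15.9% interview lift as simply additive is an assumption here, inferred from how the 94% with-interview figure appears to be derived:

```python
def allow_rate(granted: int, resolved: int) -> float:
    # Career allow rate as a percentage of resolved cases
    return 100.0 * granted / resolved

rate = allow_rate(719, 915)   # ~78.6, displayed as 79%
# Assuming the +15.9% interview lift is applied additively:
with_interview = rate + 15.9  # ~94.5, displayed as 94%
```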

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 45.9% (+5.9% vs TC avg)
§102: 35.2% (-4.8% vs TC avg)
§112: 2.7% (-37.3% vs TC avg)
Deltas are measured against Tech Center average estimates • Based on career data from 915 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ozcan et al. (US 2022/0114711 A1) in view of Takeshima Tomochika et al. (JP 2021071936 A).

In considering claim 1, Ozcan et al. disclose all the claimed subject matter. Note:

1) the claimed "a processing unit configured to input an input image to a convolutional neural network, and output an output image from the convolutional neural network" is met by x being the low-resolution input image 20 to the generator network 120, g(x) being the network output image, and the loss being computed using g(x) (Fig. 35A, page 22, paragraph #0197 to page 23, paragraph #0200); and

2) the claimed "a training unit configured to use an evaluation function including an error evaluation term representing an evaluation value related to an error between the output image and the target image (the L1 loss is the mean pixel difference between the generator's output 124 and the ground truth image) and a regularization term representing an evaluation value related to a difference of pixel values between adjacent pixels in the output image (the formula [0201] calculates the anisotropic total variation loss using differences between adjacent pixels), and train the convolutional neural network based on a value of the evaluation function" is met by training the deep neural network 10 based on the overall loss function for the generator network (Fig. 35A, page 23, paragraph #0199 to paragraph #0203).

However, Ozcan et al. do not explicitly disclose the claimed "wherein the output image after respective processes of the processing unit and the training unit are repeatedly performed a plurality of times is set as an image after the noise reduction processing." Takeshima Tomochika et al. teach that the first processing unit 10 repeatedly learns a convolutional neural network (CNN) by using a random noise image Bn as an input image and the target image A as a teaching image for each of N pieces of random noise images B1-BN, and acquires an image output from the CNN after the repeated learning as an intermediate image Cn (Fig. 1; see the abstract). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the repeatedly performed training as taught by Takeshima Tomochika et al. into Ozcan et al.'s system in order to effectively reduce noise of the target image even when only one target image exists or even when the SN ratio of the target image is low.
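For readers less familiar with the loss terms the rejection maps onto claim 1, here is a minimal NumPy sketch of an evaluation function combining an L1 error term with an anisotropic total-variation regularizer. The weight `lam` and the normalization by pixel count are illustrative assumptions, not values taken from either reference:

```python
import numpy as np

def l1_loss(output, target):
    # Error term: mean absolute pixel difference between
    # the network output and the ground-truth target image
    return np.mean(np.abs(output - target))

def anisotropic_tv(output):
    # Regularization term: sum of absolute differences between
    # horizontally and vertically adjacent pixels, per pixel
    dh = np.abs(output[:, 1:] - output[:, :-1])
    dv = np.abs(output[1:, :] - output[:-1, :])
    return (dh.sum() + dv.sum()) / output.size

def evaluation_function(output, target, lam=0.1):
    # Combined loss: error term plus weighted regularization term
    return l1_loss(output, target) + lam * anisotropic_tv(output)
```

A constant image incurs zero total-variation penalty, which is why this term discourages pixel-to-pixel noise while tolerating flat regions.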
In considering claim 2, the claimed wherein the target image is a tomographic image of a subject created based on coincidence information collected by using a radiation tomography apparatus is met by the tomographic image (page 1, lines 17-26 of Takeshima Tomochika et al.). The motivation to combine the references has been discussed in claim 1 above. In considering claim 3, the claimed wherein the processing unit is configured to input an image representing morphological information of the subject to the convolutional neural network as the input image is met by the TIRF-SIM images that undergo rapid morphological changes during development (Figs. 31A-31O, page 5, paragraph #0060 of Ozcan et al.). The motivation to combine the references has been discussed in claim 1 above. In considering claim 4, the claimed wherein the processing unit is configured to input an MRI image of the subject to the convolutional neural network as the input image is met by the MRI image (page 1, lines 17-26 of Takeshima Tomochika et al.). The motivation to combine the references has been discussed in claim 1 above. In considering claim 5, the claimed wherein the processing unit is configured to input a CT image of the subject to the convolutional neural network as the input image is met by the CT image (page 1, lines 17-26 of Takeshima Tomochika et al.). The motivation to combine the references has been discussed in claim 1 above. In considering claim 6, the claimed wherein the processing unit is configured to input a static PET image of the subject to the convolutional neural network as the input image is met by the PET image (page 1, lines 17-26 of Takeshima Tomochika et al.). The motivation to combine the references has been discussed in claim 1 above. In considering claim 7, the claimed wherein the processing unit is configured to input a random noise image to the convolutional neural network as the input image is met by the random noise image B (Fig. 
1, page 2, lines 26-44 of Takeshima Tomochika et al.). The motivation to combine the references has been discussed in claim 1 above. Method claims 8-14 are rejected for the same reason as discussed in apparatus claims 1-7 above, respectively. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Anastasio et al. (US 2021/0150779 A1) disclose deep learning-assisted image reconstruction for tomographic imaging. Matsuura et al. (US 2020/0311878 A1) disclose apparatus and method for image reconstruction using feature-aware deep learning. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRANG U TRAN whose telephone number is (571)272-7358. The examiner can normally be reached M-F 10:00AM- 6:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN W. MILLER can be reached at 571-272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. February 24, 2026 /TRANG U TRAN/Primary Examiner, Art Unit 2422
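The repeated-training scheme the rejection draws from the Takeshima reference (fit a network to map each random-noise input Bn toward the single target image, collect the fitted output as an intermediate image Cn, then combine the intermediates) can be illustrated with a toy sketch. Here a per-pixel affine map stands in for the CNN, and plain averaging of the intermediates is an assumption for illustration; this is not the reference's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def repeated_training_denoise(target, n_noise_images=3, steps=200, lr=0.1):
    """Toy sketch: for each random-noise input B_n, repeatedly fit a
    per-pixel affine 'network' y = w * B_n + b toward the single target
    image, keep the fitted output as an intermediate image C_n, and
    combine the intermediates by averaging (an illustrative choice)."""
    intermediates = []
    for _ in range(n_noise_images):
        noise = rng.standard_normal(target.shape)       # random noise image B_n
        w = np.ones_like(target)
        b = np.zeros_like(target)
        for _ in range(steps):                          # repeated learning
            out = w * noise + b
            grad = out - target                         # gradient of 0.5 * MSE
            w -= lr * grad * noise
            b -= lr * grad
        intermediates.append(w * noise + b)             # intermediate image C_n
    return np.mean(intermediates, axis=0)
```

Because the toy model has one weight and one bias per pixel, each fit converges to the target almost exactly; with a real CNN, the network's limited capacity is what suppresses noise while preserving structure.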

Prosecution Timeline

Feb 26, 2025
Application Filed
Feb 26, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603986
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12598288
METHOD AND DEVICE FOR DETECTING POWER STABILITY OF IMAGE SENSOR
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12596077
Passive Camera Lens Smudge Detection
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591995
METHOD AND APPARATUS FOR DEFORMATION MEASUREMENT, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12576717
DRIVING ASSISTANCE APPARATUS
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 94% (+15.9%)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 915 resolved cases by this examiner. Grant probability derived from career allow rate.
