Prosecution Insights
Last updated: April 19, 2026
Application No. 18/364,089

COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN CONTROL PROGRAM, CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS

Non-Final OA §103, §112
Filed
Aug 02, 2023
Examiner
BENNETT, STUART D
Art Unit
2481
Tech Center
2400 — Computer Networks
Assignee
Fujitsu Limited
OA Round
1 (Non-Final)
69%
Grant Probability
Favorable
1-2
OA Rounds
2y 5m
To Grant
54%
With Interview

Examiner Intelligence

Grants 69% — above average
69%
Career Allow Rate
245 granted / 355 resolved
+11.0% vs TC avg
-15.0%
Interview Lift
54% with interview vs 69% without, across resolved cases with interviews
Typical timeline
2y 5m
Avg Prosecution
31 currently pending
Career history
386
Total Applications
across all art units

Statute-Specific Performance

§101
4.7%
-35.3% vs TC avg
§103
48.4%
+8.4% vs TC avg
§102
12.7%
-27.3% vs TC avg
§112
22.1%
-17.9% vs TC avg
Black line = Tech Center average estimate • Based on career data from 355 resolved cases
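The four "vs TC avg" deltas above are all consistent with a single Tech Center average estimate of 40% per statute (e.g., 4.7% − 40% = −35.3%). A quick check under that assumption; the 40% figure is inferred from the displayed deltas, not a documented methodology:

```python
# Inferred check: each statute-specific delta appears to be the examiner's
# allowance rate for that statute minus a single TC-average estimate of 40%.
tc_avg = 0.40  # assumed Tech Center average (inferred from the four deltas)
rates = {"101": 0.047, "103": 0.484, "102": 0.127, "112": 0.221}

deltas = {statute: rate - tc_avg for statute, rate in rates.items()}
for statute, delta in deltas.items():
    print(f"§{statute}: {delta:+.1%} vs TC avg")
```

Running this reproduces the displayed figures (−35.3%, +8.4%, −27.3%, −17.9%).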

Office Action

§103 §112
DETAILED ACTION

The present Office action is in response to the application filing on 2 August 2023.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on 11/02/2022. It is noted, however, that applicant has not filed a certified copy of the foreign application as required by 37 CFR 1.55.

Information Disclosure Statement

The Information Disclosure Statement (IDS) submitted on 08/02/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the Information Disclosure Statement is being considered by the Examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: --COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN CONTROL PROGRAM, CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS FOR OBTAINING HIGH RESOLUTION THERMAL IMAGE--.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With regards to claims 1, 7, and 13, it is not clear how the third invisible light image maintains a higher resolution than the first invisible light image once magnified. The magnification would cause a loss of resolution, and this appears supported by the equivalent third invisible light image in FIG. 3 being the thermal image 223 magnified and described as low resolution, just like the thermal image 221. In FIG. 2, the first low resolution invisible light image is 221 and the second high resolution light image is 222; however, as stated before, magnifying into the second high resolution light image 222 will result in a low resolution that is not higher in resolution than the first low resolution invisible light image 221. It is noted that if the claim is intending to say the identified magnification area of the second high resolution light image 222 is a higher resolution than the equivalent area in the first low resolution light image 221, then this is true. The issue only presents itself once the image is magnified. For examination purposes, the limitation will be interpreted as magnifying up to a point where the third image can maintain a higher resolution than the second image.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 5-7, 11-13, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2024/0265503 A1 (hereinafter “Kim”) in view of U.S. Publication No. 2023/0360247 A1 (hereinafter “Chew”).

Regarding claim 1, Kim teaches a non-transitory computer-readable recording medium having stored therein a control program that causes a computer to execute a process ([0068], “a generic-purpose processor (e.g., a CPU or an application processor) capable of performing corresponding operations by executing one or more software programs stored in the memory device”) comprising: obtaining a first image of a given image capturing range captured by an image capturing device ([0087], “An image sensor for capturing visible light acquires a color image (first image) in the visible light region through red (R), green (G), and blue (B) pixels”) and a first invisible light image of the image capturing range captured by an invisible light image capturing device ([0087], “The image sensor for infrared imaging may acquire a thermal color map (second image) through pixels”) having a resolution lower than a resolution of the image capturing device ([0087], “since an infrared image sensor detects energy having a wavelength greater than that of visible light, the number of pixels, that is, the resolution, is inevitably low even if the sensor has the same size.” [0092], “one of the two different images may be a high-resolution visible light image, while the other is a low-resolution thermal image”); generating a second invisible light image at a resolution higher than a resolution of the first invisible light image by a machine learning model using the first image and the first invisible light image as an input ([0088], “the neural processing unit 100 may be a model trained to output a new third image by inputting the first image and the second image having different resolutions and image characteristics. The third resolution of the third image may have a value between the first resolution of the first image and the second resolution of the second image.” [0092], “the image fusion artificial neural network model 101 operated by the neural processing unit 100 may correspond to a generator configured to generate a new image (e.g., a high-resolution thermal image) by using as inputs two different images of one object. For example, one of the two different images may be a high-resolution visible light image, while the other is a low-resolution thermal image.” [0096], “the image fusion artificial neural network model can generate a third image in which thermal information (second image characteristics) of the second image is reflected while maintaining the size and resolution of the first image”).

Kim fails to expressly disclose identifying an obtaining target area of an invisible light image from the image capturing range, based on an indicator indicating an uncertainty of each of a plurality of pixels included in the second invisible light image; and obtaining, by an optical magnification control of the invisible light image capturing device, a third invisible light image of the obtaining target area at a resolution higher than a resolution of the obtaining target area in the first invisible light image.

However, Chew teaches identifying an obtaining target area of an invisible light image from the image capturing range, based on an indicator indicating an uncertainty of each of a plurality of pixels included in the second invisible light image ([0060], “To detect the foreign object 20, the method may include generating at least one attribute of the foreign object 20 in each of the thermal object image 110B and visible light object image 120B, comparing the at least one attribute of the foreign object 20 in the thermal object image 110B and the visible light object image 120B, such that the foreign object 20 is detected when the at least one attribute of the foreign object 20 in thermal object image 110B and the visible light object image 120B are the same or within a specified parameter or threshold level.” Note, the foreign object represents unknown pixel information); and obtaining, by an optical magnification control of the invisible light image capturing device, a third invisible light image of the obtaining target area at a resolution higher than a resolution of the obtaining target area in the first invisible light image ([0060], “System 100 may be configured to obtain an enlarged thermal object image 110B and an enlarged visible light object image 120B when the foreign object 20 is detected by zooming the visible light camera 120 and thermal camera 110 onto the detected foreign object 20.” Note, in combination with Kim, the zooming would be in the high-resolution thermal image).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have zoomed in on unknown pixel information, as taught by Chew ([0060]), in Kim’s invention. One would have been motivated to modify Kim’s invention, by incorporating Chew’s invention, to improve recognition of foreign objects (Chew: [0005-0006]).

Regarding claim 5, Kim and Chew disclose every limitation of claim 1, as outlined above. Additionally, Kim discloses the process further comprising outputting a fourth invisible light image generated by replacing a partial image of the obtaining target area included in the second invisible light image with the third invisible light image, as an inference result of the machine learning model (FIG. 9 depicts a GAN image fusion artificial neural network model. [0189], “the GAN neural network structure configuring the image fusion artificial neural network model has a structure corresponding to a generator for generating a high-resolution thermal image. That is, the scheduler 130 of the neural processing unit 100 may be configured to process an inference operation”).

Regarding claim 6, Kim and Chew disclose every limitation of claim 1, as outlined above. Additionally, Kim discloses wherein the image capturing device is a visible light image capturing device that captures a visible light image, and the invisible light image capturing device is an infrared light image capturing device that captures an infrared light image ([0087], “An image sensor for capturing visible light acquires a color image (first image) in the visible light region through red (R), green (G), and blue (B) pixels. The image sensor for infrared imaging may acquire a thermal color map (second image) through pixels”).

Regarding claim 7, the limitations are the same as those in claim 1. Therefore, the same rationale of claim 1 applies equally as well to claim 7.

Regarding claim 11, the limitations are the same as those in claim 5. Therefore, the same rationale of claim 5 applies equally as well to claim 11.

Regarding claim 12, the limitations are the same as those in claim 6. Therefore, the same rationale of claim 6 applies equally as well to claim 12.

Regarding claim 13, the limitations are the same as those in claim 1. Therefore, the same rationale of claim 1 applies equally as well to claim 13.

Regarding claim 17, the limitations are the same as those in claim 5. Therefore, the same rationale of claim 5 applies equally as well to claim 17.

Regarding claim 18, the limitations are the same as those in claim 6. Therefore, the same rationale of claim 6 applies equally as well to claim 18.

Claims 4, 10, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2024/0265503 A1 (hereinafter “Kim”) in view of U.S. Publication No. 2023/0360247 A1 (hereinafter “Chew”), and further in view of U.S. Publication No. 2022/0053124 A1 (hereinafter “Zhou”).

Regarding claim 4, Kim and Chew disclose every limitation of claim 1, as outlined above. Kim and Chew fail to expressly disclose the process further comprising training the machine learning model so that a difference between a partial image of the obtaining target area included in the second invisible light image and the third invisible light image is minimized.

However, Zhou teaches the process further comprising training the machine learning model so that a difference between a partial image of the obtaining target area included in the second invisible light image and the third invisible light image is minimized ([0070], “to train the prediction model 133, a difference value is generated by comparing the first imaginary image with the predicted positions of the objects 501-508 at time=1 with the second imaginary image, which, as stated previously, acts as a ground truth. A difference value, which could be a loss value based on the loss function, is generated. Ultimately, the training of the prediction model 133 should be such so as to minimize the difference value, which would indicate that the predicted positions of the objects in an imaginary image as predicted by the prediction model 133 substantially match the actual positions of the objects of a ground truth imaginary image. As such, during training, one or more model weights 134 of the prediction model 133 are continuously adjusted to minimize this difference value, ultimately leading to the training of the prediction model 133.” Note, the teachings describe correcting parameters based on outputs (e.g., third image) with an expected result or ground truth (e.g., second image)).

Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to have used a model to minimize a difference between an output and a ground truth, as taught by Zhou ([0070]), in Kim and Chew’s invention. One would have been motivated to modify Kim and Chew’s invention, by incorporating Zhou’s invention, to improve the model result by increasing the accuracy of the model parameters through training.

Regarding claim 10, the limitations are the same as those in claim 4. Therefore, the same rationale of claim 4 applies equally as well to claim 10.

Regarding claim 16, the limitations are the same as those in claim 4. Therefore, the same rationale of claim 4 applies equally as well to claim 16.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to STUART D BENNETT whose telephone number is (571) 272-0677. The examiner can normally be reached Monday - Friday from 9:00 AM - 5:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STUART D BENNETT/
Examiner, Art Unit 2481
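The claim 1 process that the §103 rejection maps onto Kim and Chew boils down to four steps: fuse a high-resolution visible image with a low-resolution invisible (thermal) image, score each output pixel with an uncertainty indicator, locate the uncertain area, and re-capture that area at higher optical magnification. A minimal sketch of that flow; `fuse`, the synthetic uncertainty map, and the final crop are hypothetical stand-ins for illustration, not anything taken from the cited references:

```python
import numpy as np

def fuse(visible, thermal, scale=4):
    """Stand-in for Kim's image-fusion model: a plain nearest-neighbor
    upsample of the thermal input (a trained model would also condition
    on the visible image)."""
    return np.kron(thermal, np.ones((scale, scale)))

def identify_target_area(uncertainty, threshold=0.5):
    """Bounding box of pixels whose uncertainty exceeds a threshold --
    the 'obtaining target area' of claim 1."""
    ys, xs = np.where(uncertainty > threshold)
    if ys.size == 0:
        return None
    return (ys.min(), ys.max() + 1, xs.min(), xs.max() + 1)

rng = np.random.default_rng(0)
visible = rng.random((256, 256, 3))   # first image (visible light, high-res)
thermal = rng.random((64, 64))        # first invisible light image (low-res)

# Second invisible light image: higher resolution than the first.
second = fuse(visible, thermal)       # 256 x 256

# Per-pixel uncertainty indicator (synthetic here: one uncertain region).
uncertainty = np.zeros_like(second)
uncertainty[100:140, 60:120] = 0.9

# Identify the obtaining target area from the indicator.
box = identify_target_area(uncertainty)   # (100, 140, 60, 120)

# A real system would now drive optical magnification to re-capture this
# area as the third invisible light image; here we simply crop.
y0, y1, x0, x1 = box
third = second[y0:y1, x0:x1]
```

Note that the §112(b) issue raised above lives in that last step: optical magnification trades field of view for detail, so the "higher resolution" comparison only holds against the equivalent area of the first image, not the first image as a whole.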

Prosecution Timeline

Aug 02, 2023
Application Filed
Oct 18, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574559
ENCODER, A DECODER AND CORRESPONDING METHODS FOR ADAPTIVE LOOP FILTER ADAPTATION PARAMETER SET SIGNALING
2y 5m to grant Granted Mar 10, 2026
Patent 12568300
ELECTRONIC APPARATUS, METHOD FOR CONTROLLING ELECTRONIC APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR GUI CONTROL ON A DISPLAY
2y 5m to grant Granted Mar 03, 2026
Patent 12563191
CROSS-COMPONENT SAMPLE OFFSET
2y 5m to grant Granted Feb 24, 2026
Patent 12542925
METHOD AND DEVICE FOR INTRA-PREDICTION
2y 5m to grant Granted Feb 03, 2026
Patent 12542934
ZERO-DELAY PANORAMIC VIDEO BIT RATE CONTROL METHOD CONSIDERING TEMPORAL DISTORTION PROPAGATION
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
69%
Grant Probability
54%
With Interview (-15.0%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 355 resolved cases by this examiner. Grant probability derived from career allow rate.
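Per the note above, the headline projections follow from simple arithmetic on the examiner's career data. A minimal sketch of that derivation, assuming the 15-point interview lift shown on this page is applied additively:

```python
# Derive the dashboard's headline projections from the examiner's
# career statistics shown on this page.
granted = 245           # career grants
resolved = 355          # career resolved cases
interview_lift = -0.15  # dashboard "Interview Lift" (assumed additive)

grant_probability = granted / resolved
with_interview = grant_probability + interview_lift

print(f"Grant probability: {grant_probability:.0%}")  # 69%
print(f"With interview:    {with_interview:.0%}")     # 54%
```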
