Prosecution Insights
Last updated: April 19, 2026
Application No. 17/801,424

IMAGE GENERATION DEVICE, IMAGE GENERATION METHOD, RECORDING MEDIUM GENERATION METHOD, LEARNING MODEL GENERATION DEVICE, LEARNING MODEL GENERATION METHOD, LEARNING MODEL, DATA PROCESSING DEVICE, DATA PROCESSING METHOD, INFERENCE METHOD, ELECTRONIC DEVICE, GENERATION METHOD, PROGRAM AND NON-TEMPORARY COMPUTER READABLE MEDIUM

Final Rejection — §102, §103
Filed: Aug 22, 2022
Examiner: AHSAN, UMAIR
Art Unit: 2647
Tech Center: 2600 — Communications
Assignee: Sony Semiconductor Solutions Corporation
OA Round: 4 (Final)
Grant Probability: 68% (Favorable)
OA Rounds: 5-6
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (274 granted / 400 resolved; +6.5% vs TC avg, above average)
Interview Lift: +32.9% across resolved cases with interview (strong)
Avg Prosecution: 2y 9m typical timeline (45 currently pending)
Total Applications: 445 across all art units (career history)
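The headline figures above are simple ratios over resolved-case counts. As a minimal sketch of how they could be reproduced (not the vendor's actual formula; the function name is made up here, and the Tech Center baseline is back-solved from the "+6.5% vs TC avg" figure rather than given by the page):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a fraction of resolved cases."""
    return granted / resolved

# Counts taken from the card above: 274 granted out of 400 resolved.
rate = allow_rate(granted=274, resolved=400)  # 0.685, displayed rounded as 68%
tc_avg = rate - 0.065                         # baseline implied by "+6.5% vs TC avg"
print(f"Career allow rate: {rate:.1%} ({rate - tc_avg:+.1%} vs TC avg)")
```

Note the exact ratio is 68.5%; the card appears to round it down to 68% for display.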

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 400 resolved cases
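Each statute pairs the examiner's rate with a delta against the Tech Center average, so the baseline can be recovered by subtraction. A hypothetical sketch (the dict below just transcribes the table; the page does not define exactly what each percentage measures, so only the arithmetic relating the two series is shown):

```python
# Each entry: (examiner rate, delta vs Tech Center average), from the table above.
stats = {
    "101": (0.034, -0.366),
    "103": (0.544, +0.144),
    "102": (0.192, -0.208),
    "112": (0.145, -0.255),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # recover the TC-average baseline for this statute
    print(f"§{statute}: examiner {rate:.1%} vs TC avg {tc_avg:.1%}")
```

Interestingly, all four rows back out to the same 40.0% baseline, suggesting the deltas are measured against a single TC-wide figure rather than per-statute averages.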

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55. All claims have a priority date of 03/05/2020.

Response to Amendment

This Office action is in response to the amendment entered 2/10/2026. Claims 1 and 9-10 are amended; claims 11-24 were previously canceled; claims 1-10 and 25-30 remain pending in this application. The amendment entered 2/10/2026 overcomes all pending issues under 35 USC 112(a) and 112(b) for all claims.

Response to Arguments

Applicant's arguments filed 2/10/2026 have been fully considered but are not persuasive.

Regarding claims 1, 9, and 10: Applicant submits that Collins fails to teach "generating, by the processor, a second image from the CG model using a second imaging model, wherein the second imaging model simulates at least one physical characteristic of a sensor in the environment, wherein the second image is an ideal image relative to the first image" and "generating metadata of the first image and the second image for training an artificial intelligence (AI) model to output an ideal image when a deteriorated image is input," as recited in claim 1. Examiner respectfully disagrees.

Applicant's main argument is that Collins's rendering of 2D images with multiple "lighting arrangements" and "views" differs from the claimed "processing on the CG model" to generate "a second image from the CG model...wherein the second image is an enhanced image of the first image," and that there is no evidence that Collins uses two imaging models applied to the same CG model to ultimately generate an enhanced image. Examiner notes first that Applicant's remarks were made without the benefit of the updated rejection, which maps these grounds from Collins; Examiner therefore points Applicant to the updated rejection appearing in the next section.
Applicant's remarks are also conclusory, as Applicant merely asserts without explanation that "There is no evidence that Collins uses two imaging models applied to the same CG model to ultimately train an AI model to output an ideal image in response to a deteriorated image that is input." In response, Examiner notes that Collins (e.g., ¶¶7, 9) clearly teaches applying two imaging models ("lighting arrangements" or "views") to the same CG model (the "rendering engine"). For these reasons the rejections are maintained under §§102/103 over Collins in view of the amended limitations.

Applicant's remarks regarding HIASA NORITO, and the combination of Collins and HIASA NORITO, are moot in view of the teachings of Collins for the aforementioned limitations, and further because the remarks rely on features which are not positively recited in the claims.

Claim Rejections - 35 USC § 102 & 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-10, 25-27, and 29-30 are rejected under 35 U.S.C. 102(a)(1) as anticipated by, or in the alternative under 35 U.S.C. 103 as obvious over, US 20200005083 A1 (Collins; Robert).

Consider Claims 1, 9, and 10

Collins teaches "An image generation method" (Collins Fig. 1, Fig. 4, ¶7 "a computer implemented method for generating a dataset having a multiplicity of corresponding images..."), comprising: "acquiring a computer graphics (CG) model" (¶15 "a 3D rendering engine"), "including a representation of a three-dimensional (3D) object in an environment" (Collins Fig. 1, Fig. 4, ¶15 "...loading a further seed model into a 3D rendering engine"; ¶44 "The seed model 110 may be any convenient data representation of the object to be recognized, although CAD data is preferred..."); and "performing, by a processor, processing on the CG model" (¶7 "a 3D rendering engine") "using a first imaging model" (¶7 "selecting a first lighting arrangement") "to generate a first image" (Collins Fig. 1, Fig. 4, ¶7 "...rendering the first view of the seed model with the first lighting arrangement... storing a 2D image of the first view of the seed model..."), "wherein the first image is a deteriorated image" (¶65 "the properties of the virtual camera 160 may comprise one or more camera artefacts.
This may improve recognition accuracy and/or reliability as real-world problems may be anticipated. These artefacts may include, for example, blur, lighting flare, defocus, translation, rotation, scaling, obscuration, noise, motion, pixelization, edge clarity, aliasing, image compression, color banding, grain, quantization, macro blocking, mosaicking, ringing, posterizing, and any combination thereof. It may be an artefact related to, for example, the dimensions and position of the camera, the optical properties, the sensor properties, the data compression, or any combination thereof."); "generating, by the processor, a second image from the CG model using a second imaging model" (¶¶7-9 "...repeating the rendering and storing using a plurality of further views and a plurality of further lighting arrangements to generate a multiplicity of 2D images of the seed model; and generating a dataset comprising the corresponding multiplicity of 2D images of the seed model..."), "wherein the second imaging model simulates at least one physical characteristic of a sensor in the environment" (Collins ¶9 "...selecting one or more camera artefacts; rendering the seed model with the one or more camera artefacts; storing a 2D image of the seed model with the one or more camera artefacts..."), "wherein the second image is an ideal image relative to the first image" (Collins ¶¶56-57 "The virtual light source 150 is positioned at a second position to define a second lighting arrangement 150. In this example, the view 160 is kept the same as in FIG.
3c." and Figs. 3B and 3C, where images are rendered from the same view with different lighting arrangements, one of which teaches an "ideal image" relative to the other, "deteriorated image"; Examiner note: compare the instant specification's criteria of "examples of ideal images and normal images (deteriorated images)" given in ¶¶115-156 to the criteria for generating images with different lighting arrangements and artefacts given in Collins ¶¶50, 65, 67-79); "generating metadata of the first image and the second image for training an artificial intelligence (AI) model" (Collins ¶11 "associating a 2D image of the seed model with one or more corresponding labels", the labels being considered metadata; ¶¶10-12 "...One or more labels, also called annotations or tags, may be used to inform the machine vision learning system what the object to be recognized should be. This may speed up the training of the vision system..."; ¶¶95-96 "...These labels may simply indicate the classification, such as dog, car, boat, or the labels may also include data about, for example, the views 160, lighting arrangement 150, background 170, camera artefact or any combination thereof. Any conventional metadata format may be used. Preferably a format is used which preserves the context of the label—for example, teal may be a color or a species of duck, flare may be an optical artefact or a signal flash...may be based on the Exchangeable Image File format (EXIF), as this already includes standard tags, or labels,..."; ¶17 "providing the dataset to a machine learning algorithm for use as a training set, a test set and/or a validation set to recognize the object") "to output an ideal image when a deteriorated image is input" (Collins ¶¶10-12, 17: the disclosed structure of using the images for training machine-vision AI models is sufficient to meet the intended function/environment of the AI model outputting an ideal image from a deteriorated image).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the invention of Collins to include the intended use of an AI model that outputs an ideal image when a deteriorated image is input, in order to "provid[e] the dataset to a machine learning algorithm for use as a training set [or] a test set." Collins ¶17.

Regarding claims 9 and 10, Collins also teaches, in the same manner as the method, "An image generation device, comprising a processor" and "A non-temporary computer readable medium storing a program that executes an image generation method when a processor is executed" (Collins Fig. 1, Fig. 4, ¶27 "...the method may be implemented on any type of standalone system or client-server compatible system containing any type of client, network, server, and database elements...").

Consider Claims 2 and 25

Collins teaches "The image generation method according to claim 1, further comprising: selecting at least one parameter for processing the CG model or the artificial image" (Collins ¶7 "...selecting a first lighting arrangement; selecting a first view of the seed model...") "and applying to the CG model or the artificial image based on the selected parameter at a timing at which the CG model or the artificial image is generated" (Collins ¶7 "...rendering the first view of the seed model with the first lighting arrangement; storing a 2D image of the first view of the seed model with the first lighting arrangement...").

Consider Claims 3 and 26

Collins teaches "The image generation method according to claim 2, wherein the at least one parameter is a parameter related to the sensor" (Collins ¶53 "...The seed model 110 may be rendered using a plurality of virtual views 160—for each view 160, a virtual camera 160 is used to determine the view 160..."; additionally, view and lighting arrangement relate to a sensor, i.e.,
a camera, as broadly claimed by the term "related"; see also ¶¶65-66 "...the properties of the virtual camera 160 may comprise one or more camera artefacts...").

Consider Claims 4 and 27

Collins teaches "The image generation method according to claim 3, wherein the sensor includes at least a camera" (Collins ¶¶65-66, camera).

Consider Claims 6 and 29

Collins teaches "The image generation method according to claim 1, further comprising recording metadata of the second image or the artificial image in a storage medium" (Collins ¶¶95-98, disclosing various formats of storage of images with labels, tags, and metadata).

Consider Claims 7 and 30

Collins teaches "The image generation method according to claim 6, wherein the metadata of the second image or the first image is associated with the artificial image and recorded in a storage medium" (Collins ¶¶95-98, disclosing various formats of storage of images with labels, tags, and metadata).

Consider Claim 8

Collins teaches "A recording medium generation method, comprising storing an image generated by the image generation method according to claim 1 in a recording medium" (Collins ¶¶7, 82, storing images and storing the seed model; ¶¶95-98, disclosing various formats of storage of images with labels, tags, and metadata).

Claims 5 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200005083 A1 (Collins; Robert) in view of JP 2020030569 A (HIASA NORITO).

Consider Claims 5 and 28

Collins teaches "The image generation method according to claim 4" (see claim 4 above). Collins does not explicitly disclose "wherein the AI used for the image acquired by the sensor is used to correct a change in the image caused by the sensor or the camera." HIASA NORITO teaches this limitation (Pg. 8, Paragraph 4 "...thereby learning a CNN that enables robust correction of the magnitude of the blur.
This makes it possible to simultaneously correct small defocus blur of an image having a large depth of field..."; see also Pg. 1, Paragraph 1, ABSTRACT "An image processing method includes the steps of: acquiring a first image obtained by imaging a subject space through a first pupil of an optical system and a second image obtained by imaging the subject space through a second pupil different from the first pupil of the optical system S101; and generating a sharpened image obtained by correcting blur depending on the pupils of the optical system on the basis of the first image and the second image by using a multilayer neural network..."). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the invention of Collins to include the noted teachings of HIASA NORITO in order to obtain sharper images (HIASA NORITO Pg. 1, Paragraph 1, ABSTRACT).

Pertinent Prior Art

The prior art made of record, though not relied upon in the current rejection, is considered pertinent to applicant's disclosure:

ROS, GERMAN, ET AL: "The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 27 June 2016, pages 3234-3243, XP033021505.

RONG, JIANGPENG, ET AL: "Radial Lens Distortion Correction Using Convolutional Neural Networks Trained with Synthesized Images", Lecture Notes in Computer Science, pages 35-49, XP047407322.

WO 2018/066351 A1 (ADVANCED DATA CONTROLS CORP.), 12 April 2018, paragraphs [0056], [0074], [0075], [0080]; & US 2019/0220029 A1, paragraphs [0114], [0133], [0134], [0139].

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to UMAIR AHSAN, whose telephone number is (571) 272-1323. The examiner can normally be reached Monday-Friday, 10 AM-5 PM EST, or by emailing UMAIR.AHSAN@USPTO.GOV.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alison Slater, can be reached at (571) 270-0375. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/UMAIR AHSAN/
Primary Examiner, Art Unit 2647

Prosecution Timeline

Aug 22, 2022
Application Filed
Mar 06, 2025
Non-Final Rejection — §102, §103
Jun 17, 2025
Response Filed
Jul 29, 2025
Final Rejection — §102, §103
Oct 22, 2025
Response after Non-Final Action
Oct 31, 2025
Request for Continued Examination
Nov 07, 2025
Response after Non-Final Action
Nov 28, 2025
Non-Final Rejection — §102, §103
Feb 10, 2026
Response Filed
Mar 17, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597952 — Antenna Tuning For Wireless Links
2y 5m to grant — Granted Apr 07, 2026
Patent 12596169 — METHOD AND SYSTEM FOR LOCATING OBJECTS WITHIN A MASTER SPACE USING MACHINE LEARNING ON RF RADIOLOCATION
2y 5m to grant — Granted Apr 07, 2026
Patent 12571871 — METHOD AND SYSTEM FOR LOCATING OBJECTS WITHIN A MASTER SPACE USING MACHINE LEARNING ON RF RADIOLOCATION
2y 5m to grant — Granted Mar 10, 2026
Patent 12554004 — BACKSCATTER-BASED POSITIONING
2y 5m to grant — Granted Feb 17, 2026
Patent 12536614 — SYSTEMS AND METHODS FOR FORM RECOGNITION USING VISUAL SIGNATURES
2y 5m to grant — Granted Jan 27, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 68%
With Interview (+32.9%): 99%
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 400 resolved cases by this examiner. Grant probability derived from career allow rate.
