Prosecution Insights
Last updated: April 19, 2026
Application No. 18/713,275

IMAGE CONVERSION DEVICE, CONTROL METHOD FOR IMAGE CONVERSION DEVICE, AND MEDIUM

Status: Non-Final OA (§102)
Filed: May 24, 2024
Examiner: FLORA, NURUN N
Art Unit: 2619
Tech Center: 2600 (Communications)
Assignee: Kyocera Corporation
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 86% (331 granted / 387 resolved), +23.5% vs TC avg (above average)
Interview Lift: +1.3% among resolved cases with an interview (minimal lift)
Avg Prosecution: 2y 1m (fast prosecutor); 24 applications currently pending
Total Applications: 411 across all art units (career history)
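
As a sanity check, the headline figures above can be reproduced from the raw counts. Below is a minimal Python sketch, assuming the dashboard rounds the career allow rate to the nearest whole percent and treats the interview lift as a simple additive bump; both the rounding and the additive treatment are assumptions, not the vendor's documented methodology.

    # Assumed derivation of the dashboard figures; rounding behavior and the
    # additive interview lift are assumptions, not documented methodology.
    granted, resolved = 331, 387

    allow_rate = granted / resolved                 # 0.8553 -> displayed as 86%
    print(f"Career allow rate: {allow_rate:.1%}")   # 85.5%

    interview_lift = 0.013                          # reported +1.3% with interview
    print(f"With interview: {allow_rate + interview_lift:.1%}")  # 86.8% -> displayed as 87%

Rounded to whole percentages, these match the displayed 86% and 87% figures.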

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 9.6% (-30.4% vs TC avg)
Tech Center averages are estimates; based on career data from 387 resolved cases.
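
If each "vs TC avg" delta is read as a simple percentage-point difference between the examiner's rate and the Tech Center average (an assumption; the dashboard does not state its formula), the implied averages can be recovered directly:

    # Recover the implied Tech Center average per statute, assuming
    # delta = examiner_rate - tc_average (an assumed formula).
    examiner_rates = {"§101": 5.5, "§103": 46.5, "§102": 27.1, "§112": 9.6}
    deltas = {"§101": -34.5, "§103": 6.5, "§102": -12.9, "§112": -30.4}

    for statute, rate in examiner_rates.items():
        tc_avg = rate - deltas[statute]  # e.g. §101: 5.5 - (-34.5) = 40.0
        print(f"{statute}: examiner {rate}% vs implied TC average {tc_avg}%")

Under that reading, every statute's implied Tech Center average comes out to 40.0%, which suggests the deltas were computed against a single blended baseline rather than per-statute averages.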

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 8-10 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Sethi et al. (US 20210142042 A1, hereinafter Sethi).

Regarding claim 1, Sethi discloses an image conversion device (abstract; 100, fig. 1; 400, fig. 4; 1000, fig. 10; etc.) comprising:

an acquiring unit configured to acquire an input image in a first image style comprising a target portion of a first target object (At 602, an input image is received to modify based on color features of a reference image. For example, the color editing system 104 implemented by the device 102 can receive the input image 108 that has color features a user of the device wants to visually enhance by color matching, step 602. At 604, faces are detected in the input image and in the reference image; fig. 6, steps 602-604; ¶0096-0097. Face 116 is the object in the input image 108, e.g., fig. 1);

a generating unit (124+126, fig. 1; e.g., ¶0042) comprising (1) a neural network configured to generate, from the input image, a target image in a second image style different from the first image style (Generally, “color matching” is described herein as applying or transferring the color features, style, and brightness of the reference image to the input image to generate or create a modified image that has enhanced color, style, and brightness features. As used herein, the terms “color features” and “color style” are used interchangeably and are intended to be inclusive of the many visual aspects that come together to represent the overall appearance of a digital image. For example, the color style of a digital image may include any number of color features and image appearance factors, such as color, lighting, brightness, highlights, shadows, color midtones, contrast, saturation, tone, and any other type of color features and appearance factors, ¶0023. The color matching using the midtone color features of the reference image 110 applied to the input image 108, based on the similar average skin tone values of the face group pair, is effective to maintain the original captured image color of the faces in the input image. For example, by using the reference image face group, which includes the face of the person corresponding to the image region 118 in the reference image 110, the skin tone of the person in the input image 108, as originally captured in the input image, is not altered (e.g., is not changed to a lighter or darker skin tone) in the modified image 112. Given that people are generally sensitive to changes in the appearance of their skin tone in digital images and/or in video frames in a video clip, for example, the color editing system 104 can determine whether or not to use face skin tones of the respective faces in the face group pair of the input image face group and the reference image face group as part of the color features applied to modify the input image. … For example, if the average skin tone difference between the input image face group and the reference image face group exceeds the matching threshold for face group matching, the image regions that include the faces may not be used for the color matching so as not to alter the appearance of the skin tone of persons in the input image when the color features, style, and/or brightness of the reference image are applied to the input image, ¶0051-0052. At 614, a modified image is generated from the input image based on the color features of the reference image. For example, the color matching module 126 of the color editing system 104 implemented by the device 102 can generate the modified image 112 from the input image 108 based on the color features of the reference image 110, where the color features include using the face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image, ¶0104, fig. 6), and (2) a color tone information controller configured to acquire color tone information indicating a color tone of the target portion in a reference image in the second image style comprising the target portion of a second target object different from the first target object, the target portion of the second target object corresponding to the target portion of the first target object, and input the color tone information to the neural network (ibid; Abstract; ¶0023, 0052, 0104; figs. 1, 6-9; also see ¶0042-0086);

an input controller configured to input the input image and the reference image to the generating unit (The color matching module 126 of the color editing system 104 can then generate the modified image 112 from the input image 108 based on the color features of the reference image 110, where the color features can include using the face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image, ¶0050);

and an output controller configured to control output of the target image generated by the generating unit (In this example 100, the glare from the very bright, natural-light windows shown in the input image 108 has been toned down in the modified image 112. In implementations, the color matching module generates the modified image 112 utilizing the midtone color features of the reference image 110 applied to the input image 108, where the face skin tones in the face group pair can be used as part of the midtone color features for the color matching, ¶0050).
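
As an editorial aid, the claim 1 pipeline, as the examiner maps it above, can be sketched in code: a tone controller extracts color tone information from a reference image in the target style, and a generator network consumes the input image together with that tone information. This is a minimal, hypothetical sketch; the PyTorch modules, the mean/std tone statistics, and the conditioning-by-concatenation scheme are illustrative assumptions, not Sethi's or the applicant's actual implementation.

    # Hypothetical sketch of the claim 1 pipeline as mapped above.
    # Module names and the tone-conditioning scheme (channel statistics
    # broadcast as extra input planes) are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ToneController(nn.Module):
        """Acquire color tone information: per-channel mean/std of the
        reference image (standing in for the target portion)."""
        def forward(self, reference):
            mean = reference.mean(dim=(2, 3))      # (N, 3)
            std = reference.std(dim=(2, 3))        # (N, 3)
            return torch.cat([mean, std], dim=1)   # (N, 6) tone vector

    class Generator(nn.Module):
        """Toy style-conversion network conditioned on the tone vector."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3 + 6, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
            )
        def forward(self, x, tone):
            # Broadcast the tone vector into spatial planes and concatenate.
            planes = tone[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
            return self.net(torch.cat([x, planes], dim=1))

    input_image = torch.rand(1, 3, 64, 64)         # acquiring unit: first style
    reference = torch.rand(1, 3, 64, 64)           # reference image: second style
    tone = ToneController()(reference)             # color tone information controller
    target_image = Generator()(input_image, tone)  # target image, second style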
Regarding claim 8, Sethi discloses the image conversion device according to claim 1, wherein the first target object and the second target object are living beings, and the target portion is entire body, joint, skin, face, eye, nose, mouth, ear, and/or hair (face of a human or faces of people, fig. 1, Abstract).

Regarding method claim 9, although the wording is different, the material is considered substantively equivalent to device claim 1 as described above.

Regarding claim 10, Sethi discloses a non-transitory computer-readable medium storing a control program for causing a computer to operate as the image conversion device described in claim 1, the control program causing the computer to operate as the acquiring unit, the generating unit, the color tone information controller, the input controller, and the output controller (¶0132-0134).

Claim 11 is canceled.

Claims 1, 4-6, and 8-10 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Sullivan et al. (US 20200035350 A1, hereinafter Sullivan).

Regarding claim 1, Sullivan discloses an image conversion device (abstract, figs. 1-7) comprising:

an acquiring unit configured to acquire an input image in a first image style comprising a target portion of a first target object (an implicit unit configured to acquire histological image 710, e.g., medical imaging device 100 as in fig. 1, ¶0030-0034, the target portions being tissue in a breast of a patient; also histological image 210 in fig. 2; fig. 7);

a generating unit (image generation unit 360, ¶0064, fig. 7) comprising (1) a neural network configured to generate, from the input image, a target image in a second image style different from the first image style (image generating unit 360, implemented as disclosed in ¶0066-0068 using generative networks for style transfer, fig. 7), and (2) a color tone information controller configured to acquire color tone information indicating a color tone of the target portion in a reference image in the second image style comprising the target portion of a second target object different from the first target object, the target portion of the second target object corresponding to the target portion of the first target object, and input the color tone information to the neural network (¶0066, and further ¶0038, 0049, 0066, wherein the display characteristics of the target histological image are the claimed color tone information of a reference image, and the display characteristics are applied to the corresponding regions of the histological image, which is the claimed input image);

an input controller configured to input the input image and the reference image to the generating unit (predictive model 620, which provides at least one target histological image, the claimed reference image, based upon the histological image, the claimed input image, to image generating unit 360, ¶0036-0037; the image processing device then works with the histological image and the target histological image, which implies the claimed input controller, ¶0037-0038, fig. 7);

and an output controller configured to control output of the target image generated by the generating unit (an implicit unit configured to output modified image 720 from processor 305; also the generated modified image 120 of fig. 2; fig. 7).

Regarding claim 4, Sullivan discloses the conversion device according to claim 1, wherein the input image is a three-dimensional image simulating a stereoscopic shape of the target portion of the second target object, and the reference image is a two-dimensional image that is a captured image of the first target object (Sullivan discloses that both 3D and 2D images can be used in various combinations and that reference image data can be partial (segmented) data, ¶0030-0033, 0038).

Regarding claim 5, Sullivan discloses the conversion device according to claim 1, wherein the reference image is a partial image obtained by extracting a region of the target portion from an entire image obtained by capturing entirety of the first target object, the input image is an image of the target portion of the second target object, and the generating unit further generates a composite image by combining the target image with the region of the entire image corresponding to the target portion (again, Sullivan discloses that both 3D and 2D images can be used in various combinations and that reference image data can be partial (segmented) data, ¶0030-0033, 0038).

Regarding claim 6, Sullivan discloses the conversion device according to claim 1, wherein the first target object and the second target object are living beings with a disease in the target portion, and the disease is more progressed in the second target object than in the first target object (imaging of a breast of a patient subjected to an intervention, which implies a disease; Abstract, ¶0030-0038, 0064; the reference images correspond accordingly).

Regarding claim 8, Sullivan discloses the image conversion device according to claim 1, wherein the first target object and the second target object are living beings, and the target portion is entire body, joint, skin, face, eye, nose, mouth, ear, and/or hair (Abstract; the target object is a human, and the target portion is a breast).

Regarding method claim 9, although the wording is different, the material is considered substantively equivalent to device claim 1 as described above.

Regarding claim 10, Sullivan discloses a non-transitory computer-readable medium storing a control program for causing a computer to operate as the image conversion device described in claim 1, the control program causing the computer to operate as the acquiring unit, the generating unit, the color tone information controller, the input controller, and the output controller (¶0016, ¶0068, fig. 8, ¶0070, fig. 9, claim 18 and dependents).

Claim 11 is canceled.
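
The claim 5 arrangement mapped above (a partial reference region cropped from an entire image, a converted target image, and a composite back into the corresponding region) reduces to a crop/convert/composite pattern. Below is a minimal sketch; generate is a trivial placeholder transform standing in for the generating unit, not Sullivan's method.

    # Illustrative sketch of the claim 5 flow: the reference is a region
    # cropped from an entire image, the input image of the target portion is
    # style-converted, and the result is composited back into that region.
    import numpy as np

    def generate(input_image, reference):
        """Placeholder for the generating unit; shifts the input toward the
        reference's mean color as a trivial stand-in conversion."""
        return 0.5 * input_image + 0.5 * reference.mean(axis=(0, 1))

    entire_image = np.random.rand(128, 128, 3)   # capture of the first target object
    y0, y1, x0, x1 = 32, 96, 40, 104             # region of the target portion

    reference = entire_image[y0:y1, x0:x1]       # partial reference image
    input_image = np.random.rand(64, 64, 3)      # target portion, second target object

    target = generate(input_image, reference)    # target image in the second style
    composite = entire_image.copy()
    composite[y0:y1, x0:x1] = target             # composite image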
Allowable Subject Matter

Claims 2, 3, and 7 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter:

Regarding claim 2, the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest, wherein the neural network comprises: a first generator configured to generate a first converted image in the second image style from a first input image in the first image style; a second generator configured to generate a second converted image in the first image style from a second input image in the second image style; a first identifier configured to be capable of identifying an image in the first image style based on a shape and a color tone of the target portion; and a second identifier configured to be capable of identifying an image in the second image style based on the shape and the color tone of the target portion; the first generator is further capable of generating a third converted image in the second image style from the second converted image; the second generator is further capable of generating a fourth converted image in the first image style from the first converted image; the first identifier identifies the image in the first image style based on a first color tone error between the color tone information of the target portion in the first input image and the color tone information of the target portion in the reference image in the second image style, and at least one of the group consisting of (1) a first error related to a shape of the target portion between the first input image and the fourth converted image, (2) a second error related to the shape of the target portion between the second converted image and the image in the first image style, and (3) a sixth error related to the shape of the target portion between a second evaluation image generated when the second converted image is input to the second generator and the second converted image; and the second identifier identifies the image in the second image style based on a second color tone error between the color tone information of the target portion in the second input image and the color tone information of the target portion in the reference image in the first image style, and at least one of the group consisting of (1) a fourth error related to the shape of the target portion between the second input image and the third converted image, (2) a fifth error related to the shape of the target portion between the first converted image and the image in the second image style, and (3) a third error related to the shape of the target portion between a first evaluation image generated when the first converted image is input to the first generator and the first converted image.

Regarding claim 7, the prior art of record, taken alone or in combination, fails to reasonably disclose or suggest, wherein the first target object and the second target object are living beings that have undergone an intervention on the target portion, and an elapsed period after the second target object has undergone the intervention is longer than an elapsed period after the first target object has undergone the intervention.

Conclusion

The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, includes Duarte et al. (US 11900519 B2), Lee et al. (US 20230388446 A1), Dinh et al. (US 20230245285 A1), Pan et al. (US 20230100305 A1), Kim et al. (US 11551434 B2), Xie et al. (US 20220222796 A1), Hsiao (US 20220092728 A1), and Liu et al. (US 20210358164 A1), each of which discloses methods of generating images in a different style domain from an input image and a reference image using neural networks.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NURUN FLORA, whose telephone number is (571) 272-5742. The examiner can normally be reached M-F, 9:30 am-5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/NURUN FLORA/
Primary Examiner, Art Unit 2619
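
The claim 2 subject matter indicated allowable above layers color tone errors onto a CycleGAN-like arrangement: two generators mapping between the styles, two identifiers, and cycle/shape errors among the converted images. The sketch below wires up those error terms under that reading; the toy networks and the choice of L1 losses are assumptions for illustration, not the claimed training procedure.

    # Error terms from claim 2 under a CycleGAN-like reading (assumed).
    import torch
    import torch.nn as nn

    def tone(img):                       # stand-in color tone statistic
        return img.mean(dim=(2, 3))

    G1 = nn.Conv2d(3, 3, 3, padding=1)   # first generator: style 1 -> style 2
    G2 = nn.Conv2d(3, 3, 3, padding=1)   # second generator: style 2 -> style 1
    l1 = nn.L1Loss()

    x1 = torch.rand(1, 3, 64, 64)        # first input image (first style)
    x2 = torch.rand(1, 3, 64, 64)        # second input image (second style)
    ref2 = torch.rand(1, 3, 64, 64)      # reference image in the second style
    ref1 = torch.rand(1, 3, 64, 64)      # reference image in the first style

    c1 = G1(x1)                          # first converted image (second style)
    c2 = G2(x2)                          # second converted image (first style)
    c3 = G1(c2)                          # third converted image (second style)
    c4 = G2(c1)                          # fourth converted image (first style)

    # Color tone errors used by the first and second identifiers.
    first_color_tone_error = l1(tone(x1), tone(ref2))
    second_color_tone_error = l1(tone(x2), tone(ref1))

    # Shape-related cycle errors: each input vs. its twice-converted image.
    first_error = l1(x1, c4)             # first input vs. fourth converted
    fourth_error = l1(x2, c3)            # second input vs. third converted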

Prosecution Timeline

May 24, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592025: IMAGE RENDERING BASED ON LIGHT BAKING (2y 5m to grant; granted Mar 31, 2026)
Patent 12586250: COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM (2y 5m to grant; granted Mar 24, 2026)
Patent 12586254: High-quality Rendering on Resource-constrained Devices based on View Optimized RGBD Mesh (2y 5m to grant; granted Mar 24, 2026)
Patent 12579751: TECHNIQUES FOR PARALLEL EDGE DECIMATION OF A MESH (2y 5m to grant; granted Mar 17, 2026)
Patent 12561896: INSERTING THREE-DIMENSIONAL OBJECTS INTO DIGITAL IMAGES WITH CONSISTENT LIGHTING VIA GLOBAL AND LOCAL LIGHTING INFORMATION (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 87% (+1.3%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 387 resolved cases by this examiner; grant probability is derived from the career allow rate.
