Prosecution Insights
Last updated: April 19, 2026
Application No. 18/555,441

APPARATUS AND METHOD OF VARIABLE IMAGE PROCESSING

Non-Final OA: §101, §103
Filed: Oct 13, 2023
Examiner: TERRELL, EMILY C
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Mindtech Global Limited
OA Round: 1 (Non-Final)
Grant Probability: 59% (Moderate)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 59% (316 granted / 537 resolved; -3.2% vs TC avg)
Interview Lift: +35.4% (strong) for resolved cases with interview
Avg Prosecution (typical timeline): 2y 8m; 18 applications currently pending
Total Applications (career history): 555 across all art units

Statute-Specific Performance

§101: 4.2% (-35.8% vs TC avg)
§103: 54.8% (+14.8% vs TC avg)
§102: 20.9% (-19.1% vs TC avg)
§112: 15.8% (-24.2% vs TC avg)
"vs TC avg" compares against the Tech Center average estimate • Based on career data from 537 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-16 are pending.

Priority
Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d).

Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/07/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information referred to therein has been considered by the examiner.

Title
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. See MPEP 606.01 [R-2].

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, as follows. Claim 16 is drawn to a "computer-readable storage medium comprising instructions". By using the word "comprising", the claim defines the computer-readable medium as computer code, which is non-statutory subject matter. The Examiner suggests using "embodied with", "storing" or "encoded with" instead. In addition, the specification does not preclude transitory signals by way of explicit definition. Given the broadest reasonable interpretation consistent with the specification and the state of the art, the full scope of the claimed "computer-readable storage medium" covers both transitory and non-transitory media. Transitory media include signals, carrier waves, etc. on which executable code is recorded and from which computers acquire such code. Transitory media do not fall within the definition of a process, machine, manufacture, or composition of matter (In re Nuijten), and are therefore non-statutory. The examiner suggests clarifying the claim(s) to exclude such non-statutory signal embodiments, such as (but not limited to) by adding the modifier "non-transitory" to the claimed medium.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Zisimopoulos et al. (hereafter "Zisimopoulos", US 2019/0164012) in view of applicant admitted prior art (hereafter "AAPA", see page 2 of the specification on the conventional process of style transfer).
Regarding claim 1, Zisimopoulos discloses a process (Fig. 7) for applying style images ISij (Style images 705) to at least one content image ICi (Virtual Image 735) containing entity classes i (i: 1, 2, ...M) (pg. [0091] "label-to-label style transfer, allowing region-based style transfer from style to content images"; the regions correspond to the claimed entity classes, see e.g. Fig. 11), wherein attributes of a plurality j of style images (ISij: ISi1, ISi2, ..., ISiN) (Fig. 7, units 715, 720 and 725; pg. [0106], the mean and the covariance matrix of feature vector Φs, i.e. the "attributes", of the style images), each containing entity classes i (i: 1, 2, ...M) (pg. [0091] "label-to-label style transfer"), are transferred to the content image ICi (Fig. 7, style transfer 730), the process comprising the steps, for each of the entity classes i (i: 1, 2, ...M) (multiple styles and multiple classes style transfer, see e.g. pg. [0109]-[0120]), of:

down-sampling the at least one content image ICi to derive a content feature vector FCi (Fig. 7, the encoder 710 encodes the virtual image 735; pg. [0106], the vectorized feature representation Φc corresponds to the claimed content feature vector);

down-sampling the j style images ISij to derive j style feature vectors (FSij: FSi1, FSi2, ..., FSiN) (Fig. 7, the encoder 710 encodes the style images 705; pg. [0106], the vectorized feature representation Φs corresponds to the claimed style feature vector);

stylizing the content feature vector FCi, by transferring attributes of the style feature vectors (FSij: FSi1, FSi2, ..., FSiN) to the content feature vector FCi, to derive j stylized content feature vectors (FCSij: FCSi1, FCSi2, ..., FCSiN) (Fig. 7, style transfer 730; pg. [0106], equation (8), ΦCS corresponds to the claimed stylized content feature vectors);

inputting a plurality of variable blending factors (αij: αi0, αi1, αi2, ..., αiN) (pg. [0106], equation (9); (1-α) and α in equation (9) correspond to the claimed αi0 and αi1);

combining a factor αi0 of the content feature vector FCi with a factor (αij: αi1, αi2, ..., αiN) of each of the respective stylized content feature vectors (FCSij: FCSi1, FCSi2, ..., FCSiN) to derive a blended feature vector Fi* (pg. [0106], equation (9), Φwct corresponds to the claimed blended feature vector);

up-sampling the blended feature vector Fi* to generate a blended stylized content image ICSi (Fig. 7, decoder 717 performs up-sampling; styled virtual image 750 corresponds to the claimed blended stylized content image),

wherein the stylizing step comprises transforming the content feature vector FCi, wherein the content feature vector FCi acquires a subset of the attributes of the style feature vector (FSij: FSi1, FSi2, ..., FSiN) (pg. [0106], equations (7) and (8), "allows the coloring transform to inject the style into the feature representation Φc").

Even though Zisimopoulos discloses using an encoder/decoder architecture for generating a styled image (Fig. 8 and pg. [0093]), and the symbols shown in Fig. 9 seem to indicate down-sampling by the encoder and up-sampling by the decoder, Zisimopoulos does not expressly disclose that the encoder performs down-sampling and the decoder performs up-sampling. However, as stated in AAPA, "conventional processes which typically (but not exclusively) use neural network architecture, comprising an 'autoencoder' ... The autoencoder has two major parts: an 'encoder', which down-samples a given output in both the content image and style image to produce a compact 'feature vector' for each, and a 'decoder' which up-samples the compact feature vector of the original input images". Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to incorporate the well-known features as described in AAPA into Zisimopoulos's system to yield the invention as described in claim 1.
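To make the blending step that the examiner maps to equation (9) concrete, here is a minimal NumPy sketch that combines a content feature vector with one or more stylized content feature vectors using variable blending factors. The function name, vector shapes, and the sum-to-one assumption (cf. claim 11) are illustrative only; this is not the applicant's or Zisimopoulos's implementation.

```python
# Illustrative sketch (not from the record): blend a content feature
# vector with stylized content feature vectors using variable blending
# factors, in the spirit of equation (9) generalized to N styles.
import numpy as np

def blend_features(f_content, stylized_list, alphas):
    """Return alpha_0 * F_C + sum_j alpha_j * F_CS_j.

    f_content:     content feature vector F_C (1-D array)
    stylized_list: list of stylized content feature vectors F_CS_j
    alphas:        blending factors [alpha_0, alpha_1, ..., alpha_N],
                   assumed here to sum to one (cf. claim 11)
    """
    alphas = np.asarray(alphas, dtype=float)
    assert len(alphas) == len(stylized_list) + 1
    blended = alphas[0] * np.asarray(f_content, dtype=float)
    for a_j, f_cs in zip(alphas[1:], stylized_list):
        blended += a_j * np.asarray(f_cs, dtype=float)
    return blended

# Single-style case, matching (1 - alpha) * F_C + alpha * F_CS:
f_c = np.random.rand(512)
f_cs = np.random.rand(512)
alpha = 0.8
f_star = blend_features(f_c, [f_cs], [1.0 - alpha, alpha])
```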
Regarding claim 2, Zisimopoulos in view of AAPA discloses the process of claim 1, wherein the combining step comprises generating a weighted average of the content feature vector FCi and the stylized content feature vectors FCSij using the plurality of variable blending factors (αij: αi0, αi1, αi2, ..., αiN) as weighting factors (pg. [0106], equation (9)).

Regarding claim 3, Zisimopoulos in view of AAPA discloses the process of claim 1, wherein the combining step comprises combining a blending factor αi0 of the content feature vector FCi with a sum of the plurality of variable blending factors αij of the stylized content feature vectors FCSij, according to the relation Fi* = αi0 FCi + Σ(j=1..N) αij FCSij (reproduced as an equation image in the original action) (pg. [0106], equation (9); with N=1, (1-α) and α correspond to the claimed αi0 and αi1, respectively).

Regarding claim 4, Zisimopoulos in view of AAPA discloses the process of claim 1, wherein the stylizing step comprises at least a transformation of coloring (pg. [0106], Whitening and Coloring Transform (WCT)).

Regarding claim 5, Zisimopoulos in view of AAPA discloses the process of claim 4, wherein the attributes of the style feature vectors (FSij: FSi1, FSi2, ..., FSiN) are statistical properties of the style feature vectors (pg. [0106], "transform Φc to approximate the covariance matrix of Φs"; "Prior to whitening, the mean is subtracted from the features Φc and the mean of Φs is added to Φcs after recoloring". The mean and the covariance matrix of feature vector Φs correspond to the claimed attributes).

Regarding claim 6, Zisimopoulos in view of AAPA discloses the process of claim 5, wherein the attributes of the style feature vectors (FSij: FSi1, FSi2, ..., FSiN) are a mean and covariance of the style feature vectors (see the analysis of claim 5).

Regarding claim 7, Zisimopoulos in view of AAPA discloses the process of claim 1, but fails to expressly disclose computing a quality parameter Q of the blended stylized content image ICSi for a range of values of the plurality of variable blending factors (αij: αi0, αi1, αi2, ..., αiN). However, as indicated in Zisimopoulos (pg. [0129]-[0130]), in order to achieve a more realistic result, different α values are selected for different segmentation labels (α=0.8 for iris, cornea and skin, α=0.5 for the eye ball and α=0.3 for the tools). Defining a parameter to measure the quality of a processed image is well known and common practice in the art; to automatically assess how realistic a stylized eye is, for example, a preset quality measure is certainly required. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to yield the invention as described in claim 7 from the teachings of Zisimopoulos in view of AAPA.

Regarding claim 8, please refer to the analysis of claim 7; finding an optimal α value so as to achieve the highest quality (e.g., the most realistic result) would have been quite obvious to a person of ordinary skill in the art. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to yield the invention as described in claim 8 from the teachings of Zisimopoulos in view of AAPA.

Regarding claims 9 and 10, both IOU and FID are common metrics used to assess the quality of images created by a generative model (see e.g. section 3.4 of the Balint paper of 2020, cited but not relied upon). Using either of them to measure image quality involves only routine skill in the art. Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to which the claimed invention pertains to yield the invention as described in claims 9 and 10 from the teachings of Zisimopoulos in view of AAPA.
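The quality sweep discussed for claims 7-10 can be sketched in the same spirit: evaluate the blended result over a range of blending factors and keep the value that maximizes a scalar quality parameter Q. The sketch below uses mask-overlap IoU, one of the metrics the examiner cites, as a stand-in for Q; the render_and_segment callable and all other names are hypothetical assumptions, and FID or any other scalar measure could be substituted.

```python
# Illustrative sketch (hypothetical helpers, not from the record):
# sweep the blending factor alpha and score each blended result with a
# scalar quality parameter Q, here an IoU between segmentation masks.
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def best_alpha(f_content, f_stylized, render_and_segment, ref_mask,
               alphas=np.linspace(0.0, 1.0, 11)):
    """Pick the alpha whose blended result scores highest.

    render_and_segment is a hypothetical callable that up-samples the
    blended feature vector to an image and returns its boolean mask.
    """
    scores = []
    for a in alphas:
        blended = (1.0 - a) * f_content + a * f_stylized
        pred_mask = render_and_segment(blended)
        scores.append(iou(pred_mask, ref_mask))
    return alphas[int(np.argmax(scores))], max(scores)
```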
Regarding claim 11, Zisimopoulos in view of AAPA discloses the process of claim 1, wherein a sum of the plurality of variable blending factors (αij: αi0, αi1, αi2, ..., αiN) is equal to one (Zisimopoulos, pg. [0106], equation (9)).

Regarding claim 12, Zisimopoulos in view of AAPA discloses the process of claim 11, wherein j=1, wherein the inputting step comprises inputting a single blending factor αi1, and wherein the combining step comprises combining a proportion αi0 = (1 - αi1) of the content feature vector FCi with a proportion of the stylized content feature vector FCSi1 according to the relation Fi* = (1 - αi1) FCi + αi1 FCSi1 (Zisimopoulos, pg. [0106], equation (9)).

Regarding claim 13, Zisimopoulos in view of AAPA discloses the process of claim 1, wherein the process is implemented by a computer (Zisimopoulos, Fig. 15).

Claims 14-16 have been analyzed and are rejected for the same reasons as outlined above in the rejection of claim 1. Zisimopoulos's system is computer-based; processor(s) and storage(s) are the main building blocks of a computer system, and Zisimopoulos's system includes a GPU (pg. [0081]).

Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LI LIU, whose telephone number is (571) 270-5363. The examiner can normally be reached Monday-Friday, 8:00 AM-4:30 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LI LIU/
Primary Examiner, Art Unit 2666

Prosecution Timeline

Oct 13, 2023
Application Filed
Oct 20, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586167
MEDICAL IMAGE PROCESSING APPARATUS AND MEDICAL IMAGE PROCESSING METHOD
2y 5m to grant • Granted Mar 24, 2026
Patent 12573072
SYSTEM AND METHOD FOR OBJECT DETECTION IN DISCONTINUOUS SPACE
2y 5m to grant • Granted Mar 10, 2026
Patent 12561956
AFFORDANCE-BASED REPOSING OF AN OBJECT IN A SCENE
2y 5m to grant • Granted Feb 24, 2026
Patent 12518397
AUTOMATED DETERMINATION OF A BASE ASSESSMENT FOR A POSE OR MOVEMENT
2y 5m to grant • Granted Jan 06, 2026
Patent 12493960
USER INTERFACE FOR VISUALIZING DIFFERENCES BETWEEN MEDICAL IMAGE CONTOURINGS
2y 5m to grant • Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 59%
With Interview: 94% (+35.4%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 537 resolved cases by this examiner. Grant probability derived from career allow rate.
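The with-interview figure appears to be the base grant probability plus the interview lift; this is an inference from the displayed numbers, not a documented formula:

```latex
% Assumed additive relation, inferred from the figures shown above:
0.59\ (\text{career allow rate}) + 0.354\ (\text{interview lift}) = 0.944 \approx 94\%
```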
