Prosecution Insights
Last updated: April 19, 2026
Application No. 18/554,763

IMAGE PROCESSING METHOD AND APPARATUS, AND COMPUTER READABLE STORAGE MEDIUM

Non-Final OA: §103, §112

Filed: Apr 17, 2024
Examiner: CESE, KENNY A
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: BEIJING WODONG TIANJUN INFORMATION TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 11m
Grant Probability With Interview: 86%

Examiner Intelligence

Career Allow Rate: 75% (517 granted / 687 resolved); +13.3% vs TC avg (above average)
Interview Lift: +10.3% (moderate); resolved cases with vs. without an interview
Typical Timeline: 2y 11m average prosecution; 48 applications currently pending
Career History: 735 total applications across all art units
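The headline figures above reduce to simple arithmetic on the career counts. A minimal sketch, assuming the "with interview" probability is just the career allow rate plus the interview lift in percentage points (the dashboard's actual model is not disclosed):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

base = allow_rate(517, 687)      # ≈ 75.3%, displayed rounded to 75%
with_interview = base + 10.3     # ≈ 85.6%, displayed rounded to 86%

print(f"base grant probability: {base:.1f}%")
print(f"with interview:         {with_interview:.1f}%")
```

Note that the rounded display values (75% and 86%) are consistent only if the lift is added to the unrounded base rate.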

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)

Deltas are relative to a Tech Center average estimate. Based on career data from 687 resolved cases.
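Each per-statute delta implies a Tech Center baseline (baseline = rate - delta). A quick check on the figures from the panel above shows every implied baseline landing at 40.0%, which suggests the deltas are measured against a single blended average rather than four separate per-statute figures:

```python
# (rate %, delta vs TC avg) pairs copied from the panel above.
stats = {
    "101": (9.2, -30.8),
    "103": (54.5, 14.5),
    "102": (12.2, -27.8),
    "112": (22.1, -17.9),
}

implied = {s: rate - delta for s, (rate, delta) in stats.items()}
for statute, baseline in implied.items():
    print(f"§{statute}: implied TC average ≈ {baseline:.1f}%")
```

This is an observation about the published numbers only; the provider's actual baseline methodology is not stated.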

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) filed on 4/17/2024 contains Non-Patent Literature Documents 1 and 2, which were not translated into English. The references were not considered. The information disclosure statement (IDS) filed on 10/10/2023 was considered and placed on the file of record by the examiner. Non-Patent Literature Document 1 was not provided and therefore was not considered.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 3, 4, 21, and 22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Dependent claims 2-8 and 10-19 are rejected based on their dependency on indefinite independent claims 1, 21, and 22; the dependent claims do not overcome the vague and indefinite claims 1, 21, and 22.

The following claim 1, 21, and 22 elements are vague and indefinite. The Examiner suggests the applicant clearly define what is generated and how it is generated: "generating multiple new style representations and updating the source domain content representations and the target domain style representations with an objective that the multiple new style representations, which are different from each other, are different from source domain style representations of the source domain images and the target domain style representations, and that images generated by combining the multiple new style representations and the source domain content representations are semantically consistent with the source domain images."

The following claim 3 elements are vague and indefinite: "wherein the style encoder comprises a style representation extraction network and a clustering module and the extracting the target domain style representations of the target domain images using a style encoder comprises: inputting the target domain images to the style representation extraction network to obtain basic style representations of the target domain images; and inputting the basic style representations of the target domain images to the clustering module for clustering to obtain representation vectors of clustering centers as the target domain style representations."

The following claim 4 elements are vague and indefinite. It is not clear how one condition is applied to multiple functions, since loss functions each contain a convergence condition or threshold. Also, "the objective" lacks antecedent basis: "adjusting the new style representations according to the first loss functions, the second loss functions, and the third loss functions until a preset convergence condition corresponding to the objective is satisfied, to obtain the multiple new style representations."

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (US 2022/0083807) in view of Zhang et al. (US 2020/0073968) ("Zhang II").

Regarding claim 1, Zhang teaches an image processing method, comprising: obtaining source domain content representations of source domain images and target domain style representations of target domain images (see para. 0074, 0077; Zhang discusses a Style Generative Adversarial Network); generating multiple new style representations and updating the source domain content representations and the target domain style representations with an objective that the multiple new style representations, which are different from each other, are different from source domain style representations of the source domain images and the target domain style representations (see figure 4, figure 5A, para. 0074, 0077; Zhang discusses a Style Generative Adversarial Network that generates multiple versions of images. The Examiner notes that the claim language is vague and indefinite); generating first images by combining the multiple new style representations with the updated source domain content representations and generating second images by combining the updated target domain style representations with the updated source domain content representations (see para. 0077, 0098; Zhang discusses combining different versions of an image to generate a combined image, the combined image or combined feature map being based on a sum of channels of feature maps from versions of an image); and training an object detection model using the first images, the second images, and the source domain images to obtain the trained object detection model (see para. 0067-0068; Zhang discusses a training dataset generated using a generative adversarial network (GAN) that generates synthetic images, and an associated trained neural network that generates labels for the synthetic images generated by the GAN. The neural network is trained using supervised learning, wherein the training dataset includes an input paired with a desired output for the input).

Zhang II teaches that images generated by combining the multiple new style representations and the source domain content representations are semantically consistent with the source domain images (see para. 0032; Zhang II discusses semantic consistency across the sketch and image domains; see also para. 0053; Zhang II discusses maintaining semantic consistency and visual similarity of intra-class instances across domains).

Motivation to combine may be gleaned from the prior art considered. It would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to modify the invention of Zhang with Zhang II to arrive at the invention of claim 1. The result would have been expected, routine, and predictable in order to perform object detection model training. The determination of obviousness is predicated upon the following: one skilled in the art would have been motivated to modify Zhang in this manner in order to improve object detection model training by allowing the network to have style and appearance diversity in images while maintaining the same semantic content, by generating images that combine style and source content, thereby creating a more robust network. Furthermore, the prior art collectively includes each element claimed (though not all in the same reference), and one of ordinary skill in the art could have combined the elements in this manner using known engineering design, interface, and/or programming techniques, without changing a fundamental operating principle of Zhang, while the teaching of Zhang II continues to perform the same function as originally taught prior to being combined, in order to produce the repeatable and predictable result of generating style and content images and combining the images to create an object detection model that avoids distortions during object localization across varying styles such as lighting, weather, camera properties, etc. The Zhang and Zhang II systems both perform image generation using a neural network; therefore, one of ordinary skill in the art would have a reasonable expectation of success in the combination. It is for at least the aforementioned reasons that the examiner has reached a conclusion of obviousness with respect to the claim in question.

Claim 21 is rejected as applied to claim 1 as pertaining to a corresponding apparatus. Claim 22 is rejected as applied to claim 1 as pertaining to a corresponding non-transitory computer-readable storage medium.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Choi et al. (US 2021/0042558) discusses generating, by using one or more neural networks, a composite image by combining the input image with the obtained target object image. Guows et al. (US 2020/0342643) discusses semantically-consistent image style transfer with a neural network that generates an output target domain image that is from the target domain but has similar semantics to the input source domain image. Alvarez Lopez et al. (US 12,406,023) discusses complex targets that are semantically consistent with a label space of a teacher's dataset. Li et al. (US 2021/0287430) discusses applying the input sets to an autoencoder, training a generative adversarial network (GAN) on an output of the autoencoder, and comparing outputs of the autoencoder and the GAN.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNY A CESE, whose telephone number is (571) 270-1896. The examiner can normally be reached Monday - Friday, 9am - 4pm. If attempts to reach the primary examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Kenny A Cese/
Primary Examiner, Art Unit 2663
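For orientation on the subject matter being rejected, the claim 1 pipeline (content and style representations, new style representations constrained to differ from the existing styles, recombination into training images) can be caricatured in a few lines. Everything below is illustrative only: the functions, dimensions, and distance threshold are invented for this sketch and come from neither the application nor the cited Zhang references.

```python
import random

random.seed(0)

def combine(content, style):
    # Toy "decoder": an image is just the content code plus the style code.
    return [c + s for c, s in zip(content, style)]

def dist(a, b):
    # Euclidean distance between two codes.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

source_style   = [0.0, 0.0]   # style representation of the source domain
target_style   = [1.0, 0.0]   # style representation of the target domain
source_content = [0.5, 0.5]   # content representation of a source image

# Claimed objective, caricatured: new styles must differ from the source
# style, the target style, and each other. Here "differ" is a minimum
# distance of 0.8 instead of a trained loss.
new_styles = []
while len(new_styles) < 3:
    cand = [random.uniform(-2, 2), random.uniform(-2, 2)]
    anchors = [source_style, target_style] + new_styles
    if min(dist(cand, s) for s in anchors) > 0.8:
        new_styles.append(cand)

# "First images": new styles combined with source content; these (plus the
# source images) would then feed object-detector training.
first_images = [combine(source_content, s) for s in new_styles]
```

The missing piece relative to the claim, and the point the examiner leans on Zhang II for, is a semantic-consistency constraint tying the generated images back to the source images; here nothing enforces that.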

Prosecution Timeline

Apr 17, 2024
Application Filed
Jan 31, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602794: METHOD AND UNIFIED FRAMEWORK SYSTEM FOR FULL-STACK AUTONOMOUS DRIVING PLANNING (granted Apr 14, 2026; 2y 5m to grant)
Patent 12591980: GROUND PLANE FILTERING OF VIDEO EVENTS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12573049: POINT CLOUD SEGMENTATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566947: IMAGE PROCESSING SYSTEM AND MEDICAL INFORMATION PROCESSING SYSTEM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561756: SUPER-RESOLUTION IMAGE PROCESSING (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75% (86% with interview, a +10.3% lift)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 687 resolved cases by this examiner. Grant probability derived from career allow rate.
