Prosecution Insights
Last updated: April 19, 2026
Application No. 18/518,609

IMAGE ANALYSIS METHOD AND IMAGE ANALYSIS SYSTEM

Final Rejection: §101, §102, §103
Filed: Nov 24, 2023
Examiner: LI, RUIPING
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: MediaTek Inc.
OA Round: 2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 77% (above average; 722 granted / 933 resolved; +15.4% vs TC avg)
Interview Lift: +18.0% (strong; measured over resolved cases with interview)
Typical Timeline: 2y 10m avg prosecution; 40 currently pending
Career History: 973 total applications across all art units

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 41.2% (+1.2% vs TC avg)
§102: 25.9% (-14.1% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Tech Center average is an estimate • Based on career data from 933 resolved cases
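The four deltas above are internally consistent: subtracting each delta from the examiner's statute-specific rate recovers the same implied Tech Center baseline. A quick sketch of that check (pure arithmetic on the figures shown, not an API of this tool):

```python
# Examiner's statute-specific rates and their deltas vs the Tech Center
# average, as shown above (in percentage points).
examiner = {"101": 13.0, "103": 41.2, "102": 25.9, "112": 13.7}
delta = {"101": -27.0, "103": +1.2, "102": -14.1, "112": -26.3}

# Recover the implied Tech Center baseline for each statute.
tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(tc_avg)  # every statute implies the same 40.0% baseline
```

That all four statutes recover an identical ~40% baseline suggests the deltas are computed against a single Tech Center average estimate.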

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This is in response to the applicant's response filed on 02/12/2026. In that response, claims 1-13 and 17 were amended. Accordingly, claims 1-20 are pending and being examined. Claims 1 and 11 are in independent form.

Claim Rejections - 35 USC § 101

3. The claim rejections under 35 USC § 101 made in the previous office action, mailed on 11/14/2025, are STILL MAINTAINED because the claimed inventions are directed to non-statutory subject matter (an abstract idea without significantly more). For example, although independent claim 1 has been amended, the added additional element of "an operation processor" is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic processor. Therefore, the claim as a whole does not integrate the judicial exception into a practical application and is not patent eligible. Likewise, independent claim 11 is analogous to claim 1 and is not patent eligible. Their respective dependent claims are patent ineligible as well, as explained in the previous office action.

Claim Rejections - 35 USC § 102

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

5. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

6. Claims 1-8, 10-18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chen et al. ("Unsupervised Domain Adaptation for Semantic Segmentation via Low-level Edge Information Transfer", 2021, hereinafter "Chen").

Regarding claim 1, Chen discloses an image analysis method applied to edge learning and semantic segmentation based domain adaptation and further applied to an image analysis system having an operation processor (the semantic-edge domain adaptation (SEDA) method for image segmentation; see Abstract, fig. 2, and Sec. 1, para. 4), the image analysis method comprising: the operation processor acquiring at least one input image respectively from a source domain and a target domain (see the acquired "source image X_s", "source label Y_s", and "target image X_t" in fig. 2); the operation processor utilizing a shared domain invariant encoder to analyze the input image to generate an edge feature and a semantic segmentation feature of the input image (see Sec. 3.2, para. 2: "the [edge stream] G_eg takes the output of the first convolutional layer of [semantic stream] G_sem as input and aims to yield precise semantic boundary maps B_s and B_t. To this end, G_eg is first trained by minimizing binary cross-entropy loss over the source domain. Ground truth of semantic boundaries can be directly generated from source semantic labels." In other words, the edge stream G_eg generates the invariant semantic boundary maps B_s and B_t (i.e., the "source edge map" and the "target edge map" in fig. 2) based on the input images, i.e., the source image X_s, the source label Y_s, and the target image X_t in fig. 2); and the operation processor utilizing a correlation module to analyze the edge feature and the semantic segmentation feature to generate a final semantic segmentation loss and a final edge loss relevant to the input image (minimizing the edge consistency loss L_eg^con, defined by Eq. (6), based on the semantic boundary map B_t, to generate the final semantic boundary map P_t; see Eq. (6) and corresponding paragraphs).

Regarding claims 2 and 12, Chen discloses further comprising: transmitting the input image of a source domain and the input image of a target domain to a shared domain invariant encoder to acquire a shared latent embedding feature; and transforming the shared latent embedding feature into the edge feature and a semantic segmentation feature via different task specific branches (ibid., wherein the boundary maps B_s and B_t are semantic and domain-invariant segmentation edge features output by the edge stream encoder G_eg for the source and target images, respectively. Note that the boundary representations B_s and B_t have "a smaller inter-domain gap in comparison with high-level semantic feature and can be shared in different domains"; see Sec. 1, para. 3).
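The passage quoted above from Chen's Sec. 3.2 describes training the edge stream with binary cross-entropy against boundary ground truth derived directly from the source semantic labels. As a rough illustration of that idea only (a minimal NumPy sketch; not code from the application or from Chen):

```python
import numpy as np

def boundary_from_labels(labels):
    """Derive a binary boundary map from a semantic label map: a pixel is
    marked as boundary where its label differs from its right or lower
    neighbor, so ground truth comes directly from the labels."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # horizontal transitions
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]   # vertical transitions
    return b.astype(float)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy between a predicted boundary probability map
    and the derived ground-truth boundary map."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1.0 - target) * np.log(1.0 - pred)))

# Toy source label map: a class-0/class-1 split above a class-2 region.
Ys = np.array([[0, 0, 1],
               [0, 0, 1],
               [2, 2, 2]])
gt = boundary_from_labels(Ys)                 # boundaries at class borders
loss = bce_loss(np.full(Ys.shape, 0.5), gt)   # uninformative prediction
```

A perfect prediction drives the loss toward zero, while the uniform 0.5 prediction above stays at log 2, which is the gradient signal that trains the edge stream on the source domain.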
Regarding claims 3 and 13, Chen discloses further comprising: transforming the shared latent embedding feature into initial edge prediction by one of the task specific branches and then acquiring initial edge loss based on the initial edge prediction (wherein the edge stream encoder G_eg is first trained by the edge based adversarial loss L_eg^adv defined by Eq. (5) to obtain the initial edge feature H_s and the initial edge feature H_t; see Eq. (5) and Sec. 3.2); utilizing the edge feature to generate final edge loss relevant to the input image; and generating edge loss via the initial edge loss and the final edge loss to feedback to the foresaid task specific branch (wherein the edge stream encoder G_eg is further trained by the edge consistency based adversarial loss L_eg^con defined by Eq. (6) using the edge map B_t to generate the final semantic boundary/edge map P_t; see Eq. (6) and Sec. 3.2).

Regarding claims 4 and 14, Chen discloses further comprising: encrypting the shared latent embedding feature via an encoder of the foresaid task specific branch for generating the edge feature (wherein the boundary representations B_s and B_t output by the edge stream encoder G_eg are intermediate 'latent' boundary values which are only used by the edge consistency loss L_eg^con defined by Eq. (6); the boundary representations B_s and B_t therefore are encrypted by the edge stream encoder G_eg); and decoding the edge feature into the initial edge prediction via a decoder of the foresaid task specific branch in accordance with an original domain of the input image (wherein the edge stream encoder G_eg is first trained by the edge based adversarial loss L_eg^adv defined by Eq. (5), including the edge decoder D_eg, to obtain the initial edge feature H_s and the initial edge feature H_t; see Eq. (5) and Sec. 3.2).
Regarding claims 5 and 15, Chen discloses further comprising: transforming the shared latent embedding feature into initial semantic segmentation prediction by one of the task specific branches and then acquiring initial semantic segmentation loss based on the initial semantic segmentation prediction ("[the semantic stream encoder] G_sem takes a target image as input and output[s] the [initial] semantic prediction map P_s [by minimizing Eq. (1)]. Then, the [initial] weighted self-information map I_s is calculated" by Eq. (2); see Eqs. (1)-(2) and Sec. 3.1); and generating semantic segmentation loss via the initial semantic segmentation loss and the final semantic segmentation loss to feedback to the foresaid task specific branch (wherein the semantic stream encoder G_sem is further trained by minimizing the entropy reweighting adversarial loss L_sem^adv defined by Eq. (4) to obtain the final semantic prediction map P_t using Eqs. (1)-(4), since I_t and P_t have a relationship defined by Eq. (2); see Sec. 3.1).

Regarding claims 6 and 16, Chen discloses further comprising: encrypting the shared latent embedding feature via an encoder of the foresaid task specific branch for generating the semantic segmentation feature (wherein the boundary representations B_s and B_t output by the edge stream encoder G_eg are intermediate 'latent' boundary values obtained by minimizing Eq. (6) and fed back to the semantic stream encoder G_sem; the boundary representations B_s and B_t therefore are encrypted by the edge stream encoder G_eg); and decoding the semantic segmentation feature into the initial semantic segmentation prediction via a decoder of the foresaid task specific branch in accordance with an original domain of the input image (wherein the semantic stream encoder G_sem takes a target image as input and outputs the initial semantic prediction map P_s by minimizing Eq. (1) based on the semantic label Y_s; then the initial weighted self-information map I_t is calculated by minimizing the entropy reweighting adversarial loss L_sem^adv defined by Eq. (4), including the semantic decoder D_sem; see Eqs. (1)-(4) and Sec. 3.1).

Regarding claims 7 and 17, Chen discloses further comprising: utilizing a correlation module and at least one decoder to transform the edge feature and the semantic segmentation feature respectively to the final edge loss and the final semantic segmentation loss (using the edge decoder D_eg and minimizing the edge based adversarial loss L_eg^adv defined by Eq. (5) to obtain the initial edge feature H_t, and then generating the final edge B_t by minimizing the edge consistency loss L_eg^con defined by Eq. (6); minimizing Eq. (1) to output the initial semantic prediction map P_s, and using the semantic decoder D_sem to generate the final semantic prediction map I_s, i.e., the final semantic prediction map P_s, by minimizing the entropy reweighting adversarial loss L_sem^adv defined by Eq. (4); see fig. 2 and Sec. 3).

Regarding claims 8 and 18, Chen discloses further comprising: utilizing a convolution function to compute task specific intermediate embedding features of the edge feature and the semantic segmentation feature, for acquiring a modular edge feature corresponding to a final edge output prediction and a modular semantic segmentation feature corresponding to a final semantic segmentation output prediction (see Sec. 3.2, para. 2: "A gated convolutional layer is introduced in G_eg to ensure that G_eg only processes edge-relevant information. Specifically, G_eg takes the output of the first convolutional layer of G_sem as input and aims to yield precise semantic boundary maps B_s and B_t.").
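The gated convolutional layer quoted above is the mechanism by which G_eg keeps only edge-relevant information from the semantic stream's features. A common way such gating is implemented is an element-wise sigmoid gate (a generic sketch under that assumption; Chen's actual layer may differ in detail):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_filter(feat, gate_logits):
    """Element-wise gating: a sigmoid gate in (0, 1) decides, per pixel,
    how much of the incoming feature passes through, letting the edge
    branch suppress non-edge information."""
    return feat * sigmoid(gate_logits)

feat = np.ones((2, 2))
passed = gated_filter(feat, np.full((2, 2), 8.0))    # gate nearly open
blocked = gated_filter(feat, np.full((2, 2), -8.0))  # gate nearly closed
```

The same sigmoid saturation behavior is what makes the gate act as a soft on/off switch per spatial location.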
Regarding claims 10 and 20, Chen discloses further comprising: transforming the modular edge feature into the final edge output prediction for generating the final semantic segmentation loss by an edge decoder (using the edge decoder D_eg and minimizing the edge based adversarial loss L_eg^adv defined by Eq. (5) to obtain the initial edge feature H_t, and then generating the final edge B_t by minimizing the edge consistency loss L_eg^con defined by Eq. (6); see Sec. 3.2); and transforming the modular semantic segmentation feature into the final semantic segmentation output prediction for generating the final edge loss by a semantic segmentation decoder (minimizing Eq. (1) to output the initial semantic prediction map P_s, and using the semantic decoder D_sem to generate the final semantic prediction map I_s, i.e., the final semantic prediction map P_s, by minimizing the entropy reweighting adversarial loss L_sem^adv defined by Eq. (4); see Sec. 3.2).

Regarding claim 11, claim 11 is an inherent variation of claim 1; thus it is interpreted and rejected for the reasons set forth in the rejection of claim 1.

Claim Rejections - 35 USC § 103

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen in view of Benjdira et al. ("Unsupervised Domain Adaptation Using Generative Adversarial Networks for Semantic Segmentation of Aerial Images", 2019, hereinafter "Benjdira").

Regarding claims 9 and 19, Chen discloses further comprising utilizing L_eg^seg, L_eg^adv, and L_eg^con (see Eq. (8) and Sec. 3.3). Chen does not explicitly disclose "utilizing a sigmoid function" as recited by the claim. However, a neuron node with a sigmoid activation function is well known and widely used in the field of machine learning. As evidence, in the same field of endeavor, Benjdira teaches an encoder-decoder convolutional neural network which utilizes a neuron with a sigmoid activation function in the last layer to convert this feature vector into a binary output, for semantic segmentation of images. See "the encoder-decoder architecture of the generator, Fig. 4", and Sec. 4.1. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Benjdira into the teachings of Chen and utilize a sigmoid function to convert a feature vector from one domain to another. Suggestion or motivation for doing so would have been to perform unsupervised domain adaptation using generative adversarial networks (GAN) for semantic segmentation of images, as taught by Benjdira; see Title and Abstract. Therefore, the claims are unpatentable over Chen in view of Benjdira.

Response to Arguments

9. Applicant's arguments, filed on 02/12/2026, have been fully considered but they are not persuasive.

9-1. On pages 8-9 of applicant's response, regarding the rejected claims under 35 USC 101, the applicant argues that claims 1-20 recite "limitations [that] amount to an improve[ment] to an[o]ther technology or technical field, and therefore claims 1-20 amount to significantly more than an abstract idea", but the applicant does not provide evidence about what particular technology or technical field has been improved. Rather, the claimed invention, such as claim 1, merely adds a generic "operation processor" to perform an abstract idea. Therefore, the claim as a whole does not integrate the judicial exception into a practical application and is not patent eligible.

9-2. On page 10 of applicant's response, regarding the claim rejections under 35 USC 102, the applicant argues:

Chen does not disclose any encoder capable of generating a[n] edge map/feature and a semantic segmentation map/feature via analysis of images from the target image and the source image, and therefore fails to teach the inventive characteristic of "the operation processor utilizing a shared domain invariant encoder to analyze the input image to generate an edge feature and a semantic segmentation feature of the input image" of the present application.

The examiner respectfully disagrees with the applicant's arguments for at least the following reasons. As explained in the rejections of the claims, Chen discloses "yield[ing] precise semantic boundary maps B_s and B_t" from the inputs: the source image X_s, the source label Y_s, and the target image X_t. These "precise semantic boundary maps B_s and B_t" are "domain invariant representations"; see Sec. 1, item (2), in the last paragraph. The argument therefore is unpersuasive.

Conclusion

10. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUIPING LI, whose telephone number is (571) 270-3376. The examiner can normally be reached 8:30am-5:30pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, HENOK SHIFERAW, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. See https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RUIPING LI/
Primary Examiner, Ph.D., Art Unit 2676

Prosecution Timeline

Nov 24, 2023
Application Filed
Nov 12, 2025
Non-Final Rejection — §101, §102, §103
Feb 12, 2026
Response Filed
Mar 04, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602754
DYNAMIC IMAGING AND MOTION ARTIFACT REDUCTION THROUGH DEEP LEARNING
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597183
METHOD AND APPARATUS FOR PERFORMING PRIVACY MASKING BY REFLECTING CHARACTERISTIC INFORMATION OF OBJECTS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597289
IMAGE ACCUMULATION APPARATUS, METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586408
METHOD AND APPARATUS FOR CANCELLING ANONYMIZATION FOR AN AREA INCLUDING A TARGET
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12573239
SYSTEM AND METHOD FOR LIVENESS VERIFICATION
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 95% (+18.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 933 resolved cases by this examiner. Grant probability derived from career allow rate.
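The headline projections follow directly from the examiner's career counts: the grant probability is the career allow rate (722 granted of 933 resolved), and the with-interview figure adds the observed +18-point interview lift. A sketch of the arithmetic (the cap at 100% is my assumption, not a documented rule of the tool):

```python
granted, resolved, pending = 722, 933, 40   # career counts shown above
interview_lift = 0.18                       # observed interview lift

allow_rate = granted / resolved                         # ~0.774
with_interview = min(allow_rate + interview_lift, 1.0)  # ~0.954

print(f"{allow_rate:.0%}, {with_interview:.0%}")
# The career history is also internally consistent:
# resolved + currently pending should equal total applications (973).
total = resolved + pending
```

Rounded to whole percentage points, these reproduce the 77% and 95% figures shown above.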
