Prosecution Insights
Last updated: April 19, 2026
Application No. 18/509,243

REGION EXTRACTION MODEL CREATION SUPPORT APPARATUS, METHOD FOR OPERATING REGION EXTRACTION MODEL CREATION SUPPORT APPARATUS, AND PROGRAM FOR OPERATING REGION EXTRACTION MODEL CREATION SUPPORT APPARATUS

Final Rejection §103

Filed: Nov 14, 2023
Examiner: HOANG, HAN DINH
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 2 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (120 granted / 162 resolved; +12.1% vs TC average; above average)
Interview Lift: +19.3% (resolved cases with vs. without an interview)
Typical Timeline: 3y 2m average prosecution; 25 applications currently pending
Career History: 187 total applications across all art units

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 65.7% (+25.7% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 162 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 12/04/2025 have been fully considered but they are not persuasive. The applicant argues on page 4 of the remarks that the cited prior art of Fang et al. (US PG-Pub US 20220301156 A1) does not explicitly teach "direct the region extraction model to output a final feature amount map having element values related to probabilities of being the regions of the classes" or "perform a sharpening process on the final feature amount map or a probability distribution map that has been generated on the basis of the final feature amount map and that shows the probability for each class."

The Examiner respectfully disagrees. Fang discloses directing the region extraction model to output a final feature amount map having element values related to probabilities of being the regions of the classes: ¶[0058] discloses outputting a feature map based on inputting images into the machine learning model, and ¶[0098] discloses, during segmentation, generating a probability related to regions of the image for classification. Under the broadest reasonable interpretation, the claim language requires only generating a feature map and, from that map, determining a probability of the regions, which Fang clearly discloses in the cited paragraphs. Fang also discloses performing a sharpening process on the final feature amount map or a probability distribution map that has been generated on the basis of the final feature amount map and that shows the probability for each class: ¶[0098] discloses predicting an error when classifying the image, and ¶[0101] discloses retraining the network based on the error calculated when generating the feature map and probability map.
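For technical context, the "feature amount map → probability map → sharpening" pipeline being argued over is a standard segmentation pattern. The following is a minimal NumPy sketch, illustrative only: it is not Fang's, Arani's, or the applicant's actual implementation, and the array shapes and temperature values are assumptions.

```python
import numpy as np

def softmax_with_temperature(feature_map, T=0.5):
    """Convert a per-class feature (logit) map into a probability map,
    'sharpening' it with a temperature T <= 1: smaller T pushes each
    pixel's class distribution toward its argmax class.

    feature_map: array of shape (num_classes, H, W) of raw logits
                 (shapes are assumed for illustration).
    Returns an array of the same shape summing to 1 over axis 0.
    """
    z = feature_map / T
    z = z - z.max(axis=0, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

# Toy "final feature amount map": 3 classes on a 2x2 image.
logits = np.array([[[2.0, 0.5], [0.1, 1.0]],
                   [[1.0, 0.4], [0.2, 1.1]],
                   [[0.5, 0.3], [3.0, 0.9]]])

soft = softmax_with_temperature(logits, T=1.0)    # plain softmax probability map
sharp = softmax_with_temperature(logits, T=0.25)  # sharpened map (T < 1)

# Sharpening raises the winning class's probability at every pixel.
assert np.all(sharp.max(axis=0) >= soft.max(axis=0))
```

The point of the sketch is the dispute itself: a temperature-softmax sharpening is a deterministic transform of the map, which is distinct from the "retraining" reading the Office Action applies under broadest reasonable interpretation.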
Under the broadest reasonable interpretation, the claim language requires only performing a sharpening process based on the generated feature map or probability map, and the sharpening process could be characterized as a retraining of the model. The Examiner suggests that the applicant could perhaps overcome the cited prior art by further clarifying what happens in the sharpening process, such as by reciting specific actions.

The applicant argues on page 8 of the remarks that the cited prior art does not disclose the limitations of claim 4. However, Arani teaches wherein the sharpening process is a process of applying a softmax function with temperature having a temperature parameter equal to or less than 1 to the final feature amount map or the probability distribution map (¶[0021], "where σ is the softmax function, z.sub.e are the output logits and T is the temperature which is usually set to 1"). Thus, the applicant's arguments are not persuasive.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2 and 7-12 are rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US PG-Pub US 20220301156 A1) in view of Zhu et al. ("SEGMENTATION WITH RESIDUAL ATTENTION U-NET AND AN EDGE-ENHANCEMENT APPROACH PRESERVES CELL SHAPE FEATURES").

Regarding Claim 1, Fang teaches a region extraction model creation support apparatus that supports creation of a region extraction model for extracting regions of a plurality of classes which are in a subject to be recognized in an image and whose boundaries are in contact with each other (¶[0082], "As shown in 9A, main object detection model 904 and error estimator 906 are initially trained using labeled data including the pairs of the original image 902 and its corresponding ground-truth bounding box and classification label 910. In some embodiments, main object detection model 904 is trained to minimize the difference between the predicted and ground-truth bounding boxes and classes. In some embodiments, main object detection model 904 may be implemented by any object detection network, including R-CNN, YOLO, SSD, CenterNet, CornerNet, etc."; ¶[0082] discloses using an object detection model to determine boundaries of objects in the image using a bounding box);

the region extraction model creation support apparatus comprising: a processor and a memory that is connected to or provided in the processor (¶[0049], "Image analysis device 203 may include a processor and a non-transitory computer-readable medium (discussed in detail in connection with FIG. 3).");

wherein the processor is configured to: use, as training data, a learning input image and local annotation data generated by locally giving labels to the regions of the classes in the learning input image (¶[0045], "As shown in FIG. 2, model training device 202 may communicate with training database 201 to receive one or more sets of training data. In some embodiments, training data may include a first subset of labeled data, e.g., labeled images, and a second subset of unlabeled data, e.g., unlabeled images. 'Labeled data' is training data that includes ground-truth results obtained through human annotation and/or automated annotation procedures. For example, for an image segmentation task, the labeled data includes pairs of original images and the corresponding ground-truth segmentation masks for those images. As another example, for an image classification task, the labeled data includes pairs of original images and the corresponding ground-truth class labels for those images. 'Unlabeled data,' on the other hand, is training data that does not include the ground-truth results."; ¶[0045] discloses that the training data used is labeled data, which pertains to a region of interest in the lung, and unlabeled data of the lung);

direct the region extraction model to output a final feature amount map having element values related to probabilities of being the regions of the classes (¶[0058] discloses outputting a feature map based on the training images input into the model; ¶[0098], "In steps S1304 and S1306, the image analysis task may be any predetermined task to analyze or otherwise process the medical image. In some embodiments, the image analysis task is an image segmentation task, and the learning model is designed to predict a segmentation mask of the medical image, e.g., a segmentation mask for a lesion in the lung region. The segmentation mask can be a probability map. For example, the segmentation learning model and error estimator can be trained using workflow 1100/1150 of FIG. 11A-11B and method 1200 of FIG. 12. In some embodiments, the image analysis task is an image classification task, the learning model is designed to predict a classification label of the medical image. For example, the classification label may be a binary label to indicate whether the medical image contains a tumor, or a multi-class label that indicate what type of tumor the medical image contains"; discloses determining a probability map of the input image to determine a classification for the medical image);

perform a sharpening process on the final feature amount map or a probability distribution map that has been generated on the basis of the final feature amount map and that shows the probability for each class (¶[0098], "the classification learning model and error estimator can be trained using workflow 700/750 of FIG. 7A-7B and method 800 of FIG. 8. In some embodiments, the image analysis task is an object detection task, the learning model is designed to detect an object from the medical image, e.g., by predicting a bounding box surrounding the object and a classification label of the object. For example, coordinates of the bounding box of a lung nodule can be predicted and a class label can be predicted to indicate it is a lung nodule. For example, the object detection learning model and error estimator can be trained using workflow 900/950 of FIG. 9A-9B and method 1000 of FIG. 10."; ¶[0098] discloses predicting an error when classifying the image, and ¶[0101] discloses retraining the network based on the error calculated when generating the feature map and probability map).

Fang does not explicitly teach: detect the boundary on the basis of a result of the sharpening process and update the region extraction model in a direction in which a boundary length loss corresponding to a length of the boundary is reduced.

Zhu teaches: detect the boundary on the basis of a result of the sharpening process and update the region extraction model in a direction in which a boundary length loss corresponding to a length of the boundary is reduced (Page 3, Section 3.4, The Edge-Enhancement Approach, Paragraph 1: "Due to the desire to retain the long cellular extensions characteristic to our cell-line of interest, BHK-21, a special mechanism we implemented is a novel loss function approach that we call edge-enhancement (EE). Normally the loss function is defined as the binary cross-entropy loss between the deep-learning-predicted segmentation and the manually-annotated segmentation ground truth after flattening them into vectors (Figure 2a). For edge-enhancement, we deliberately emphasize the weighting on the accurate prediction of the cell edges (Figure 2b). Specifically, we find the Laplacian-of-Gaussian of the ground truth segmentation, take two binary-thresholded versions of the resulting image (intensities greater than 0.001 for one and intensities less than -0.001 for the other) that respectively corresponds to the inner and outer cell boundaries, vectorize the foreground regions as well as their corresponding regions in the deep learning prediction, and append them to their respective segmentation vectors for loss calculation."; as disclosed in this section, an edge-enhancement process is performed to determine the boundaries in the image, and once the loss is calculated the network is retrained to lower the loss such that the boundaries of the cell are preserved).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Fang with Zhu in order to calculate a loss and update the boundaries of the object being detected. One skilled in the art would have been motivated to modify Fang in this manner in order to improve shape feature preservation for downstream cell tracking and quantification of changes in cell statistics or features over time. (Zhu, Abstract)

Regarding Claim 2, the combination of Fang and Zhu teaches the region extraction model creation support apparatus according to claim 1, and Zhu further teaches wherein the processor is configured to:

direct the region extraction model to output learning output data obtained by extracting the regions of the classes in the learning input image (Page 3, Section 3.5, Post-processing of the Segmented Cells, Paragraph 1: "Once the deep learning cell segmentation is generated, it is then refined by watershed splitting of adjacent touching cells, where the seeds for watershed are extracted using blob detection on the intensity-rescaled difference image between the nuclei image and the cytoplasm image. The cleaned up segmentation result in the form of instantiated cell masks, can be directly provided as input to any suitable downstream tracking algorithm.");

calculate a loss of the region extraction model according to a result of comparison between the local annotation data and the learning output data for local parts to which the labels have been given, and add up the loss and the boundary length loss to obtain a first total loss (Page 3, Section 3.4, The Edge-Enhancement Approach, Paragraph 1, quoted above with respect to claim 1; as disclosed in that section, an edge-enhancement process is performed to determine the boundaries in the image and a loss is calculated based on the areas extracted); and

update the region extraction model in a direction in which the first total loss is reduced (Page 3, Section 3.4, The Edge-Enhancement Approach, Paragraph 1, which discloses that an edge-enhancement process is performed to determine the boundaries in the image and, once the loss is calculated, the network is retrained to lower the loss such that the boundaries of the cell are preserved).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Fang with Zhu in order to calculate a loss and update the boundaries of the object being detected. One skilled in the art would have been motivated to modify Fang in this manner in order to improve shape feature preservation for downstream cell tracking and quantification of changes in cell statistics or features over time. (Zhu, Abstract)

Regarding Claim 7, the combination of Fang and Zhu teaches the region extraction model creation support apparatus according to claim 1, and Zhu further teaches wherein the boundary length loss is an average value of pixel values of a boundary image generated by detecting the boundary from the result of the sharpening process (Section 4.2, Evaluation of Cell Segmentation: Shape Feature Preservation, Paragraph 1: "From Figure 3 and Figure 4, it can be seen that our proposed method performed the best in terms of cell shape feature preservation. As a result, the mean intensity values of the three information-providing channels within the segmentation mask given by our proposed method also best resembled those in the manual annotation ground truth masks. It is reasonable to believe that the same will hold true for other information to be extracted from these segmentation results"; as disclosed in this section, the pixel data is used and averaged to determine the boundaries of the cell shape). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Fang with Zhu in order to calculate a loss and update the boundaries of the object being detected. One skilled in the art would have been motivated to modify Fang in this manner in order to improve shape feature preservation for downstream cell tracking and quantification of changes in cell statistics or features over time. (Zhu, Abstract)

Regarding Claim 8, the combination of Fang and Zhu teaches the region extraction model creation support apparatus according to claim 1, and Fang further teaches wherein the processor is configured to receive designation of a region from which the boundary is to be detected in the result of the sharpening process (¶[0042], "In some embodiments, the acquired images may be sent to an annotation station 301 for annotating at least a subset of the images. In some embodiments, annotation station 301 may be operated by a user to provide human annotation. For example, the user may use keyboard, mouse, or other input interface of annotation station 301 to annotate the images, such as drawing boundary line of an object in the image, or identifying what anatomical structure the object is"; ¶[0042] discloses that a user can update the annotation in the image).

Regarding Claim 9, the combination of Fang and Zhu teaches the region extraction model creation support apparatus according to claim 1, and Fang further teaches wherein the image is a medical image (¶[0041], "In some embodiments, image acquisition device 205 may capture medical images containing at least one anatomical structure or organ, such as a lung or a thorax. For example, each volumetric CT exam may contain 51˜1094 CT slices with a varying slice-thickness from 0.5 mm to 3 mm. The reconstruction matrix may have 512×512 pixels with in-plane pixel spatial resolution from 0.29×0.29 mm.sup.2 to 0.98×0.98 mm.sup.2."; discloses that the image being acquired is a medical image).

Regarding Claim 10, the combination of Fang and Zhu teaches the region extraction model creation support apparatus according to claim 9, and Fang further teaches wherein the classes include a lung lobe (¶[0015], "FIG. 1 illustrates three exemplary segmented images of a lung region.").

Regarding Claims 11 and 12, they are substantially similar to claim 1 and are rejected in the same manner, with the same art and reasoning applying.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US PG-Pub US 20220301156 A1) in view of Zhu et al. ("SEGMENTATION WITH RESIDUAL ATTENTION U-NET AND AN EDGE-ENHANCEMENT APPROACH PRESERVES CELL SHAPE FEATURES") and in further view of Kim et al. ("Mumford–Shah Loss Functional for Image Segmentation With Deep Learning").
Regarding Claim 3, while the combination of Fang and Zhu teaches the region extraction model creation support apparatus according to claim 2, they do not explicitly teach wherein the processor is configured to: further add a size loss corresponding to sizes of the regions of the plurality of classes to the first total loss to obtain a second total loss, and update the region extraction model in a direction in which the second total loss is reduced.

Kim teaches wherein the processor is configured to further add a size loss corresponding to sizes of the regions of the plurality of classes to the first total loss to obtain a second total loss (Page 1860, Section 2, "In the Absence of Semantic Label," Paragraph 1, which discloses determining a loss between pixel values in each region of the image to obtain a total loss) and update the region extraction model in a direction in which the second total loss is reduced (Page 1865, Section VI, Conclusion, Paragraph 1: "The main motivation for the new loss function was the novel observation that the softmax layer output has striking similarity to the characteristic function for Mumford-Shah functional for image segmentation so that the Mumford-Shah functional can be minimized using a neural network. Thanks to the self-supervised nature of the loss, a neural network could be trained to learn the segmentation of specific regions with or without small labeled data."; in this section, the neural network is updated based on reducing the calculated loss).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Fang and Zhu with Kim in order to determine different losses between the pixel values in each region. One skilled in the art would have been motivated to modify Fang and Zhu in this manner in order to use a novel loss function based on the Mumford-Shah functional in deep-learning-based image segmentation without or with small labeled data. (Kim, Abstract)

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US PG-Pub US 20220301156 A1) in view of Zhu et al. ("SEGMENTATION WITH RESIDUAL ATTENTION U-NET AND AN EDGE-ENHANCEMENT APPROACH PRESERVES CELL SHAPE FEATURES") and in further view of Arani et al. (US PG-Pub US 20220044116 A1).

Regarding Claim 4, while the combination of Fang and Zhu teaches the region extraction model creation support apparatus according to claim 1, they do not explicitly teach wherein the sharpening process is a process of applying a softmax function with temperature having a temperature parameter equal to or less than 1 to the final feature amount map or the probability distribution map.

Arani teaches wherein the sharpening process is a process of applying a softmax function with temperature having a temperature parameter equal to or less than 1 to the final feature amount map or the probability distribution map (¶[0021], "where σ is the softmax function, z.sub.e are the output logits and T is the temperature which is usually set to 1. Using a higher τ value produces a softer probability distribution over classes. The tuning parameter α∈[0, 1] controls the relative weightage between the two losses."; in this section, a softmax function has a temperature set to 1).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Fang and Zhu with Arani in order to apply a softmax function with a temperature parameter. One skilled in the art would have been motivated to modify Fang and Zhu in this manner in order to train a computer-implemented deep neural network with a dataset with annotated labels. (Arani, ¶[0002])

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US PG-Pub US 20220301156 A1) in view of Zhu et al. ("SEGMENTATION WITH RESIDUAL ATTENTION U-NET AND AN EDGE-ENHANCEMENT APPROACH PRESERVES CELL SHAPE FEATURES") and in further view of Song et al. (US PG-Pub US 20210241015 A1).

Regarding Claim 5, while the combination of Fang and Zhu teaches the region extraction model creation support apparatus according to claim 1, they do not explicitly teach wherein the sharpening process is a process of applying a soft argmax function to the final feature amount map or the probability distribution map. Song teaches this limitation (¶[0078], "In this way, the key points of the target are generated according to the region suggestion box and the key point response feature map and in combination with a scale adaptive soft-argmax operation, so that the effect of key point detection is improved, without being limited by the number of key points."; ¶[0078] discloses using a soft-argmax function to improve key point detection in a feature map).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Fang and Zhu with Song in order to apply a soft argmax function to a feature map. One skilled in the art would have been motivated to modify Fang and Zhu in this manner in order to achieve quick and accurate target key point detection. (Song, Abstract)

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Fang et al. (US PG-Pub US 20220301156 A1) in view of Zhu et al. ("SEGMENTATION WITH RESIDUAL ATTENTION U-NET AND AN EDGE-ENHANCEMENT APPROACH PRESERVES CELL SHAPE FEATURES") and in further view of Xie et al. ("Knowledge-based Collaborative Deep Learning for Benign-Malignant Lung Nodule Classification on Chest CT").

Regarding Claim 6, while the combination of Fang and Zhu teaches the region extraction model creation support apparatus according to claim 1, they do not explicitly teach wherein the sharpening process is a process of applying a sigmoid function having a gain equal to or greater than 1 to the final feature amount map or the probability distribution map.

Xie teaches wherein the sharpening process is a process of applying a sigmoid function having a gain equal to or greater than 1 to the final feature amount map or the probability distribution map (Page 995, Section C, KBC Submodel, Paragraph 1: "To adapt the ResNet-50 network to our benign-malignant nodule classification problem, we removed its last fully connected layer, and then added three fully connected layers with 2048, 1024 and 2 neurons, respectively. The weights of these three fully connected layers were randomly initialized by using Xaiver algorithm, and the activation function in the last layer was set to the sigmoid function"; in this section, a sigmoid function is used to classify the lesions in the image based on a probability).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the claimed invention as taught by Fang and Zhu with Xie in order to apply a sigmoid function on the feature map. One skilled in the art would have been motivated to modify Fang and Zhu in this manner in order to classify lung nodules with an adaptive weighting scheme learned during the error back propagation. (Xie, Abstract)

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAN D HOANG, whose telephone number is (571) 272-4344. The examiner can normally be reached Monday-Friday, 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JOHN M VILLECCO, can be reached at 571-272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HAN HOANG/
Examiner, Art Unit 2661
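As a supplement to the claim 1 and claim 7 discussion above, the recited "boundary length loss" (detect a boundary from the sharpened result, then take the average pixel value of the resulting boundary image) can be sketched as follows. This is an illustration of the claim language only, not the applicant's or any cited reference's implementation; the finite-difference boundary detector and array shapes are assumptions.

```python
import numpy as np

def boundary_length_loss(prob_map):
    """Hypothetical boundary length loss per the claim language:
    build a boundary image from a sharpened per-class probability map,
    then return the average of its pixel values.

    prob_map: (num_classes, H, W) sharpened per-class probabilities.
    A boundary pixel is one where class probability changes between
    neighbors, so a finite-difference gradient magnitude serves as a
    simple (assumed) boundary image.
    """
    dy = np.abs(np.diff(prob_map, axis=1)).sum(axis=0)  # vertical changes
    dx = np.abs(np.diff(prob_map, axis=2)).sum(axis=0)  # horizontal changes
    boundary_image = np.zeros(prob_map.shape[1:])
    boundary_image[:-1, :] += dy
    boundary_image[:, :-1] += dx
    # The loss is the mean of the boundary image: longer or fuzzier
    # boundaries yield a larger value, so minimizing it shortens them.
    return boundary_image.mean()

# A map with one crisp vertical boundary scores lower than a noisy,
# checkerboard-like map of the same size.
crisp = np.zeros((2, 4, 4))
crisp[0, :, :2] = 1.0
crisp[1, :, 2:] = 1.0
noisy = np.zeros((2, 4, 4))
noisy[0] = np.indices((4, 4)).sum(axis=0) % 2
noisy[1] = 1.0 - noisy[0]
assert boundary_length_loss(crisp) < boundary_length_loss(noisy)
```

Updating the model "in a direction in which the boundary length loss is reduced," as claim 1 recites, would then mean backpropagating a differentiable version of this quantity, which is the crux of the applicant's distinction from Zhu's ground-truth-based edge-enhancement weighting.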

Prosecution Timeline

Nov 14, 2023: Application Filed
Sep 18, 2025: Non-Final Rejection — §103
Dec 04, 2025: Response Filed
Feb 19, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602835
POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD
2y 5m to grant • Granted Apr 14, 2026
Patent 12602778
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
2y 5m to grant • Granted Apr 14, 2026
Patent 12602918
LEARNING DATA GENERATING APPARATUS, LEARNING DATA GENERATING METHOD, AND NON-TRANSITORY RECORDING MEDIUM HAVING LEARNING DATA GENERATING PROGRAM RECORDED THEREON
2y 5m to grant • Granted Apr 14, 2026
Patent 12592070
IMAGE PROCESSING APPARATUS
2y 5m to grant • Granted Mar 31, 2026
Patent 12586364
SINGLE IMAGE CONCEPT ENCODER FOR PERSONALIZATION USING A PRETRAINED DIFFUSION MODEL
2y 5m to grant • Granted Mar 24, 2026
Based on the examiner's 5 most recent grants in similar technology.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
Grant Probability With Interview: 93% (+19.3%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate

Based on 162 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month