Prosecution Insights
Last updated: April 19, 2026
Application No. 18/681,973

TYMPANUM IMAGE PROCESSING APPARATUS AND METHOD FOR GENERATING NORMAL TYMPANUM IMAGE BY USING MACHINE LEARNING MODEL TO OTITIS MEDIA TYMPANUM IMAGE

Status: Non-Final OA (§103)
Filed: Feb 07, 2024
Examiner: TRAN, JENNY NGAN
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: UNIVERSITY OF ULSAN FOUNDATION FOR INDUSTRY COOPERATION
OA Round: 2 (Non-Final)

Grant Probability: 20% (At Risk)
Expected OA Rounds: 2-3
Time to Grant: 2y 6m
Grant Probability With Interview: 70%

Examiner Intelligence

Career Allow Rate: 20% (1 granted / 5 resolved; -42.0% vs TC avg)
Interview Lift: +50.0% among resolved cases with an interview
Typical Timeline: 2y 6m average prosecution; 31 applications currently pending
Career History: 36 total applications across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 5 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-4 and 6-11 are currently pending in the present application, with claims 1, 10, and 11 being independent.

Response to Amendments / Arguments

Applicant's arguments, see Pg. 9, filed 12/12/2025, with respect to the Abstract have been fully considered and are persuasive. The objection to the Abstract has been withdrawn.

Applicant's arguments, see Pg. 9, filed 12/12/2025, with respect to claims 1-11 have been fully considered and are persuasive. The 35 U.S.C. § 112(b) rejection of claims 1-11 has been withdrawn.

Applicant's arguments, see Pg. 9-12, filed 12/12/2025, with respect to the rejection(s) of claim(s) 1-11 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of newly found prior art.

Applicant's arguments, see Pg. 11, filed 12/12/2025, have been fully considered but they are not persuasive. Applicant argues that Douglas (US 20150065803) "does not generate an image in response to a case in which a ratio of region occluded by earwax is less than a threshold ratio compared to a region of the entire tympanum in the target image", and applicant further asserts that simply mentioning a ratio is not enough to teach or suggest the limitations.

Examiner replies: Douglas expressly discloses detecting obstruction caused by wax within tympanic-membrane imagery, and extracting image features associated with wax and other outer-ear structures (see Douglas Par. 0039-0045; indicate when the image is obstructed (e.g., by wax, foreign body, etc.)…detecting at least a portion of a tympanic membrane from an image of a subject's ear canal…indicate an occlusion of an ear canal from the image. Par. 0214; color features may be extracted…Color features may include the mean, standard deviation, or other statistics of any color property associated with the TM or other outer ear structures, such as the ear canal, wax, hair, etc.), and therefore establishes identification of a "region occluded by earwax". Douglas further discloses threshold-based determinations regarding tympanic-membrane visibility and image adequacy, including evaluating whether a sufficient portion of the tympanic membrane is present (see Douglas Par. 0036; guiding a subject to take an image may examine images (digital images) of a patient's ear canal being taken by the user, e.g., operating an otoscope to determine when a minimum amount of tympanic membrane is showing (e.g., more than 20%, more than 25%, more than 30%, more than 35%, more than 40%, more than 45%, more than 50%, more than 55%, more than 60%, more than 65%, more than 70%, more than 75%, more than 80%, more than 85%, more than 90%, more than 95%, etc.) in the image. Par. 0329; Thresholds to determine when an obstruction warrants ending the exam can be determined via a standard machine learning method, wherein the training set consists of a set of otoscopic exams and labels, where the labels indicate which exams have dangerous obstructions and which do not), and such threshold-based evaluation necessarily entails "a ratio of a region occluded by earwax is less than a threshold ratio compared to a region of the entire tympanum in the target image". Under broadest reasonable interpretation, the claimed "ratio of region occluded by earwax is less than a threshold ratio compared to a region of the entire tympanum in the target image" reads on Douglas's quantitative obstruction/visibility determinations derived from segmentation and threshold analysis (Douglas Par. 0036-0045, Par. 0214, and Par. 0329).

Additionally, per MPEP 2111.04 (II), the limitation reciting an action "in response to a case", a stated condition, is a contingent limitation that does not require the prior art to perform the action only when the condition occurs; rather, the limitation is satisfied when the prior art is capable of performing the recited action when the stated condition is met. Douglas's system detects wax-based obstruction (Par. 0039-0045), evaluates obstruction using quantitative thresholds (Par. 0036 and 0329), and performs image processing, guidance, and diagnostic operations based on those determinations (Par. 0036; indicate that an adequate image has been taken, and/or may automatically start sending, transmitting, and/or analyzing the image(s). Par. 0063; extracting a plurality of image features from a first image of a subject's tympanic membrane, wherein the image features include color and texture data; combining the extracted features into a feature vector for the first image; applying the feature vector to a trained classification model to identify a probability of each of a plurality of different diseases; indicating the probability of each of a plurality of different diseases). Accordingly, Douglas is capable of generating an image when the obstruction level satisfies a threshold condition, which meets the claimed contingent "in response to a case" language under MPEP 2111.04 (II).

Applicant's arguments with respect to the second machine learning model are found persuasive; see the Non-Final Rejection below. Regarding the remaining arguments: Applicant argues with respect to the amended claim language, which is fully addressed in the prior art rejections set forth below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Douglas et al. (US 20150065803), hereinafter referred to as "Douglas", in view of Siddiquee et al., "Learning fixed points in generative adversarial networks: From image-to-image translation to disease detection and localization," Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 191-200, 2019, hereinafter referred to as "Siddiquee".

Regarding claim 1, Douglas discloses an apparatus for processing a tympanum image (Abstract; methods and apparatuses for assisting the acquisition and analysis of images of the tympanic membrane), the apparatus comprising: a processor configured to extract, from a tympanum image (Par. 0031; processor to: receive an image of the subject's ear canal…extract, at a plurality of different scales, a set of feature values for each of the subregions…identify, on a representation of the image, a tympanic membrane region of the image. Par. 0171; feature extraction consists of extracting a machine-readable set of image properties ("features")), a tympanum outline of the tympanum image (Par. 0029; identifying the tympanic membrane region may include any appropriate identification, including visual (e.g., identifying a tympanic membrane region from the image on a representation of the image by circling/outlining, highlighting, coloring, etc.). Par. 0172; TM segmentation map. Par. 0176; image contains an outlined contour of the TM to yield the TM segmentation map) and an earwax region of the tympanum image (Par. 0036; indicate when the image is obstructed (e.g., by wax, foreign body, etc.)…Par. 0214; color features may be extracted…color features may include…any color property associated with the TM or other outer ear structures, such as the ear canal, wax, hair, etc. Par. 0172; segmentation likelihood map. Par. 0176; pathological TMs (e.g., acute otitis media, otitis media with effusion, foreign bodies, etc.)) using a first machine learning model (Par. 0172-0176; TM segmentation consists of pixel-based machine learning…supervised machine learning and the model…To generate training data, the user generates a training set of images…each image contains an outlined contour of the TM to yield the TM segmentation mask…The image subjects may include healthy TMs, pathological TMs (e.g., acute otitis media, otitis media with effusion, foreign bodies, etc.), or images that do not contain a TM.), obtain a target image for an entire tympanum (Par. 0029; a separate image including just the extracted tympanic membrane may be generated), a tympanum outline of the target image (Par. 0029; In general, identifying the tympanic membrane region may include any appropriate identification, including visual (e.g., identifying a tympanic membrane region from the image on a representation of the image by circling/outlining, highlighting, coloring, etc.)…or setting one or more registers associated with an image to indicate that the image includes a tympanic membrane, or portion of a tympanic membrane (e.g., above a threshold minimum amount of tympanic membrane region)), and an earwax region of the target image based on the tympanum outline of the tympanum image (Par. 0039; indicate when the image is obstructed (e.g., by wax, foreign body, etc.). Par. 0045; indicate an occlusion of an ear canal from the image), in response to a case in which a ratio of a region occluded by earwax (Douglas Par. 0039-0045; indicate when the image is obstructed (e.g., by wax, foreign body, etc.)…detecting at least a portion of a tympanic membrane from an image of a subject's ear canal…indicate an occlusion of an ear canal from the image. Par. 0214; color features may be extracted…Color features may include the mean, standard deviation, or other statistics of any color property associated with the TM or other outer ear structures, such as the ear canal, wax, hair, etc.) is less than a threshold ratio compared to a region of the entire tympanum in the target image (Douglas Par. 0036; guiding a subject to take an image may examine images (digital images) of a patient's ear canal being taken by the user, e.g., operating an otoscope to determine when a minimum amount of tympanic membrane is showing (e.g., more than 20%, more than 25%, more than 30%, more than 35%, more than 40%, more than 45%, more than 50%, more than 55%, more than 60%, more than 65%, more than 70%, more than 75%, more than 80%, more than 85%, more than 90%, more than 95%, etc.) in the image. Par. 0329; Thresholds to determine when an obstruction warrants ending the exam can be determined via a standard machine learning method, wherein the training set consists of a set of otoscopic exams and labels, where the labels indicate which exams have dangerous obstructions and which do not).

Examiner's note: Additionally, as recited in the examiner's response to applicant's arguments, per MPEP 2111.04 (II), the limitation reciting an action "in response to a case", a stated condition, is a contingent limitation that does not require the prior art to perform the action only when the condition occurs; rather, the limitation is satisfied when the prior art is capable of performing the recited action when the stated condition is met. Douglas's system detects wax-based obstruction (Par. 0039-0045), evaluates obstruction using quantitative thresholds (Par. 0036 and 0329), and performs image processing, guidance, and diagnostic operations based on those determinations (Par. 0036; indicate that an adequate image has been taken, and/or may automatically start sending, transmitting, and/or analyzing the image(s). Par. 0063; extracting a plurality of image features from a first image of a subject's tympanic membrane, wherein the image features include color and texture data; combining the extracted features into a feature vector for the first image; applying the feature vector to a trained classification model to identify a probability of each of a plurality of different diseases; indicating the probability of each of a plurality of different diseases). Accordingly, Douglas is capable of generating an image when the obstruction level satisfies a threshold condition, which meets the claimed contingent "in response to a case" language under MPEP 2111.04 (II).
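The threshold condition at the center of this dispute is easy to state concretely. Below is a minimal, hypothetical sketch (not code from the application or from Douglas) of the occlusion-ratio test, assuming the first model yields boolean segmentation masks for the tympanum outline and the earwax region; the 0.2 default echoes Douglas's "more than 20%" example but is otherwise an arbitrary illustrative value.

```python
import numpy as np

def occlusion_ratio(tympanum_mask: np.ndarray, earwax_mask: np.ndarray) -> float:
    """Ratio of the earwax-occluded area to the entire tympanum area.

    Both inputs are boolean masks of the same shape; the earwax mask is
    restricted to pixels inside the tympanum region before measuring.
    """
    tympanum_area = tympanum_mask.sum()
    if tympanum_area == 0:
        return 1.0  # no visible tympanum: treat as fully occluded
    occluded = np.logical_and(tympanum_mask, earwax_mask).sum()
    return occluded / tympanum_area

def should_generate_image(tympanum_mask, earwax_mask, threshold=0.2) -> bool:
    """Contingent step: generate only when occlusion is below the threshold."""
    return occlusion_ratio(tympanum_mask, earwax_mask) < threshold
```

Under MPEP 2111.04 (II), as the examiner notes, a system merely capable of this comparison when the condition is met can satisfy the contingent limitation.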
Douglas does not disclose generate a transformed image in which an abnormal region of the target image is changed to a normal region by inputting the target image to a second machine learning model, the abnormal region corresponding to a region having a disease or an abnormality related to the tympanum, and a display configured to display at least one of the transformed image and the target image so that a tympanum region of the transformed image is aligned at a position corresponding to a position of a tympanum region of the target image.

In the same art of medical image analysis, Siddiquee discloses generate a transformed image in which an abnormal region of the target image is changed to a normal region by inputting the target image to a second machine learning model, the abnormal region corresponding to a region having a disease or an abnormality related to the tympanum (Pg. 192, Fig. 2 description; translate any image, diseased or healthy, into a healthy image, allowing diseased regions to be revealed by subtracting those two images…Section 1; GAN trained for virtual healing aims to turn any image, with unknown health status, into a healthy one. Pg. 197, Section 4.2, Left Column; The desired GAN behaviour is to translate diseased images to healthy ones while keeping healthy images intact…), and a display configured to display at least one of the transformed image and the target image so that a tympanum region of the transformed image is aligned at a position corresponding to a position of a tympanum region of the target image (Pg. 192, Fig. 2. Pg. 197, Section 4.2, Left Column; having translated the images into the healthy domain, we then detect the presence and location of a lesion in the difference image by subtracting the translated healthy image from the input image… Fig. 4b…ResNet-50-CAM at 32x32 resolution).
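Siddiquee's localization step cited above (subtracting the translated "healthy" image from the input) reduces to simple array arithmetic. A hedged illustration, in which `healed_img` stands in for the output of the second (generative) model and `tau` is an arbitrary illustrative threshold, not a value from either reference:

```python
import numpy as np

def localize_lesion(input_img, healed_img, tau=0.3):
    """Fixed-Point-GAN-style localization: the absolute difference between
    the input and its "healed" translation highlights candidate diseased
    pixels; thresholding that difference yields a lesion mask."""
    diff = np.abs(np.asarray(input_img, dtype=np.float32)
                  - np.asarray(healed_img, dtype=np.float32))
    if diff.ndim == 3:              # average colour channels into one map
        diff = diff.mean(axis=-1)
    return diff, diff > tau
```

Because the translated image is produced from the input itself, the two are already pixel-aligned, which is what makes the subtraction (and the claimed aligned display) meaningful.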
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Siddiquee's medical-image transformation model into Douglas's tympanic-image analysis system. Doing so allows improved analysis and visualization of tympanic abnormalities for users, patients, and clinicians. Douglas already detects tympanic membrane disease and evaluates image adequacy based on wax-occlusion thresholds; therefore, applying a known ML-based image-to-image translation would predictably allow clinicians to compare diseased regions with normalized anatomy, yielding predictable results in enhancing diagnosis and assessment using well-known medical imaging reconstruction techniques.

Regarding claim 3, Douglas in view of Siddiquee discloses the apparatus of claim 1, and further discloses wherein the processor is configured to: determine whether the tympanum image is about an entire tympanum based on the tympanum outline of the tympanum image (Douglas Par. 0029; identifying a tympanic membrane region from the image on a representation of the image by circling/outlining, highlighting, coloring, etc.…portion of a tympanic membrane (e.g., above a threshold minimum amount of tympanic membrane region). Par. 0036; determine when a minimum amount of tympanic membrane is showing (e.g., more than 20%, more than 25%, more than 30%, more than 35%, more than 40%, more than 45%, more than 50%, more than 55%, more than 60%, more than 65%, more than 70%, more than 75%, more than 80%, more than 85%, more than 90%, more than 95%, etc.) in the image…), and determine the target image based on the tympanum image in response to determining that the tympanum image is about an entire tympanum (Douglas Par. 0037; detecting at least a portion of the tympanic membrane from the image may comprise…estimating, for each individual subregion within the plurality of subregions, a probability that the individual subregion is part of a tympanic membrane based on the extracted sets of feature values for the individual subregion. Par. 0168-0189; TM Detection (Segmentation)). Douglas and Siddiquee are combined for the reason set forth above with respect to claim 1.

Regarding claim 4, Douglas in view of Siddiquee discloses the apparatus of claim 1, and further discloses wherein the processor is configured to: determine whether the tympanum image is about an entire tympanum based on the tympanum outline of the tympanum image (Douglas Par. 0029; identifying a tympanic membrane region from the image on a representation of the image by circling/outlining, highlighting, coloring, etc.…portion of a tympanic membrane (e.g., above a threshold minimum amount of tympanic membrane region). Par. 0036; determine when a minimum amount of tympanic membrane is showing (e.g., more than 20%, more than 25%, more than 30%, more than 35%, more than 40%, more than 45%, more than 50%, more than 55%, more than 60%, more than 65%, more than 70%, more than 75%, more than 80%, more than 85%, more than 90%, more than 95%, etc.) in the image…), obtain an additional tympanum image in response to determining that the tympanum image is about a portion of a tympanum (Douglas Par. 0040-0049; detecting one or more deeper regions in the image…indicating a direction to orient the otoscope based on the detected one or more deeper regions; determining if the image includes a tympanic membrane…indicating when an image of the tympanic membrane has been taken…detecting one or more deeper regions comprises: determining a field of view for the image), extract a tympanum outline of the additional tympanum image and an earwax region of the additional tympanum image from the additional tympanum image (Douglas Par. 0043; extracting a set of feature values from a plurality of subregions from the image, estimating, for each subregion, a probability that the subregion is part of a tympanic membrane based on the extracted sets of feature values; and indicating when an image of the tympanic membrane has been taken) using the first machine learning model (Par. 0047; using a trained model to determine if the one or more regions are deeper regions in an ear canal), update a temporary image by stitching the additional tympanum image to the tympanum image (Douglas Par. 0267-0269; one can capture multiple sections of the anatomical part of interest and then "stitch" them together into a composite image (i.e., a "panorama" or "mosaic")…otoscopic TM stitching is shown in FIG. 21), determine whether the temporary image is about an entire tympanum based on a tympanum outline of the temporary image (Douglas FIG. 22-23 and Par. 0269; modified feature detection or feature matching to allow for matches between smooth frames, e.g., those containing a healthy tympanic membrane; and/or alternate methods of matching frames, e.g., via optical flow for videos), and determine the target image based on the temporary image in response to determining that the temporary image is about an entire tympanum (Douglas Par. 0267-0269; composite image. Par. 0037; detecting at least a portion of the tympanic membrane from the image may comprise…estimating, for each individual subregion within the plurality of subregions, a probability that the individual subregion is part of a tympanic membrane based on the extracted sets of feature values for the individual subregion. Par. 0168-0189; TM Detection (Segmentation)). Examiner's note: the composite image is used as the input image to perform the steps of determining the target image. Douglas and Siddiquee are combined for the reason set forth above with respect to claim 1.
Regarding claim 6, Douglas in view of Siddiquee discloses the apparatus of claim 1, and further discloses wherein the processor is configured to: calculate an objective function value (Douglas Par. 0175-0177; 0 to 1) between a temporary output image, which is generated by applying the second machine learning model (Douglas Par. 0176; supervised machine learning) to a training abnormal tympanum image, and a ground truth tympanum image (Douglas Par. 0175; To generate training data, the user generates a training set of images, which may be frames from one or more videos, where each image contains an outlined contour of the TM to yield the TM segmentation mask…The image subjects may include healthy TMs, pathological TMs (e.g., acute otitis media, otitis media with effusion, foreign bodies, etc.)…TM segmentation maps may be provided as a "gold standard" output for the images.), and repeatedly update a parameter of the second machine learning model so that the calculated objective function value converges (Douglas Par. 0175; further training/updating may be performed…extracted features may be fed to a machine learning method for classification. Par. 0185; The features may then be applied to an implementation of a pre-trained supervised machine learning classification model (e.g., a support vector machine or random forest model) included as part of the apparatus to predict whether the given pixel is part of the tympanic membrane or not. The output may be a "probability image," which consists of a probability from 0 to 1 that each processed pixel is part of the tympanic membrane.). Douglas and Siddiquee are combined for the reason set forth above with respect to claim 1.
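Claim 6's "repeatedly update a parameter ... so that the calculated objective function value converges" is the standard iterative training loop. A toy sketch with a single weight and a squared-error objective, purely illustrative (the claimed second model is a generative network, not a one-parameter regressor; the learning rate and tolerance are arbitrary):

```python
def train_until_converged(xs, ys, lr=0.05, tol=1e-8, max_steps=10000):
    """Gradient descent on one weight w for the model y ≈ w*x, stopping
    when the mean-squared-error objective stops changing (converges)."""
    w = 0.0
    prev = float("inf")
    loss = prev
    for _ in range(max_steps):
        loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
        if abs(prev - loss) < tol:      # objective value has converged
            break
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad                  # update the parameter
        prev = loss
    return w, loss
```

The same stop-when-stable pattern applies whether the objective is pixel-wise MSE, an adversarial loss, or a combination, as in GAN training.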
Regarding claim 7, Douglas in view of Siddiquee discloses the apparatus of claim 1, and further discloses wherein the processor is configured to repeatedly update a parameter of the first machine learning model so that an objective function value between temporary output data comprising a tympanum outline (Douglas Par. 0176; outlined contour of the TM) and an earwax region (Douglas, color feature associated with wax in Par. 0214) extracted using the first machine learning model from a training tympanum image and ground truth data converges (Douglas Par. 0175; further training/updating may be performed…extracted features may be fed to a machine learning method for classification. Par. 0185; The features may then be applied to an implementation of a pre-trained supervised machine learning classification model (e.g., a support vector machine or random forest model) included as part of the apparatus to predict whether the given pixel is part of the tympanic membrane or not. The output may be a "probability image," which consists of a probability from 0 to 1 that each processed pixel is part of the tympanic membrane). Douglas and Siddiquee are combined for the reason set forth above with respect to claim 1.

Regarding claim 8, Douglas in view of Siddiquee discloses the apparatus of claim 1, and further discloses wherein the processor is configured to provide an earwax removal guide (Douglas Par. 0039; additional guide by providing one or more directions, including directions on a display screen showing the images, audible cues, textual cues, or the like…when the image is obstructed (e.g., by wax, foreign body, etc.). Par. 0362; user can rotate and move the mobile device…simulated pathologies…such as removal of wax) in response to a case in which the ratio of the region occluded by earwax (Douglas Par. 0039-0045; indicate when the image is obstructed (e.g., by wax, foreign body, etc.)…detecting at least a portion of a tympanic membrane from an image of a subject's ear canal…indicate an occlusion of an ear canal from the image. Par. 0214; color features may be extracted…Color features may include the mean, standard deviation, or other statistics of any color property associated with the TM or other outer ear structures, such as the ear canal, wax, hair, etc.) is greater than or equal to the threshold ratio compared to the region of the entire tympanum in the target image (Douglas Par. 0036; guiding a subject to take an image may examine images (digital images) of a patient's ear canal being taken by the user, e.g., operating an otoscope to determine when a minimum amount of tympanic membrane is showing (e.g., more than 20%, more than 25%, more than 30%, more than 35%, more than 40%, more than 45%, more than 50%, more than 55%, more than 60%, more than 65%, more than 70%, more than 75%, more than 80%, more than 85%, more than 90%, more than 95%, etc.) in the image. Par. 0329; Thresholds to determine when an obstruction warrants ending the exam can be determined via a standard machine learning method, wherein the training set consists of a set of otoscopic exams and labels, where the labels indicate which exams have dangerous obstructions and which do not), and the display is configured to display the target image and the earwax removal guide (Douglas Par. 0297-0329; guidance system…images (or a subset of images) being received and/or displayed, to identify a TM…indication is displayed to the user). Douglas and Siddiquee are combined for the reason set forth above with respect to claim 1.

Regarding claim 9, Douglas in view of Siddiquee discloses the apparatus of claim 1, and further discloses wherein the processor is configured to select one tympanum image (Douglas Par. 0056; the subject/user may be allowed to select one of the plurality of similar tympanic membrane images) from among a plurality of normal tympanum images (Douglas Par. 0011; provide similar images (including time course images) from a database of such images, particularly where the database images are associated with related images and/or diagnosis/prognosis information.), based on at least one of age, gender, and race of a user (Douglas Par. 0011; database of similar images (for which one or more clinical identifiers may be associated). Par. 0221-0224; clinical information may include, but is not limited to: Age, Race, Sex…clinical information falls into one of two possible categories: numerical or categorical, which can be treated differently when converting them to features that are suitable for machine learning…) in response to a case in which the ratio of the region occluded by earwax (Douglas Par. 0039-0045; indicate when the image is obstructed (e.g., by wax, foreign body, etc.)…detecting at least a portion of a tympanic membrane from an image of a subject's ear canal…indicate an occlusion of an ear canal from the image. Par. 0214; color features may be extracted…Color features may include the mean, standard deviation, or other statistics of any color property associated with the TM or other outer ear structures, such as the ear canal, wax, hair, etc.) is greater than or equal to the threshold ratio compared to the region of the entire tympanum in the target image (Douglas Par. 0036; guiding a subject to take an image may examine images (digital images) of a patient's ear canal being taken by the user, e.g., operating an otoscope to determine when a minimum amount of tympanic membrane is showing (e.g., more than 20%, more than 25%, more than 30%, more than 35%, more than 40%, more than 45%, more than 50%, more than 55%, more than 60%, more than 65%, more than 70%, more than 75%, more than 80%, more than 85%, more than 90%, more than 95%, etc.) in the image. Par. 0329; Thresholds to determine when an obstruction warrants ending the exam can be determined via a standard machine learning method, wherein the training set consists of a set of otoscopic exams and labels, where the labels indicate which exams have dangerous obstructions and which do not), and the display is configured to display the one tympanum image and the target image by aligning a tympanum region of the one tympanum image at a position corresponding to the position of the tympanum region of the target image (Douglas Par. 0053; displaying an image of a tympanic membrane may include: extracting a plurality of image features from a first image of a subject's tympanic membrane, wherein the image features include color and texture data; combining the extracted features into a feature vector for the first image; identifying a plurality of similar tympanic membrane images from a database of tympanic membrane images by comparing the feature vector for the first image to feature vectors for images in the database of tympanic membrane images; displaying (e.g., concurrently) the first image and the plurality of similar tympanic membrane images and indicating the similarity of each of the similar tympanic membrane images to the first image). Douglas and Siddiquee are combined for the reason set forth above with respect to claim 1.

Regarding claim 10, claim 10 is the method claim of apparatus claim 1, and is accordingly rejected using substantially similar rationale as to that which is set forth with respect to claim 1.

Regarding claim 11, claim 11 is the CRM claim (Douglas Par. 0167; …non-transitory computer-readable storage medium as a set of instructions capable of being executed by a processor…) of apparatus claim 1, and is accordingly rejected using substantially similar rationale as to that which is set forth with respect to claim 1.
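Taken together, claims 1, 8, and 9 describe control flow that branches on the occlusion ratio: below the threshold, generate; at or above it, guide earwax removal and fall back to a demographically matched normal image. A hypothetical sketch of that branching (the record structure, field names such as `age_band`, and the match scoring are invented for illustration, not taken from the claims or references):

```python
def process_target_image(occlusion_ratio, threshold, reference_db, user):
    """Below the threshold (claim 1): hand the target image to the
    generative model. At or above it (claims 8-9): show an earwax-removal
    guide and select one normal reference image matching the user's age,
    gender, and race."""
    if occlusion_ratio < threshold:
        return {"action": "generate_normal_image"}

    def demographic_match(entry):
        # count matching demographic attributes (claim 9's selection basis)
        return sum(entry[k] == user[k] for k in ("age_band", "gender", "race"))

    best = max(reference_db, key=demographic_match)
    return {"action": "show_earwax_removal_guide", "reference_image": best["id"]}
```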
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNY NGAN TRAN, whose telephone number is (571) 272-6888. The examiner can normally be reached Mon-Thurs, 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNY N TRAN/
Examiner, Art Unit 2615

/ALICIA M HARRINGTON/
Supervisory Patent Examiner, Art Unit 2615

Prosecution Timeline

Feb 07, 2024 — Application Filed
Oct 03, 2025 — Non-Final Rejection (§103)
Dec 12, 2025 — Response Filed
Feb 11, 2026 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499589 — SYSTEMS AND METHODS FOR IMAGE GENERATION VIA DIFFUSION
Granted Dec 16, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 20% (70% with interview, a +50.0% lift)
Median Time to Grant: 2y 6m
PTA Risk: Moderate

Based on 5 resolved cases by this examiner. Grant probability derived from career allow rate.
