Prosecution Insights
Last updated: April 19, 2026
Application No. 18/603,404

MEDICAL IMAGE AND TEXT PROCESSING METHOD AND APPARATUS

Non-Final OA: §102, §112
Filed
Mar 13, 2024
Examiner
HUYNH, VAN D
Art Unit
2665
Tech Center
2600 — Communications
Assignee
Canon Medical Systems Corporation
OA Round
1 (Non-Final)
87%
Grant Probability
Favorable
1-2
OA Rounds
2y 6m
To Grant
99%
With Interview

Examiner Intelligence

Grants 87% — above average
87%
Career Allow Rate
630 granted / 721 resolved
+25.4% vs TC avg
+13.4%
Interview Lift
Moderate lift — resolved cases with vs. without interview
Typical timeline
2y 6m
Avg Prosecution
25 currently pending
Career history
746
Total Applications
across all art units

Statute-Specific Performance

§101
8.8%
-31.2% vs TC avg
§103
32.0%
-8.0% vs TC avg
§102
30.9%
-9.1% vs TC avg
§112
14.2%
-25.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 721 resolved cases
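The per-statute deltas above can be sanity-checked with simple arithmetic. Assuming each "vs TC avg" figure is a plain percentage-point difference (an assumption; the page does not define it), every statute back-solves to the same implied Tech Center baseline:

```python
# Per-statute figures from the dashboard: (this examiner's rate,
# stated delta vs. the Tech Center average), both in percent.
stats = {
    "101": (8.8, -31.2),
    "103": (32.0, -8.0),
    "102": (30.9, -9.1),
    "112": (14.2, -25.8),
}

# Assumption: delta = rate - TC average, so the implied TC average
# is rate - delta for each statute.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
# all four statutes imply the same 40.0% Tech Center baseline
```

All four statutes resolve to a single 40.0% baseline, which is consistent with the chart legend's single "Tech Center average estimate" line.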

Office Action

§102 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 15 is objected to because of the following informalities: Claim 15, line 3 is missing a period (.).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

Claim 1 recites the limitation "the image" in line 4. There is insufficient antecedent basis for this limitation in the claim. Examiner suggests replacing "the image" with --the image data--.

Claim 20 recites the limitation "the image" in line 3. There is insufficient antecedent basis for this limitation in the claim. Examiner suggests replacing "the image" with --the image data--.

Claims 2-19 are also rejected based on their dependency from the defective parent claim 1 above.

Regarding claims 4-5, 11, 13, and 19, the phrase "for example" renders the claim indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fink et al., US 2017/0262583.

Regarding claim 1, Fink discloses an apparatus (fig. 2, element 212; para 0016; a computer system), comprising: processing circuitry (fig. 2, element 216; para 0016; a processing unit) configured to: obtain image data representing one or more medical images of a region of interest and processing said image data to identify one or more abnormal portions of the image by applying a first pre-determined model (fig. 1, element 102; para 0011; server(s) 102 (i.e., first model) may use NLP rules and dictionaries 108) to the obtained image data (fig. 4, elements 406-408; para 0031; an image of a subject's medical condition may be received (act 406) and may be analyzed, using conventional image analysis techniques, to extract image characteristics (act 408)); obtain medical text data corresponding to the one or more medical images of the region of interest and process said medical text data to identify one or more entities and their associated attributes by applying at least one further pre-determined model (fig. 1, element 102; para 0011; server(s) 102 (i.e., second model) may use NLP rules and dictionaries 108) to the obtained medical text data (fig. 4, elements 402-404; para 0030; server(s) 102 inputting textual input 112 (act 402). Textual input 112 may include unstructured textual input including, but not limited to, doctors' notes, subject's notes, social media messages, email and text messages. Server(s) 102 may use NLP rules and dictionaries 108 to extract medical condition descriptions from textual input 112 (act 404)); and perform a matching process between the one or more identified abnormal portions and the one or more identified entities to obtain matched data comprising groupings of at least one identified abnormal portions and at least one entity (fig. 4, elements 410-414; para 0031; server(s) 102 may correlate (i.e., matching) the extracted medical condition descriptions with the image characteristics (act 410) to produce a patient or subject signature (act 412). The subject signature may be compared with each of a number of reference signatures to determine one or more closest matching reference signatures (act 414). A match score may be computed. The one or more closest matching reference signatures may be determined by the match score, which may be based on computing a distance of a feature vector of the subject signature from a corresponding feature vector of each of the reference signatures. The one or more closest matching reference signatures have a minimum distance with respect to the subject signature), wherein the matching process is based on at least one or more properties of the identified abnormal portions of the image data and at least one or more of the attributes associated with the identified entities (fig. 4, elements 410-412; para 0026-0027 and 0031; server(s) 102 may correlate the extracted medical condition descriptions with the image characteristics (act 410) to produce a patient or subject signature (act 412); An example signature could be a description of a cancerous mole on the skin collocated with an image of the ailment. "Irregular shaped", "darkened skin", "surrounded by slightly reddish irritation area" could all appear in the text surrounding the image, along with text unrelated to the image).

Regarding claim 2, the apparatus of claim 1, Fink further discloses wherein the matching process comprises generating at least one representation of the identified abnormal portions and at least one further representation of the identified entities and obtaining a score representative of a degree of match between the first and second representations and matching the abnormal portions to at least one of the identified entities based on the obtained score (para 0031).

Regarding claim 3, the apparatus of claim 1, Fink further discloses wherein the processing circuitry is further configured to store the matched data and/or use the matched data to train at least one further model and/or generate training data for at least one model using said matched data (fig. 3, element 312; para 0026).

Regarding claim 4, the apparatus of claim 1, Fink further discloses wherein identifying the one or more abnormal portions comprises identifying portions in comparison to a pre-determined or learned distribution for healthy and/or normal data based on a difference between a spatial distribution of the medical image and a pre-determined normal distribution, for example, at a pixel or voxel level and/or as represented by a heatmap (fig. 4, element 414; para 0031).

Regarding claim 5, the apparatus of claim 1, Fink further discloses wherein the identification of the one or more abnormal regions used a pixel or voxel level approach, for example, thresholding, morphology or connected components based approach (para 0028 and 0031).
Regarding claim 6, the apparatus of claim 1, Fink further discloses wherein obtaining the entity and their attributes comprises applying at least one first model to the text data to identify said entities and applying at least a second model to obtain the attributes associated with the entities (para 0022 and 0030).

Regarding claim 7, the apparatus of claim 1, Fink further discloses wherein the matching process comprises determining a degree of match based on similarity and/or a consistency between properties of the one or more abnormal portions and attributes of the one or more entities (para 0026-0027 and 0031).

Regarding claim 8, the apparatus of claim 1, Fink further discloses wherein the matching process comprises determining a similarity function or other measure of distance between mathematical representations of the one or more properties of the abnormal image portions and the attributes of the identified entities and their attributes (para 0026-0027 and 0031-0032).

Regarding claim 9, the apparatus of claim 1, Fink further discloses wherein the matching process is based on pre-determined relationships between the one or more properties and the one or more attributes (para 0026-0027 and 0031).

Regarding claim 10, the apparatus of claim 1, Fink further discloses wherein the matching process is based on minimizing or otherwise optimizing a matching function, the matching function comprising a term representing similarity between the identified abnormal regions and/or a term penalizing a variation of abnormal image portions assigned to the same class of entities (para 0031).

Regarding claim 11, the apparatus of claim 1, Fink further discloses wherein the matching process comprises performing an optimization process, for example, the Jonker-Volgenant algorithm applied to solve as multiple linear assignment problems (para 0031).

Regarding claim 12, the apparatus of claim 1, Fink further discloses wherein the processing resource is further configured to retrain and/or refine the first and/or the second model using the obtained matched data (fig. 3; para 0022-0029).

Regarding claim 13, the apparatus of claim 1, wherein the processing resource is further configured to display the matched data, for example, the groupings of one or more abnormal portions and entities to a user and obtaining further user input representing a user evaluation of the matched data as part of a further training process or as part of generation of training data (fig. 4, element 416; para 0033-0034).

Regarding claim 14, the apparatus of claim 1, Fink further discloses wherein the first and/or the second model comprises a deep learning or other artificial neural network based model (para 0026).

Regarding claim 15, the apparatus of claim 1, Fink further discloses applying a principle component analysis or other feature reduction procedure to a larger set of features, and wherein at least part of the matching process is applied to the reduced set of features (fig. 4, element 410; para 0031).

Regarding claim 16, the apparatus of claim 1, Fink further discloses wherein the medical image data comprises 1D, 2D, 3D, or 4D data, and/or wherein the medical image data comprises at least one of: CT, MRI, fluoroscopy, ultrasound data, or medical imaging data obtained using another modality; ECG data or other medical measurement data; volumetric data or slice data; or time series data (fig. 1, element 106; para 0022 and 0025).

Regarding claim 17, the apparatus of claim 1, Fink further discloses wherein the one or more properties of the identified abnormal region comprise at least one of: intensity, texture, shape, location, and a measure of abnormality of at least the abnormal portion (para 0027-0029).
Regarding claim 18, the apparatus of claim 1, Fink further discloses wherein the one or more entities comprise at least one of a finding, an impression or other observable, wherein the entity is associated with a pathology (para 0023-0024), and wherein the one or more attributes associated with the entity may comprise attributes associated with anatomical location or region, an anatomical distribution, laterality, severity, or a level of certainty (para 0023-0024).

Regarding claim 19, the apparatus of claim 1, Fink further discloses wherein the matching process is further based on further information obtained from the medical text data, for example, author information and/or a measure of quality and/or content of the medical text data and/or other metadata (para 0022 and 0030).

Regarding claim 20, this claim recites substantially the same limitations that are performed by claim 1 above, and it is rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Zhang et al., "Contrastive Learning of Medical Visual Representations from Paired Images and Text" discloses ConVIRT, an alternative unsupervised strategy to learn medical visual representations by exploiting naturally occurring paired descriptive text. Tanwani, US 2023/0386646 discloses using deep learning models to interpret medical images with natural language. Mahmood et al., US 2025/0200748 discloses systems and methods for analysis of pathology data.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAN D HUYNH whose telephone number is (571) 270-1937. The examiner can normally be reached 8AM-6PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VAN D HUYNH/Primary Examiner, Art Unit 2665
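The matching process recited in claim 11 frames region-to-entity pairing as a linear assignment problem, for which the office action names the Jonker-Volgenant algorithm. A minimal brute-force sketch of that formulation follows; the `match_score` similarity and all the region/entity data are illustrative assumptions, not the application's actual method (a production solver would use something like `scipy.optimize.linear_sum_assignment`, which implements a modified Jonker-Volgenant algorithm):

```python
from itertools import permutations

def match_score(props, attrs):
    # Toy similarity: count descriptors shared between an abnormal
    # image region's properties and a text entity's attributes.
    return len(set(props) & set(attrs))

def best_assignment(regions, entities):
    # Brute-force linear assignment: pair each abnormal region with a
    # distinct text entity so the total match score is maximal.
    best, best_total = None, -1
    for perm in permutations(range(len(entities)), len(regions)):
        total = sum(match_score(regions[i], entities[j])
                    for i, j in enumerate(perm))
        if total > best_total:
            best, best_total = perm, total
    return best, best_total

# Hypothetical data echoing Fink's mole example: region properties
# from image analysis vs. entity attributes extracted from report text.
regions = [("irregular", "darkened"), ("round", "calcified")]
entities = [("round", "calcified", "left-lung"),
            ("irregular", "darkened", "skin")]
assignment, score = best_assignment(regions, entities)
# assignment pairs region 0 with entity 1 and region 1 with entity 0
```

Brute force is exponential in the number of regions; the Jonker-Volgenant algorithm solves the same problem in polynomial time, which is presumably why the claim names it.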

Prosecution Timeline

Mar 13, 2024
Application Filed
Jan 30, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602798
METHOD AND APPARATUS FOR GENERATING SUBJECT-SPECIFIC MAGNETIC RESONANCE ANGIOGRAPHY IMAGES FROM OTHER MULTI-CONTRAST MAGNETIC RESONANCE IMAGES
2y 5m to grant Granted Apr 14, 2026
Patent 12602784
MEDICAL DEVICE FOR TRANSCRIPTION OF APPEARANCES IN AN IMAGE TO TEXT WITH MACHINE LEARNING
2y 5m to grant Granted Apr 14, 2026
Patent 12594046
METHOD AND APPARATUS FOR ASSISTING DIAGNOSIS OF CARDIOEMBOLIC STROKE BY USING CHEST RADIOGRAPHIC IMAGES
2y 5m to grant Granted Apr 07, 2026
Patent 12586186
JAUNDICE ANALYSIS SYSTEM AND METHOD THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12582345
Systems and Methods for Identifying Progression of Hypoxic-Ischemic Brain Injury
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+13.4%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 721 resolved cases by this examiner. Grant probability derived from career allow rate.
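The headline projections follow from the career data quoted above. A short sketch of the arithmetic; note that how the 99% with-interview figure is derived is not documented, so the cap below is a hypothetical model (base rate plus the quoted lift would exceed 100%):

```python
# Career data stated on the page: 630 granted of 721 resolved cases.
granted, resolved = 630, 721
grant_probability = round(granted / resolved * 100)  # matches the 87% shown

# Interview lift is quoted as +13.4 percentage points. Adding it to the
# base rate overshoots 100%, so we assume (hypothetically) the tool caps
# the displayed with-interview figure at 99%.
interview_lift = 13.4
with_interview = min(round(granted / resolved * 100 + interview_lift), 99)
```

Under that assumption, `grant_probability` reproduces the 87% figure and `with_interview` the 99% figure shown in both the header and the projections block.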
