Prosecution Insights
Last updated: April 19, 2026
Application No. 18/523,598

CONTEXTUAL PROCESSING OF ULTRASOUND DATA

Non-Final OA §103

Filed: Nov 29, 2023
Examiner: VO, QUANG N
Art Unit: 2683
Tech Center: 2600 — Communications
Assignee: Fujifilm Healthcare Americas Corporation
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 80%

Examiner Intelligence

Grants 72% — above average

Career Allow Rate: 72% (439 granted / 612 resolved; +9.7% vs TC avg)
Interview Lift: +8.3% across resolved cases with interview (moderate lift)
Typical Timeline: 2y 9m avg prosecution
Career History: 635 total applications across all art units; 23 currently pending
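The tiles above can be reproduced from the raw counts shown (439 granted of 612 resolved, +8.3-point interview lift). A minimal sketch, assuming the dashboard rounds the career allow rate to a whole percent and adds the interview lift as percentage points (the rounding convention is an assumption):

```python
# Reproduce the examiner-intelligence figures from the raw counts shown above.
granted, resolved = 439, 612
interview_lift_pts = 8.3

allow_rate = 100 * granted / resolved        # career allow rate, in percent
with_interview = allow_rate + interview_lift_pts

print(f"Career Allow Rate: {allow_rate:.0f}%")      # 72%
print(f"With Interview:    {with_interview:.0f}%")  # 80%
```

The "Grant Probability" tile matches the career allow rate, consistent with the footnote that it is derived directly from that rate.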

Statute-Specific Performance

§101: 13.4% (-26.6% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 22.1% (-17.9% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)

Deltas are vs. the Tech Center average estimate • Based on career data from 612 resolved cases
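The per-statute deltas above are mutually consistent with a single Tech Center average line near 40% (e.g., 52.8 − 12.8 = 40.0). A sketch of that consistency check, assuming each delta is a simple point difference from one flat TC-average estimate:

```python
# Check each statute's "vs TC avg" delta against a single implied TC average.
rates  = {"101": 13.4, "103": 52.8, "102": 22.1, "112": 7.6}
deltas = {"101": -26.6, "103": +12.8, "102": -17.9, "112": -32.4}

for statute, rate in rates.items():
    implied_tc_avg = rate - deltas[statute]
    print(f"§{statute}: implied TC avg = {implied_tc_avg:.1f}%")  # 40.0% for each
```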

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election of claims 1-12 without traverse, with claims 13-20 cancelled, in the reply filed on 02/12/2026 is acknowledged.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11 are rejected under 35 U.S.C. 103 as being unpatentable over Irani (US 11,205,520 B1).

Regarding claim 1, Irani discloses a system (e.g., FIG. 1 further shows a physician-guided machine learning system 132, paragraph 36) comprising: a computing device coupled to the archiver and configured to generate a benchmarking report that includes a comparison of the benchmarking data across the ultrasound examinations (e.g., at step 1110, the benchmark applicator 264 may use the benchmarks B set by the medical professional to limit the set of images in the anatomically arranged archive 220 to be processed to a first reduced set (FRS). For example, if the medical professional selects a non-contrast MRI in the axial plane as benchmarks B (e.g., because the user case (UC) is a non-contrast MRI in the axial plane), the benchmark applicator 264 may eliminate from further consideration those images in the archive 220 that do not meet these benchmarks.
Put another way, the benchmark applicator 264 may limit the first reduced set (FRS) to non-contrast MRIs in the axial plane (and exclude, for example: MRI scans in the sagittal and coronal planes, all X-rays, all CT scans, all PET scans, all ultrasounds, et cetera), paragraph 82); and a display device coupled to the computing device and configured to display the benchmarking report including the comparison (e.g., the disclosed system may then: (a) process images of other patients to locate images that meet the benchmarks B, including using image processing techniques, in an anatomically arranged archive that contains images and medical exam results of other patients; and (b) display results for evaluation by the medical professional. The machine learning techniques (e.g., the deep learning model(s)) employed by the system to evaluate the user case may depend on the particular searching benchmarks B set by the medical professional, paragraph 27).

Irani does not specifically disclose an archiver configured to maintain benchmarking data for ultrasound examinations, the benchmarking data including one or more of ultrasound data generated as part of the ultrasound examinations, contextual data representing a context for the ultrasound examinations, scheduling data representing schedules for the ultrasound examinations, and reporting data representing medical reports generated for the ultrasound examinations. Irani discloses an archiver configured to maintain data for ultrasound examinations, data including one or more of ultrasound data generated as part of the ultrasound examinations, contextual data representing a context for the ultrasound examinations, scheduling data representing schedules for the ultrasound examinations (e.g., Bin 224N indicates that the anatomically arranged archive 220 may have any reasonable number of MRI bins.
In like manner, the storage bins 225A-225N, 226A-226N, and 227A-227N may be respectively configured to store CTs, PETs, and ultrasounds that are grouped together by the archiver 250, paragraph 46); and Irani discloses the benchmarking data and reporting data representing medical reports generated for the ultrasound examinations (e.g., the system may then employ machine learning techniques (e.g., deep learning models) on the reduced dataset to find for the medical professional images and information that are responsive to the medical professional's query. The results may be arranged so as to allow the medical professional to rapidly and dynamically shift the focus to those results that the medical professional considers to be most relevant. In this way, and in part because of searching benchmarks B (including the weighted criteria (WCR) and the binary criteria (BCR) thereof selected by the medical professional specifically to formulate a tailored search for that user case), problems associated with the prior art diagnosis-focused systems may be avoided. A medical imaging evaluation (such as a radiology read being performed by a radiologist or a diagnostic evaluation by another qualified medical professional) may thus be greatly improved, paragraph 28).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to have modified Irani to include an archiver configured to maintain benchmarking data for ultrasound examinations, the benchmarking data including one or more of ultrasound data generated as part of the ultrasound examinations, contextual data representing a context for the ultrasound examinations, scheduling data representing schedules for the ultrasound examinations, and reporting data representing medical reports generated for the ultrasound examinations by combining the above teaching.
It would have been obvious to one of ordinary skill in the art at the time of the invention to have modified Irani, combining its teachings, for assisting a physician in finding medically relevant images and information based on physician-defined criteria.

Regarding claim 2, Irani discloses wherein the benchmarking data includes the ultrasound data obtained from at least two ultrasound equipment manufacturers (e.g., ultrasound scanners, paragraph 17), and the benchmarking report includes a visual representation that compares the ultrasound data obtained from the at least two ultrasound equipment manufacturers (e.g., the benchmark applicator 264 may eliminate from further consideration those images in the archive 220 that do not meet these benchmarks. Put another way, the benchmark applicator 264 may limit the first reduced set (FRS) to non-contrast MRIs in the axial plane (and exclude, for example: MRI scans in the sagittal and coronal planes, all X-rays, all CT scans, all PET scans, all ultrasounds, et cetera), paragraph 82).

Regarding claim 3, Irani discloses wherein the benchmarking data includes the reporting data for the medical reports generated by at least two clinicians who performed the ultrasound examinations, and the benchmarking report compares submission timeliness for the medical reports generated by the at least two clinicians (e.g., The disclosed system may then: (a) process images of other patients to locate images that meet the benchmarks B, including using image processing techniques, in an anatomically arranged archive that contains images and medical exam results of other patients; and (b) display results for evaluation by the medical professional. The machine learning techniques (e.g., the deep learning model(s)) employed by the system to evaluate the user case may depend on the particular searching benchmarks B set by the medical professional, paragraph 27).
Regarding claim 4, Irani discloses wherein the archiver is implemented to obtain the benchmarking data from at least two departments within a care facility, and the benchmarking report compares the at least two departments based on the benchmarking data (e.g., they are configured to provide the medical professional with information, filtered by benchmarks set by the medical professional in line with the requirements of the particular case, to facilitate the finding of a historical twin and ultimately assist the medical professional in making a diagnosis. The term “medical professional,” as used herein, means an imaging expert that can take and/or examine a medical image, such as a radiologist or other physician, a medical resident or student, an imaging technician, et cetera, paragraph 21; note: medical professionals include different departments).

Regarding claim 5, Irani discloses wherein the benchmarking data includes the scheduling data that indicates numbers of the ultrasound examinations performed within the at least two departments, and the comparison indicates the numbers of the ultrasound examinations (e.g., The medical imaging station 102 may include a medical imaging device 104. The medical imaging device 104 may be one or more medical imaging devices, such as an MRI scanner, a CT scanner, an X-ray machine, an ultrasound scanner, a PET scanner, or another medical imaging modality. The medical imaging device 104 may be usable to generate a two-dimensional image 106 representative of the patient (e.g., representative of a limb, organ, or other body part of the patient). The medical imaging device 104 may also be usable to take a series of such two-dimensional images 106 (which is the benchmarking), which may in some cases be compiled into three-dimensional models, paragraph 32).
Regarding claim 6, Irani discloses wherein the benchmarking data includes the scheduling data that indicates types of the ultrasound examinations performed within the at least two departments, and the comparison indicates the types of the ultrasound examinations (e.g., The disclosed machine learning systems and methods are not configured to make any diagnosis. Rather, they are configured to provide the medical professional with information, filtered by benchmarks set by the medical professional in line with the requirements of the particular case, to facilitate the finding of a historical twin and ultimately assist the medical professional in making a diagnosis. The term “medical professional,” as used herein, means an imaging expert that can take and/or examine a medical image, such as a radiologist or other physician, a medical resident or student, an imaging technician, et cetera, paragraph 21).

Regarding claim 7, Irani discloses wherein the archiver is implemented to obtain the benchmarking data from at least two care facilities, and the benchmarking report compares the at least two care facilities based on the benchmarking data (e.g., the archiver 250 may use the AI arranger keys 252, together with the image evaluator 258, to process the images and determine the anatomy shown in each image, so as to allow each image to be further tagged by anatomy. Thus, each image may be tagged by both the metadata (by the metadata examiner 251) and the anatomy (by the image evaluator 258 employing the appropriate AI arranger keys 252). The tagged images may be grouped together (e.g., in electronic storage “bins”) (which is the benchmarking) in the anatomically arranged archive 220 based on the metadata and determined anatomy to allow for efficient processing thereof, paragraph 43).
Regarding claim 8, Irani discloses wherein the archiver is implemented to obtain the benchmarking data including the scheduling data that indicates scheduled times for when the ultrasound examinations were scheduled to occur and actual times for when the ultrasound examinations did occur, and the comparison indicates differences between the scheduled times and the actual times (e.g., the number of cases a medical professional is required to evaluate on a daily basis remains largely unchanged. For example, an emergency room radiologist may be required to evaluate over 200 cases a day even today. These performance requirements limit the amount of time the medical professional may be able to devote to each image. In some cases, to meet workload demands, a medical professional (e.g., a radiologist) may be required to review an image every three to four seconds. Such workload demands may fatigue the medical professional reviewing the medical images and impact interpretation accuracy, paragraph 18).

Regarding claim 9, Irani discloses wherein at least one of the computing device and the display device is implemented to receive a user input indicating report times, and the display device is implemented to obtain and display the benchmarking report according to the report times (e.g., The method includes displaying the user image using a graphical user interface. The method comprises receiving, in connection with the user image and via the graphical user interface, each of: (i) a region of interest; (ii) selections for binary criteria, the binary criteria including modality, gender, and age; and (iii) weights of weightable criteria, the weightable criteria including each of an anatomical location of said region of interest and features of said region of interest. The method comprises using a processor to identify medical image results in the anatomically arranged archive based on the selections for the binary criteria and the weights of the weightable criteria.
The method includes using the processor to group the medical image results based on previously reported medical reports (which is the benchmarking), and displaying the grouped medical image results on a display, paragraph 3).

Regarding claim 10, Irani discloses wherein at least one of the computing device and the display device is implemented to receive a user input indicating a selection of the benchmarking data to include in the comparison included in the benchmarking report (e.g., The method includes displaying the user image using a graphical user interface. The method comprises receiving, in connection with the user image and via the graphical user interface, each of: (i) a region of interest; (ii) selections for binary criteria, the binary criteria including modality, gender, and age; and (iii) weights of weightable criteria, the weightable criteria including each of an anatomical location of said region of interest and features of said region of interest. The method comprises using a processor to identify medical image results in the anatomically arranged archive based on the selections for the binary criteria and the weights of the weightable criteria. The method includes using the processor to group the medical image results based on previously reported medical reports, and displaying the grouped medical image results on a display, paragraph 3).

Regarding claim 11, Irani discloses wherein at least one of the computing device and the display device is implemented to receive a user input indicating a delivery method of the benchmarking report to the display device (e.g., The method includes displaying the user image using a graphical user interface.
The method comprises receiving, in connection with the user image and via the graphical user interface, each of: (i) a region of interest; (ii) selections for binary criteria, the binary criteria including modality, gender, and age; and (iii) weights of weightable criteria, the weightable criteria including each of an anatomical location of said region of interest and features of said region of interest. The method comprises using a processor to identify medical image results in the anatomically arranged archive based on the selections for the binary criteria and the weights of the weightable criteria. The method includes using the processor to group the medical image results based on previously reported medical reports, and displaying the grouped medical image results on a display, paragraph 3).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Irani (US 11,205,520 B1) as applied to claims 1, 11 above, and further in view of Zhang (US 2021/0027485 A1).

Regarding claim 12, Irani does not specifically disclose wherein the delivery method includes at least one of email delivery and access via a benchmarking application configured to display a dashboard. Zhang discloses wherein the delivery method includes at least one of email delivery and access via a benchmarking application configured to display a dashboard (e.g., Real-time reports, daily emails, and operations benchmarks help managers and owners make better operating decisions every day, paragraph 71).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to have modified Irani to include wherein the delivery method includes at least one of email delivery and access via a benchmarking application configured to display a dashboard as taught by Zhang.
It would have been obvious to one of ordinary skill in the art at the time of the invention to have modified Irani by the teaching of Zhang to improve employee engagement and task completion and to save hours of follow-up time for store and district managers, resulting in more consistent and efficient operations.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUANG N VO whose telephone number is (571)270-1121. The examiner can normally be reached Monday-Friday, 7AM-4PM, EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abderrahim Merouan, can be reached at 571-270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/QUANG N VO/
Primary Examiner, Art Unit 2683

Prosecution Timeline

Nov 29, 2023
Application Filed
Mar 01, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592002
COLOR CONVERSION SYSTEM, COLOR CONVERSION METHOD, AND INFORMATION PROCESSING APPARATUS
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12577842
METHOD AND SYSTEM FOR MEASURING VOLUME OF A DRILL CORE SAMPLE
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12581023
GREYSCALE IMAGES
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12572996
FRACTIONALIZED TRANSFERS OF SENSOR DATA FOR STREAMING AND LATENCY-SENSITIVE APPLICATIONS
Granted Mar 10, 2026 • 2y 5m to grant
Patent 12573172
IMAGE OUTPUTTING DEVICE AND IMAGE OUTPUTTING METHOD
Granted Mar 10, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 72% (80% with interview, +8.3%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 612 resolved cases by this examiner. Grant probability derived from career allow rate.
