Prosecution Insights
Last updated: April 19, 2026
Application No. 19/106,284

GUIDED ULTRASOUND IMAGING FOR POINT-OF-CARE STAGING OF MEDICAL CONDITIONS

Non-Final OA — §101, §103
Filed
Feb 25, 2025
Examiner
KLEIN, BROOKE L
Art Unit
3797
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Koninklijke Philips N.V.
OA Round
1 (Non-Final)
52%
Grant Probability
Moderate
1-2
OA Rounds
3y 5m
To Grant
99%
With Interview

Examiner Intelligence

Grants 52% of resolved cases
52%
Career Allow Rate
102 granted / 197 resolved
-18.2% vs TC avg
Strong +55% interview lift
Without
With
+55.3%
Interview Lift
resolved cases with interview
Typical timeline
3y 5m
Avg Prosecution
51 currently pending
Career history
248
Total Applications
across all art units
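The headline figures above can be reproduced from the raw counts. A minimal sketch, assuming the dashboard's metrics are a simple granted/resolved ratio and a percentage-point difference between with-interview and without-interview allowance rates (the actual methodology is not disclosed):

```python
# Assumed formulas, not the vendor's documented methodology.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate = granted cases / resolved cases."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gain in allowance when an interview was held."""
    return rate_with - rate_without

career = allow_rate(102, 197)        # ~0.518, displayed as 52%
lift = interview_lift(0.99, 0.437)   # 0.553, displayed as +55.3%
print(f"career {career:.0%}, interview lift {lift:+.1%}")
```

Note that 0.99 minus the reported +55.3% lift implies a without-interview rate near 43.7%; that figure is an inference, not shown in the panel.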

Statute-Specific Performance

§101
9.8%
-30.2% vs TC avg
§103
38.5%
-1.5% vs TC avg
§102
15.7%
-24.3% vs TC avg
§112
32.7%
-7.3% vs TC avg
Black line = Tech Center average estimate • Based on career data from 197 resolved cases
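The per-statute deltas above are consistent with subtracting the examiner's rate from a flat Tech Center average: every rate/delta pair in the panel sums back to 40%. A sketch under that assumption (the 40% figure is implied by the numbers, not documented):

```python
# Reproduce the "vs TC avg" deltas, assuming they are simple differences
# against a flat Tech Center average. TC_AVG = 0.40 is inferred from the
# panel (e.g. 9.8% + 30.2% = 40%), not stated by the tool.
TC_AVG = 0.40

examiner_rates = {"§101": 0.098, "§103": 0.385, "§102": 0.157, "§112": 0.327}

deltas = {statute: round(rate - TC_AVG, 3)
          for statute, rate in examiner_rates.items()}
print(deltas)  # {'§101': -0.302, '§103': -0.015, '§102': -0.243, '§112': -0.073}
```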

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 15 is objected to because of the following informalities: claim 15 recites "the display"; however, no display has been set forth previously. The limitation should read "a display". Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception in the form of an abstract idea without significantly more. In a test for patent subject matter eligibility, the claims pass Step 1 (see the 2019 Revised Patent Subject Matter Eligibility Guidance), as they are related to a process, machine, manufacture, or composition of matter. When assessed under Step 2A, Prong I, independent claims 1 and 15 are found to recite a judicial exception (i.e., an abstract idea). In this instance, claims 1 and 15 recite the limitations "identify/ing a first image feature associated with a medical condition of the patient anatomy within the first ultrasound image and a second image feature associated with the medical condition within the second ultrasound image", "determine/determining a first sub-score for the first image feature and a second sub-score for the second image feature", and "determine/determining a staging value representative of a progression of the medical condition based on the first sub-score and the second sub-score". The cited limitations, under their broadest reasonable interpretation, encompass a mental process (i.e., an abstract idea) of identifying and determining which can be performed in the mind or by a human using pen and paper (e.g., observation, evaluation, judgment, opinion). In other words, a person could reasonably identify features within images via observation/evaluation, determine scores for the features via evaluation/judgment/opinion, and determine a staging value via evaluation/judgment/opinion. Examiner notes that, with the exception of generic computer-implemented steps (e.g., "a processor" recited in claim 1 and "computer implemented method" recited in claim 15), there is nothing in the claims that precludes the limitations from being performed by a human, mentally or with pen and paper; thus the cited limitation(s) recite a judicial exception (MPEP 2106.04(a)) and the claims must be reviewed under Step 2A, Prong II to determine patent eligibility.

Step 2A, Prong II determines whether any claim recites an additional element that integrates the judicial exception into a practical application. The independent claims recite the following additional elements: a processor configured for communication with a display and a transducer array of an ultrasound probe (claim 1); control/controlling the transducer array to obtain a first ultrasound image corresponding to a first view of a patient anatomy and a second ultrasound image of the patient anatomy corresponding to a second view of the patient anatomy; and output/outputting, to the display, a screen display comprising: the staging value; and at least one of: the first ultrasound image, an indication of the first image feature in the first ultrasound image, and the first sub-score; or the second ultrasound image, an indication of the second image feature in the second ultrasound image, and the second sub-score. The additional elements in the cited independent claims are not found to integrate the judicial exception into a practical application. In this case, the processor configured for communication with a display and a transducer array of the ultrasound probe is seen as merely a generic component of an ultrasound system; it does no more than link the judicial exception to a particular technological environment or field of use and further amounts to merely applying the judicial exception with a generic computer. Controlling the transducer array to obtain first and second ultrasound images is seen as merely insignificant pre-solution activity of data gathering, and outputting the screen display to the display as recited is seen as merely insignificant post-solution activity of outputting/displaying data/results. These elements are seen as adding insignificant extra-solution activity to the judicial exception. Therefore, under Step 2A, Prong II, the judicial exception is not integrated into a practical application by the additional elements of independent claims 1 and 15, and the claims must be reviewed under Step 2B to determine patent eligibility.

Step 2B determines whether a claim amounts to significantly more. The additional elements listed above do not amount to significantly more than the judicial exception. In this instance, as noted above, the additional elements are seen as merely linking the judicial exception to a particular technological environment/field of use and applying the judicial exception with a generic computer, as well as insignificant extra-solution activity of data gathering and displaying results. Additionally, there is no improvement in the functioning of the computer or technological field, and there is no transformation of subject matter into a different state. Therefore, under Step 2B, in a test for patent subject matter eligibility, the judicial exceptions of the independent claims do not amount to significantly more, and the independent claims remain patent ineligible.

Dependent claims 2-14 further limit the abstract idea of independent claim 1. When analyzed as a whole, these claims are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed towards an abstract idea and do not sufficiently integrate the subject matter into a practical application or recite elements which constitute significantly more than the abstract ideas identified. The dependent claims are directed toward additional elements which encompass abstract ideas. In this instance, the dependent claims recite the following limitation: "determine a quality associated with the first ultrasound image before identifying the first image feature within the first ultrasound image" (claim 6). The cited limitation, under its broadest reasonable interpretation, encompasses a mental process (i.e., an abstract idea) which can be performed in the mind or by a human using pen and paper (e.g., observation, evaluation, judgment, opinion). In other words, a human could reasonably determine a quality associated with the first ultrasound image. Examiner notes that, with the exception of generic computer-implemented steps (e.g., the processor), there is nothing in the claims that precludes the limitation from being performed by a human, mentally or with pen and paper; thus the claimed limitation is considered to be directed towards a judicial exception (MPEP 2106.04(a)). Under Step 2A, Prong II, dependent claims 2-14 present additional elements which only further narrow the judicial exceptions (e.g., claim 2, which further recites outputting user guidance to the display, which amounts to merely insignificant extra-solution activity of displaying information; claims 3-5, which merely narrow the nature of the user guidance; claim 7, which further narrows the nature of the quality as being based on a comparison of images; claim 8, which recites identifying the first image feature and obtaining a further ultrasound image corresponding to the first view, which in its BRI is a contingent limitation such that the processor must merely be capable of performing this function regardless of whether the condition has been met, and thus amounts to merely insignificant extra-solution activity of data gathering; claims 9-11, which further recite a machine learning algorithm/multi-task learning model recited with such high generality that they amount to merely a generic computer; claim 12, which further narrows the nature of the patient anatomy and the medical condition; claim 13, which further narrows the nature of the first sub-score and the second sub-score; and claim 14, which further narrows the nature of the first image feature and second image feature) and provide no additional element which is found to integrate the judicial exception into a practical application. These dependent claims include no additional limitations that are sufficient to amount to significantly more than the judicial exception. Additionally, there is no improvement in the functioning of the computer or technological field, and there is no transformation of subject matter into a different state. As discussed above with respect to integration of the abstract idea into a practical application, the dependent claims do not provide any additional elements that would amount to significantly more than the judicial exception. Under Step 2B, these claims are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9-13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Klochko et al. (US 20220148172 A1), hereinafter Klochko.

Regarding claims 1 and 15, Klochko teaches an ultrasound system (at least fig. 2 (10) and corresponding disclosure in at least [0018]) comprising: a processor (at least fig. 2 (20) and corresponding disclosure in at least [0018]) configured for communication with a display (at least fig. 2 (22b) and corresponding disclosure in at least [0025]) and a transducer array of an ultrasound probe (at least fig. 2 (18) and corresponding disclosure in at least [0025]), wherein the processor is configured to:

Control the transducer array ([0025], which discloses the CPU 20 may be configured to control the provision of electrical current to the transducer probe 18 to emit sound waves, and to receive electrical pulses generated in response to sound waves or echoes received by the transducer probe 18) to obtain a first ultrasound image corresponding to a first view of a patient anatomy and a second ultrasound image of the patient anatomy corresponding to a second view of the patient anatomy (at least fig. 3 (102) and corresponding disclosure in at least [0051], which discloses multiple images acquired in step 102; see also [0058], which discloses that if there are N images, step 114 comprises calculating N scores and then calculating an overall average score for the collection of N images by adding all of the individual image scores together and dividing by the number of images for which scores were calculated (i.e., N). Similarly, in an instance wherein the images acquired in step 102 are taken along multiple imaging planes (e.g., one or more images along a long axis and one or more images along a short axis), scores may be calculated for each individual image and then an average score may be calculated for each imaging plane from those individual scores (e.g., an average score for the long axis and an average score for the short axis));

Identify a first image feature associated with a medical condition of the patient anatomy within the first ultrasound image and a second image feature associated with the medical condition within the second ultrasound image (at least fig. 3 (110) and corresponding disclosure in at least [0041], [0051], and [0046], which discloses the regions of interest may be identified by an electronic processor, for example, the electronic processor 12 of the system 10);

Determine a first sub-score for the first image feature and a second sub-score for the second image feature (at least fig. 3 (114) and corresponding disclosure in at least [0052] and [0058], which discloses the evaluating step 114 may be performed for each of the acquired images such that a score is calculated for each acquired image);

Determine a staging value (at least fig. 3 (120) and corresponding disclosure in at least [0066]) representative of a progression of the medical condition based on the first sub-score and the second sub-score ([0066], which discloses step 120 may also include assigning an indication or grade as to the severity of the condition (e.g., mild, moderate, severe). In at least some embodiments, the score calculated in step 114 may be used to assign such a grade. For example, one or more predetermined, empirically-derived thresholds or threshold ranges, each corresponding to a particular grade (e.g., mild, moderate, severe), may be stored in an electronic memory and may be used along with the calculated score to assign a grade to the medical condition; and [0058], which discloses step 114 comprises calculating N scores and then calculating an overall average score for the collection of N images by adding all of the individual image scores together and dividing by the number of images for which scores were calculated (i.e., N). Similarly, in an instance wherein the images acquired in step 102 are taken along multiple imaging planes (e.g., one or more images along a long axis and one or more images along a short axis), scores may be calculated for each individual image and then an average score may be calculated for each imaging plane from those individual scores (e.g., an average score for the long axis and an average score for the short axis). The average scores for the different imaging planes may then be combined using a statistical combination (e.g., statistical mapping) to determine an overall score for the collection of acquired images. Examiner thus notes that step 120 as disclosed in [0066] is based on both the first sub-score and the second sub-score. Examiner notes that either the overall score or the grade is considered a staging value in its broadest reasonable interpretation); and

Output, to the display, a screen display ([0025], which discloses the CPU 20 may also be configured to process data and generate images that are displayed on one of the user interfaces 22; and [0026], which discloses, for example, one or more user interfaces 22 (user interface 22b in FIG. 2) may display images and/or other data generated by the system 10) comprising: an indication representative of the detection of the presence or absence of the medical condition to be provided ([0067]); and at least one of: the first ultrasound image and an indication of the first image feature in the first ultrasound image (see at least fig. 4); or the second ultrasound image and an indication of the second image feature in the second ultrasound image (see at least fig. 4).

Klochko fails to explicitly teach the screen display including any of the staging value and first or second sub-scores; however, Klochko, as noted above in [0026], discloses that the user interface may display images and/or other data generated by the system 10, where examiner notes that the scores/grades are considered data generated by the system.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date to have included in the screen display the staging value and the first or second sub-score in order to allow a user to visualize data corresponding to the first and/or second image (e.g., the sub-scores generated) for verification thereof by the user, and to further allow a user to visualize the grade determined by the processor such that the user may evaluate the grade to provide a diagnosis/prognosis accordingly.

Regarding claim 9, Klochko further teaches wherein, to determine the first sub-score and the second sub-score, the processor is further configured to implement a first machine learning algorithm ([0017], which discloses the systems and methods described herein are directed to artificial intelligence-driven detection of medical conditions using one or more machine learning models at one or more steps of the detection process; and [0068], which discloses the evaluating step 114 and detecting step 120 may alternatively comprise using a trained machine learning model to evaluate the regions of interest identified in step 110 and to detect the presence or absence of the medical condition based thereon).

Regarding claim 10, Klochko further teaches wherein the first machine learning algorithm comprises a multi-task learning model ([0017], which discloses the systems and methods described herein are directed to artificial intelligence-driven detection of medical conditions using one or more machine learning models at one or more steps of the detection process, where it is noted that one machine learning model applied at one or more steps of the detection process is considered a "multi-task" learning model).

Regarding claim 11, Klochko further teaches wherein, to identify the first image feature and the second image feature, the processor is configured to implement a second machine learning algorithm different than the first machine learning algorithm ([0046], which discloses step 110 may comprise applying a machine learning model or algorithm trained to perform image recognition to the or each of the acquired images to identify the regions of interest, and that the systems and methods described herein are directed to artificial intelligence-driven detection of medical conditions using one or more machine learning models at one or more steps of the detection process, where "one or more machine learning models" indicates different machine learning algorithms, and it is further noted that the different functions would use different algorithms).

Regarding claim 12, Klochko further teaches wherein the patient anatomy comprises a liver ([0040], which discloses wherein the medical condition comprises steatosis of the liver or other types of liver damage, the area of interest comprises an area of the patient's body that includes the patient's liver), and wherein the medical condition comprises hepatic steatosis ([0033], which discloses these medical conditions may include, for example and without limitation, one or more of: diabetes (e.g., type 2 diabetes); prediabetes; muscle atrophy/fatty infiltration (e.g., atrophy/fatty infiltration of rotator cuff muscles); and steatosis of the liver, to cite just a few examples).

Regarding claim 13, Klochko further teaches wherein the first sub-score for the first image feature and the second sub-score for the second image feature correspond to an ultrasonographic fatty liver indicator ([0040], which discloses wherein the medical condition comprises steatosis of the liver or other types of liver damage, the area of interest comprises an area of the patient's body that includes the patient's liver; and [0033], which discloses these medical conditions may include, for example and without limitation, one or more of: diabetes (e.g., type 2 diabetes); prediabetes; muscle atrophy/fatty infiltration (e.g., atrophy/fatty infiltration of rotator cuff muscles); and steatosis of the liver, to cite just a few examples). Therefore, in the case of steatosis, which refers to a fatty liver, it is noted that any scores/sub-scores correspond to an ultrasonographic fatty liver indicator in its broadest reasonable interpretation.

Claims 2-5 are rejected under 35 U.S.C. 103 as being unpatentable over Klochko as applied to claim 1 above, and further in view of Takeuchi (US 20040019270 A1), hereinafter Takeuchi.

Regarding claim 2, Klochko, as modified, teaches the elements of claim 1 as previously stated. Klochko fails to explicitly teach wherein the processor is further configured to output, to the display, user guidance to obtain the first ultrasound image corresponding to the first view of the patient anatomy. Takeuchi, in a similar field of endeavor involving ultrasound imaging, teaches wherein a processor is configured to display user guidance (at least figs. 6 and 7A (42 and 46) and corresponding disclosure in at least [0055]-[0056] and [0060]) to obtain a first ultrasound image corresponding to a view of patient anatomy. It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Klochko to include displaying user guidance as taught by Takeuchi in order to make manipulations easier and adequate for non-specialized or less experienced physicians or technicians (Takeuchi [0098]) such that a desired view of the anatomy may be acquired.

Regarding claim 3, Klochko, as modified, teaches the elements of claim 2 as previously stated. Takeuchi, as applied to claim 2 above, further teaches wherein the user guidance comprises a graphical representation of a probe and/or an orientation for the ultrasound probe (see at least fig. 7A, depicting both a graphical representation of a probe and an orientation for the ultrasound probe in 46).

Regarding claim 4, Klochko, as modified, teaches the elements of claim 2 as previously stated. Takeuchi, as applied to claim 2 above, further teaches wherein the user guidance comprises a reference image (42) associated with the view of the patient anatomy.

Regarding claim 5, Klochko, as modified, teaches the elements of claim 2 as previously stated. Takeuchi, as applied to claim 2 above, further teaches wherein the user guidance comprises a description of dynamic behavior associated with the view of the patient anatomy ([0049], which discloses the probe movement information 46 can be calculated from the reference position information in relation to the reference image used to acquire the following diagnosis image and the position information of the ultrasonic probe 12 currently detected by the position detector 13; and [0060], which discloses FIG. 7A is a view showing a display example of the navigation information (reference image 42 and probe movement information 46) displayed when the operator moves to the acquisition of the following diagnosis image. The ultrasonic probe movement information 46 is displayed as a view showing the relation between the ultrasonic probe 12 at the current position and the ultrasonic probe 12 at the position at which the reference image can be acquired. The probe movement information 46 shown in the drawing comprises a probe A (solid line) indicating the current position and posture of the ultrasonic probe 12, and a probe B (dotted line) indicating the position and the posture to which the ultrasonic probe 12 has to be moved. The probe B is displayed as a still image at a specific position and posture so that the probe A can be moved in association with a movement of the ultrasonic probe 12. [0061] Then, the operator positions the ultrasonic probe 12 by controlling the position and the posture of the ultrasonic probe 12 so that the probe A displayed in a solid line superposes the probe B displayed in a dotted line while watching the probe movement information 46 (Step S10). Probe movement information as depicted/disclosed is considered a description of dynamic behavior (e.g., probe movement) associated with the view of the patient anatomy (i.e., to move the probe to the view of the patient anatomy)).

Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Klochko as applied to claim 1 above, and further in view of Aladahalli et al. (US 20210145411 A1), hereinafter Aladahalli.

Regarding claim 6, Klochko, as modified, teaches the elements of claim 1 as previously stated. Klochko, as modified, fails to explicitly teach wherein the processor is further configured to determine a quality associated with the first ultrasound image before identifying the first image feature within the first ultrasound image. Aladahalli, in a similar field of endeavor involving ultrasound imaging, teaches a processor configured to determine a quality associated with a first ultrasound image before identifying a first image feature within the first ultrasound image (at least fig. 3A (312) and corresponding disclosure in at least [0042]). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Klochko, as currently modified, to include determining a quality as taught by Aladahalli in order to ensure that the reliability to correctly identify and segment the first image feature is appropriate; such a modification provides a gate-keeper, deploying the image interpretation and analysis algorithm only when the image quality is high and the amount of turbulence is below the threshold (Aladahalli [0015]).

Regarding claim 7, Klochko, as modified, teaches the elements of claim 6 as previously stated.
Aladahalli, as applied to claim 6 above, further teaches the processor is further configured to determine the quality based on a comparison between the first ultrasound image and a reference image associated with the first view of the patient anatomy ([0036], which discloses method 300 includes determining an amount of turbulence between at least two successive scan images (scan images are alternatively referred to herein as frames) for a duration of time; for example, the amount of turbulence may be calculated between the first reference image and the second subsequent image).

Regarding claim 8, Klochko, as modified, teaches the elements of claim 6 as previously stated. Aladahalli, as applied to claim 6 above, further teaches wherein the processor is further configured to: if the quality satisfies a threshold, identify the first image feature within the first ultrasound image (at least fig. 3A (326) and corresponding disclosure in at least [0043], which discloses operating the ultrasound scanning in the static mode may include deploying the one or more desired image interpretation algorithms during scanning, and the acquired scan image may be used as input to the segmentation algorithm); and if the quality does not satisfy the threshold, control a transducer array to obtain a further ultrasound image corresponding to the first view of the patient anatomy (at least fig. 3A (316) and corresponding disclosure in at least [0046]).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Klochko as applied to claim 1 above, and further in view of NPL Lin et al. ("Sonographic fatty liver, overweight and ischemic heart disease"), hereinafter Lin.

Regarding claim 14, Klochko, as modified, teaches the elements of claim 1. Klochko fails to explicitly teach wherein the first image feature and the second image feature each comprise a different one of: liver-kidney contrast, posterior attenuation, vessel blurring, gallbladder visualization, diaphragmatic attenuation visualization, or focal sparing. Nonetheless, Lin, in a similar field of endeavor involving ultrasound evaluation, teaches wherein first and second image features comprise a different one of: liver-kidney contrast, posterior attenuation, vessel blurring, gallbladder visualization, diaphragmatic attenuation visualization, or focal sparing (pg. 4839, which discloses severity of fatty liver was classified according to the following modified scoring system: brightness compared to kidneys (0-3), blurring of gall bladder wall (0-3), blurring of hepatic veins (0-3), blurring of portal vein (0-3), far gain attenuation (0-3)). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the first image feature and the second image feature to comprise a different one of: liver-kidney contrast, posterior attenuation, vessel blurring, gallbladder visualization, diaphragmatic attenuation visualization, or focal sparing as taught by Lin in order to provide an advanced staging score for fatty liver. Such a modification would allow for additional scoring of the image features, as disclosed by Lin, for defining severity of the fatty liver accordingly (see Lin pg. 4839, which discloses severity was defined as mild (total scores of 2-6), moderate (7-10), and severe (11-15) fatty liver).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ferraioli et al. ("Ultrasound-based techniques for the diagnosis of liver steatosis") teaches the first image feature and the second image feature each comprise a different one of: liver-kidney contrast, posterior attenuation, vessel blurring, gallbladder visualization, diaphragmatic attenuation visualization, or focal sparing (pg. 6055: to calculate the Hamaguchi score, four US findings, including hepatorenal echo contrast, bright liver, deep attenuation, and vessel blurring, are evaluated. Bright liver and hepato-renal contrast are evaluated together: the score goes from 0 to 3; if they are both negative, the final score is zero. Deep attenuation goes from 0 to 2, and vessel blurring can be positive (score 1) or negative (score 0) [13]. In a series of 94 patients undergoing liver biopsy, Hamaguchi et al [13] found that a score ≥ 2 had an AUROC of 0.98 with 91.7% sensitivity and 100% specificity for diagnosing NAFLD. A score ≥ 1 was able to diagnose visceral obesity, with 68.3% sensitivity and 95.1% specificity. It needs to be underlined that this score has not been validated yet in large series of patients. The US-FLI score is based on the following features: liver/kidney contrast, attenuation of the US beam, poor vessel visualization, difficult visualization of the gallbladder wall, poor visualization of the diaphragm, and presence of fatty sparing areas).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BROOKE L KLEIN, whose telephone number is (571) 270-5204. The examiner can normally be reached Mon-Fri 7:30-4. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anne Kozak, can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BROOKE LYN KLEIN/
Examiner, Art Unit 3797
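The scoring workflow the rejection paraphrases from Klochko [0058] and [0066] — average the per-image sub-scores, then map the overall score to a severity grade via stored thresholds — can be sketched as below. The threshold values and function name are illustrative placeholders, not values from the reference:

```python
# Sketch of the staging logic described in Klochko [0058]/[0066]:
# average per-image sub-scores, then assign a grade by comparing the
# overall score against stored cutoffs. Cutoff values are hypothetical.
from statistics import mean

GRADE_THRESHOLDS = ((0.33, "mild"), (0.66, "moderate"))  # illustrative

def staging_value(sub_scores):
    """Return (overall score, severity grade) for per-image sub-scores."""
    overall = mean(sub_scores)            # [0058]: sum of N scores / N
    for cutoff, grade in GRADE_THRESHOLDS:
        if overall <= cutoff:             # [0066]: threshold -> grade
            return overall, grade
    return overall, "severe"

overall, grade = staging_value([0.8, 0.9])  # first and second sub-scores
print(overall, grade)
```

Either the overall score or the grade would count as the "staging value" under the examiner's broadest reasonable interpretation.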

Prosecution Timeline

Feb 25, 2025
Application Filed
Jan 30, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588896
ULTRASOUND DIAGNOSTIC APPARATUS AND CONTROL METHOD OF ULTRASOUND DIAGNOSTIC APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12543953
VISUALIZATION FOR FLUORESCENT GUIDED IMAGING
2y 5m to grant Granted Feb 10, 2026
Patent 12544040
SHEAR WAVE IMAGING BASED ON ULTRASOUND WITH INCREASED PULSE REPETITION INTERVAL
2y 5m to grant Granted Feb 10, 2026
Patent 12539176
Fiber Optic Ultrasound Probe
2y 5m to grant Granted Feb 03, 2026
Patent 12514546
ULTRASONIC DIAGNOSIS DEVICE AND METHOD OF DIAGNOSING BY USING THE SAME
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
52%
Grant Probability
99%
With Interview (+55.3%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 197 resolved cases by this examiner. Grant probability derived from career allow rate.
