Prosecution Insights
Last updated: April 19, 2026
Application No. 18/644,472

ULTRASOUND DIAGNOSTIC APPARATUS

Final Rejection under §102, §103, and §112

Filed: Apr 24, 2024
Examiner: MCDONALD, JAMES F
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Fujifilm Healthcare Corporation
OA Round: 2 (Final)

Grant Probability: 55% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 55% of resolved cases (42 granted / 76 resolved; -14.7% vs TC avg)
Interview Lift: +44.3% (allowance rate in resolved cases with vs. without an interview)
Avg Prosecution: 3y 6m (typical timeline); 33 applications currently pending
Total Applications: 109 (career history, across all art units)

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 32.1% (-7.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 76 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This action is in response to Applicant's remarks, filed on 1/5/2026. The amendments to claim(s) 1-7 have been entered. No claim(s) is/are cancelled by Applicant. New claim 8 has been entered. Accordingly, claim(s) 1-8 remain pending for examination.

Response to Arguments

Applicant's arguments, see p. 6-11, with respect to the rejection of claim(s) 1-7 have been fully considered. After review of the Applicant's remarks and amendment to the claim(s), Examiner agrees with the Applicant, and the interpretations of claim limitations under 35 U.S.C. § 112(f) have been withdrawn.

Regarding the rejection(s) under 35 U.S.C. § 112, Examiner respectfully disagrees with the remarks and does not find Applicant's arguments persuasive. New 35 U.S.C. § 112(b) rejections are issued in view of the amended claim(s). New grounds of rejection are made in view of the following: new amendments provided by Applicant and attached remarks; updated search and review of pertinent, eligible prior art; newly added claims; and/or different interpretation of the previously applied references.

Regarding the rejection of claim(s) 1-7 under 35 U.S.C. § 102 and under 35 U.S.C. § 103, Applicant provides the following:

Claims 1-5 were rejected under 35 U.S.C. § 102(a)(1) or 35 U.S.C. § 102(a)(2) as purportedly anticipated by Rao et al. (US 2016/0350620 A1). Claims 6 and 7 were rejected under 35 U.S.C. § 103 as purportedly unpatentable over Rao in view of Hu et al. (US 2023/0020442 A1). Applicant respectfully submits that the cited art does not disclose or suggest the aspects of "... an ultrasound diagnostic apparatus comprising ... a learning model that has been trained to output an image adjustment parameter suitable for learning ultrasound data, from input ultrasound data, by using learning data including (a) the learning ultrasound data obtained by transmitting and receiving ultrasound waves to and from a subject and (b) a training image adjustment parameter to be used in image quality adjustment processing on the learning ultrasound data, the ultrasound diagnostic apparatus ... performing a method comprising: ... determining a specific parameter, which is an image quality adjustment parameter for the specific region, based on output of the learning model when the ultrasound data of an inside of a boundary of the specific region is input to the trained learning model; and executing image quality adjustment processing of adjusting an image quality of the inside of the boundary of the specific region of the ultrasound tomographic image on the ultrasound data of the inside of the specific region based on the specific parameter, and executing image quality adjustment processing of adjusting an image quality of an outside of the boundary of the specific region of the ultrasound tomographic image on the ultrasound data of the outside of the specific region by using a predetermined parameter for image quality adjustment different from the specific parameter."

Rao (US 2016/0350620 A1), as understood by Applicant, proposes a method (discussed in Rao with reference to Rao's Fig. 1) and a system (Rao's Fig. 6) for image enhancement in medical diagnostic ultrasound. According to Rao, knowledge-based detection of anatomy or artifact identifies locations to be enhanced, and the knowledge-based detection of the locations avoids identification of other anatomy or artifacts.
In the approach proposed in Rao, the ultrasound system acquires ultrasound data from a scan of tissue of a patient, and a knowledge base is used to classify locations in the received ultrasound data (e.g., background, fluid, bone, tissue such as an organ of interest, etc.). In such an approach, classifiers are applied to each location represented by the data, or are applied to distinguish between different locations, and a processor classifies different locations represented by the ultrasound data as belonging to a class or not. Further, in the approach proposed in Rao, an image enhancement (for example, enhancing the organ of interest in comparison to the background) is executed based on the classification, and an ultrasound image is generated based on the ultrasound data to which the image enhancement is applied.

Rao proposes that the knowledge base may be used by applying a machine-learnt classifier (which is learnt from the database of annotated images). In such learning proposed in Rao, annotated or ground-truth labeled images are used as training data, and the classification is learnt based on the ground truth and features extracted from the images of the knowledge base.

Examiner respectfully disagrees with Applicant. First, the claims remain rejected under 35 U.S.C. § 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter. As provided in the rejection below, the claims are newly rejected under 35 U.S.C. § 103 over Tsymbalenko et al. in view of Rao et al. [see the claim 1 rejection]. In particular, the training of a learning model based on imaging parameters and identified anatomical features is found in the teachings of Tsymbalenko, and the "boundary of the specific region" language is taught by Rao.
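As an illustrative aside (editor's note, not part of the prosecution record), the classify-then-enhance pipeline that the parties attribute to Rao can be sketched in a few lines of Python. The threshold-based classifier, the class labels, and the per-class gain values below are hypothetical stand-ins for Rao's machine-learnt classifier and are invented purely for the example.

```python
import numpy as np

# Hypothetical class labels, loosely following Rao's examples
# (background, fluid, tissue, organ of interest).
BACKGROUND, FLUID, TISSUE, ORGAN = 0, 1, 2, 3

def classify_locations(ultrasound_data: np.ndarray) -> np.ndarray:
    """Toy stand-in for a machine-learnt classifier: label each pixel
    location using simple amplitude thresholds (hypothetical)."""
    labels = np.full(ultrasound_data.shape, BACKGROUND, dtype=np.int8)
    labels[ultrasound_data > 0.2] = FLUID
    labels[ultrasound_data > 0.5] = TISSUE
    labels[ultrasound_data > 0.8] = ORGAN
    return labels

def enhance(ultrasound_data: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Location-specific enhancement: boost the organ of interest relative
    to the background while keeping a representation of every location."""
    gains = {BACKGROUND: 0.5, FLUID: 1.0, TISSUE: 1.0, ORGAN: 1.5}
    out = ultrasound_data.astype(float).copy()
    for label, gain in gains.items():
        out[labels == label] *= gain
    return out

frame = np.random.default_rng(0).random((64, 64))  # stand-in B-mode frame
labels = classify_locations(frame)
enhanced = enhance(frame, labels)
```

The point of the sketch is only the control flow: classification of every location first, then enhancement applied as a function of that classification, with all locations still represented in the output.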
Applicant asserts that:

"However, while the classifier in Rao is trained to detect locations of the anatomy and/or artifact and while Rao, [0043]-[0049], proposes various processes, Applicant does not find any suggestion anywhere in Rao of the aspects of ... determining a specific parameter, which is an image quality adjustment parameter for the specific region, based on output of the learning model when the ultrasound data of an inside of a boundary of the specific region is input to the trained learning model; and executing image quality adjustment processing of adjusting an image quality of the inside of the boundary of the specific region of the ultrasound tomographic image on the ultrasound data of the inside of the specific region based on the specific parameter, and executing image quality adjustment processing of adjusting an image quality of an outside of the boundary of the specific region of the ultrasound tomographic image on the ultrasound data of the outside of the specific region by using a predetermined parameter for image quality adjustment different from the specific parameter."

Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Applicant has provided no evidence to establish an unobvious difference between the claimed product and the prior art, but rather has merely argued such alleged difference. Mere arguments cannot take the place of evidence. In re Walters, 168 F.2d 79, 80, 77 USPQ 609, 610 (CCPA 1948); In re Cole, 326 F.2d 769, 773, 140 USPQ 230, 233 (CCPA 1964); In re Schulze, 346 F.2d 600, 602, 145 USPQ 716, 718 (CCPA 1965); In re Lindner, 457 F.2d 506, 508, 173 USPQ 356, 358 (CCPA 1972); In re Pearson, 494 F.2d 1399, 1405, 181 USPQ 641, 646 (CCPA 1974); Meitzner v. Mindick, 549 F.2d 775, 782, 193 USPQ 17, 22 (CCPA), cert. denied, 434 U.S. 854 (1977); In re DeBlauwe, 736 F.2d 699, 705, 222 USPQ 191, 196 (Fed. Cir. 1984).

Regarding the teachings of Rao, Examiner respectfully disagrees with the Applicant and respectfully maintains that Rao does teach the limitation of "determining a specific parameter" as recited in independent claim 1. As discussed in the prior rejections, Rao provides a machine learning algorithm to apply classifiers to ultrasound images which detect anatomy and artifacts. Rao teaches:

"Different classifiers are trained for different artifacts and/or anatomy. The same or different classifiers may be trained for different imaging situations, such a classifier for detecting a grating lobe artifact for imaging the heart and a different classifier for detecting a grating lobe artifact for imaging the liver. Configuration specific classifiers may be trained, such as one classifier for use with one transducer and corresponding frequency and another classifier for use with a different transducer and corresponding frequency. The same or different classifiers may be trained to detect different objects, such as one classifier for detecting an artifact and another classifier for detecting anatomy." Rao [0039] (emphasis added)

"Referring again to FIG. 1, the processor outputs an indication of the locations in act 16. An image is generated from the ultrasound data. The detected locations for a given anatomy or artifact are indicated in the image, such as by color, line graphic, probability map, or intensity. The image enhancement of act 20 is to be performed after receiving user confirmation in act 18 of the accuracy of the detection of act 14. The ultrasound system processes the image after revealing the output of the machine-learnt classifier. The knowledge-based image processing is made transparent to the user. The user is informed what is to be altered and why before the processor automatically alters large parts of an image. The user may edit the detection, such as changing the classification of one or more locations." Rao [0040] (emphasis added)

"In act 20, the processor, filter, ultrasound system, or combinations thereof enhances the detected ultrasound data as a function of the classification of the locations. In segmentation, detected data is removed or isolated. For image enhancement, the background, other tissue, fluid, other object, or other representation by the ultrasound data remains. Instead, the ultrasound data is altered to make some locations more visible relative to other locations, to fill in gaps, to enlarge, to reduce, to separate, and/or otherwise image process the already detected data." Rao [0041] (emphasis added)

The classifiers detect locations of anatomical features and artifact(s) (i.e., the boundaries of a specific region) within the ultrasound data and reconstruct an ultrasound image. Once the locations are known, further image processing (e.g., edge detection, spatial filtering, temporal filtering, etc.) may be performed in a location-specific manner. The image processing algorithm implemented by Rao is the application of a "specific parameter" within a boundary of regions detected by machine-learnt classifiers. Furthermore, Rao clearly teaches the selective application of different image processing techniques to distinct regions within an ultrasound image. As disclosed by Rao:

"In another embodiment, amplitude scaling (e.g., reduction) or greater temporal persistence is applied to locations classified as spontaneous contrast. Different scaling (e.g., lesser), no scaling, or different persistence is applied to other locations. As seen in FIG. 4, the spontaneous contrast in the left ventrical may be distracting to the user. After amplitude scaling or temporal persistence, the spontaneous contrast is suppressed for the left ventricle as shown in FIG. 5. The spontaneous contrast in the other heart chamber is or is not also suppressed. Knowledge-based detection allows for distinguishing between locations for the same artifact. In the example of FIG. 5, the suppression is only for the spontaneous contrast in the left ventricle and not for other spontaneous contrast or the heart wall tissue." Rao [0046] (emphasis added)

In the figures provided by Rao, amplitude scaling for reduction (i.e., a specific parameter) may be applied to locations classified as spontaneous contrast within an identified boundary, while lesser scaling (i.e., a predetermined parameter) is applied to other locations outside the locations classified as spontaneous contrast (Rao [0025-0052], [fig. 1-6]).

Examiner respectfully notes that Applicant's arguments only address independent claim 1, and no remarks regarding the subject matter of the dependent claims have been presented. In view of the new rejection of independent claim 1, the rejections of dependent claims 2-7 are modified to address Applicant's amendments and are sustained. New claim 8 is rejected as discussed below. The rejections of claim(s) 1-8 under 35 U.S.C. § 103 are maintained.

Claim Objections

Claim 8 is objected to because of the following informalities: claim 8 appears to contain a typographical error, and the punctuation (i.e., "parameter for adjusting brightness/") at the conclusion of the limitations should be corrected. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim(s) 1-8 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim(s) 2-8 are also rejected at least by virtue of dependency upon a rejected base claim.

Claim 1 recites the limitation "a specific region which is a portion of an ultrasound tomographic image corresponding to the ultrasound data obtained by transmitting and receiving the ultrasound waves to and from the subject". There is insufficient antecedent basis for this limitation in the claim. In particular, it is not clear what "the ultrasound data" is specifically referring to. In one interpretation, "the ultrasound data" may refer to "the learning ultrasound data" (which is obtained by "transmitting and receiving ultrasound waves") as recited in the preamble; in another, distinct interpretation, "the ultrasound data" may refer to the "input ultrasound data", also in the preamble; and in a third interpretation, it may refer to new and distinct "ultrasound data". It is suggested to amend the claim language to clearly define what "the ultrasound data" refers to. For the purposes of examination, the broadest reasonable interpretation of the claim language, including the interpretations discussed above, is applied to the limitation.

Claim 2 recites the limitation "using an image forming model in which the specific parameter is used as a weight parameter, to form the ultrasound tomographic image of the inside of the specific region, which has an adjusted image quality", which renders the claim indefinite. The clause "to form the ultrasound tomographic image of the inside of the specific region" is unclear because the "specific region" is a portion of the "tomographic image", yet the claim language indicates the "specific region" is the entire "tomographic image".
Furthermore, it is unclear whether the "adjusted image quality" refers to the "inside of the specific region" (as recited in claim 1), or to the entirety of the "ultrasound tomographic image", or whether the "image forming model"/"weight parameter" are being modified. It is suggested to amend claim 2 to correct the grammar and to clearly point out the subject matter being claimed.

Claim 3 recites the limitation "smoothly changing an image quality at a boundary between the inside of the specific region and the outside of the specific region", which renders the claim indefinite. There is insufficient antecedent basis for this limitation in the claim. It is unclear whether the "boundary" in claim 3 refers to the "boundary" in claim 1 or to a new, distinct "boundary" of the "specific region". It is suggested to amend the claim language to recite --smoothly changing an image quality at [[a]] the boundary between the inside of the specific region and the outside of the specific region-- if this is the Applicant's intended interpretation. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsymbalenko et al. (US20180160981A1, 2018-06-14; hereinafter "Tsymbalenko") in view of Rao et al. (US20160350620A1, 2016-12-01; hereinafter "Rao").

Regarding claim 1, Tsymbalenko teaches an ultrasound diagnostic apparatus comprising one or more processors ("A method, comprising: in an medical imaging device" [clm 1]; "The medical imaging system 110 comprise suitable hardware, software, or a combination thereof, for supporting medical imaging—that is enabling obtaining data used in generating and/or rendering images during medical imaging exams. [...] the medical imaging system 110 may be an ultrasound system, configured for generating and/or rendering ultrasound images." [0030]; "Each computing system 120 may comprise suitable circuitry for processing, storing, and/or communication data." [0033]; [0029-0085], [fig. 1-4])

and a learning model that has been trained to output an image adjustment parameter suitable for learning ultrasound data, from input ultrasound data, by using learning data including (a) the learning ultrasound data obtained by transmitting and receiving ultrasound waves to and from a subject and (b) a training image adjustment parameter to be used in image quality adjustment processing on the learning ultrasound data ("automatically identifying, during medical imaging based on a particular imaging technique, an anatomical feature in an area being imaged; automatically determining, based on said identifying of said anatomical feature, one or more imaging parameters or settings for optimizing imaging quality for said identified anatomical feature;" [clm 1]; "identifying said anatomical feature and determining said one or more imaging parameters or settings using a deep learning and/or neural network based model." [clm 2]; "The deep learning and/or neural network based model may be pre-trained. In this regard, the pre-training may comprise determining (and storing) for each anatomical feature identification data (e.g., unique parameters and/or attributes that can be compared against during scans), and optimization data (the imaging parameters and/or settings resulting in optimal image quality, or data that enable determining such parameters and/or settings in real-time)" [0037]; identification data for comparing attributes of anatomical features during a scan (i.e., learning ultrasound data) and optimization data (i.e., a training image adjustment parameter) are used for pre-training the deep learning/neural network model, wherein the trained model identifies anatomical features and also outputs imaging parameters for optimal images of the identified features [0029-0085], [fig. 1-4]),

the ultrasound diagnostic apparatus including the processors performing a method comprising: determining a specific region which is a portion of an ultrasound tomographic image corresponding to the ultrasound data obtained by transmitting and receiving the ultrasound waves to and from the subject ("automatically identifying, during medical imaging based on a particular imaging technique, an anatomical feature in an area being imaged;" [clm 1]; "the medical imaging system 110 may be configured to use a deep learning and/or neural network based model to automatically (that is, without any or with very minimal input by the user) identify anatomical features in scanned areas" [0037]; "Acquired ultrasound scan data may be processed in real-time—e.g., during a B-mode scanning session, as the B-mode echo signals are received." [0056]; "particular anatomical feature (e.g., liver, kidney, etc.) may be automatically identified (e.g., using deep learning or neural network based model) in an area being imaged." [0066]; the neural network model automatically identifies anatomical features (i.e., a specific region) within the ultrasound data (e.g., B-mode images) during an ultrasound imaging scan [0029-0085], [fig. 1-4]);

determining a specific parameter, which is an image quality adjustment parameter for the specific region, based on output of the learning model when the ultrasound data of an inside of a boundary of the specific region is input to the trained learning model ("automatically determining, based on said identifying of said anatomical feature, one or more imaging parameters or settings for optimizing imaging quality for said identified anatomical feature;" [clm 1]; "image parameters optimization and settings may be continually set automatically to optimize imaging (e.g., ensure constant optimal image quality) [...] Once the anatomical features are recognized the system can automatically switch and use the imaging parameters and/or setting most optimal for obtain the best image quality scan for recognized anatomical features." [0036]; the deep learning/neural network model automatically selects imaging parameters to optimize image quality based on the identified anatomical feature [0029-0085], [fig. 1-4]);

and executing image quality adjustment processing of adjusting an image quality of the inside of the boundary of the specific region of the ultrasound tomographic image on the ultrasound data of the inside of the specific region based on the specific parameter, and executing image quality adjustment processing of adjusting an image quality of an outside of the boundary of the specific region of the ultrasound tomographic image on the ultrasound data of the outside of the specific region by using a predetermined parameter for image quality adjustment different from the specific parameter ("configuring imaging functions in said medical imaging device based on said determined one or more imaging parameters or settings; acquiring based on said configuration, medical imaging dataset corresponding to said area being imaged; and" [clm 1]; "said deep learning and/or neural network based model is pre-trained for selecting, for each recognized anatomical feature, one or more imaging optimization parameters or settings" [clm 4]; different imaging optimization parameters may be applied respectively to each anatomical feature (e.g., regions within and outside of a boundary) recognized within ultrasound image(s) [0029-0085], [fig. 1-4]).

Although Tsymbalenko teaches all the limitations of claim 1 as shown above, if, in an interpretation, one argues (or interprets differently) that Tsymbalenko does not teach the boundary of the specific region, the following reference may be applied to supplement the teachings above.
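For orientation only (editor's note, not part of the prosecution record), the claimed inside/outside-the-boundary processing can be sketched as applying a model-derived parameter within a region mask and a fixed predetermined parameter everywhere else. The model stand-in `predict_specific_gain` and all parameter values are hypothetical and invented for the example.

```python
import numpy as np

PREDETERMINED_GAIN = 1.0  # hypothetical fixed parameter for outside the region

def predict_specific_gain(region_data: np.ndarray) -> float:
    """Hypothetical stand-in for the trained learning model: map the
    ultrasound data inside the region boundary to an adjustment gain."""
    return 1.0 + float(region_data.mean())

def adjust_image_quality(data: np.ndarray, region_mask: np.ndarray) -> np.ndarray:
    """Apply the model-derived 'specific parameter' inside the region
    boundary and the predetermined parameter outside it."""
    specific_gain = predict_specific_gain(data[region_mask])
    out = np.empty_like(data, dtype=float)
    out[region_mask] = data[region_mask] * specific_gain
    out[~region_mask] = data[~region_mask] * PREDETERMINED_GAIN
    return out

frame = np.random.default_rng(1).random((32, 32))  # stand-in tomographic frame
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True  # toy "specific region" boundary
adjusted = adjust_image_quality(frame, mask)
```

The sketch highlights the distinction the claim turns on: the inside-the-boundary parameter is a function of the model's output for that region's data, while the outside-the-boundary parameter is predetermined and independent of the model.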
In the same field of endeavor, Rao teaches an ultrasound diagnostic apparatus comprising one or more processors and a learning model that has been trained ("In a non-transitory computer readable storage medium having stored therein data representing instructions executable by a programmed processor for image enhancement in medical diagnostic ultrasound," [clm 11]; "wherein classifying comprises classifying with a machine-learnt classifier, the machine-learnt classifier learnt from the database." [clm 13]; "A processor learns the classification based on the ground truth and features extracted from the images of the knowledge base. Through one or more various machine-learning processes, the classifier is trained to detect locations of the anatomy and/or artifact." [0028]; "The system 10 is a medical diagnostic ultrasound imaging system." [0055]; [0053-0066], [fig. 1-6]),

the ultrasound diagnostic apparatus including the processors performing a method comprising: determining a specific region which is a portion of an ultrasound tomographic image corresponding to the ultrasound data obtained by transmitting and receiving the ultrasound waves to and from the subject ("receiving, from an ultrasound scanner, detected ultrasound data representing a patient; classifying locations represented by the detected ultrasound data, the classifying being with a knowledge base;" [clm 11]; "The ultrasound system receives the detected ultrasound data, such as receiving B-mode data, as an output from the detector. A processor of the scanner or a remote processor not part of the scanner receives the detected ultrasound data for knowledge-based detection." [0023]; "A processor classifies different locations represented by the ultrasound data as belonging to a class or not. Other classifiers than binary classifiers may be used, such as classifying each location as being a member of one of three or more classes (e.g., (1) background, artifact, and anatomy; (2) fluid, bone, and tissue; or (3) organ of interest, other organ, and non-determinative)." [0024]; "The processor of the ultrasound system or other processor applies the classifier to the received ultrasound data to determine locations of the anatomy and/or artifact." [0033]; the processor receives ultrasound data and extracts input features from the ultrasound data, wherein the processor may subsequently apply a model to determine locations of anatomical structures and/or artifacts (i.e., a specific region) within the ultrasound data [0018-0040, 0053-0066], [fig. 1-6]);

determining a specific parameter, which is an image quality adjustment parameter for the specific region, based on output of the learning model when the ultrasound data of an inside of a boundary of the specific region is input to the trained learning model ("The machine-learnt classifier is learnt from the database of annotated images. The annotated or ground-truth labeled images are used as training data. A processor learns the classification based on the ground truth and features extracted from the images of the knowledge base." [0028]; "The machine learning provides a matrix or other output. The matrix is derived from analysis of the database of training data with known results. [...] The matrix associates input features with outcomes, providing a model for classifying." [0031]; "Different classifiers are trained for different artifacts and/or anatomy. The same or different classifiers may be trained for different imaging situations, [...] The same or different classifiers may be trained to detect different objects, such as one classifier for detecting an artifact and another classifier for detecting anatomy." [0039]; "In act 20, the processor, filter, ultrasound system, or combinations thereof enhances the detected ultrasound data as a function of the classification of the locations" [0041]; "Edge detection, spatial filtering, temporal filtering, transformation, or other image process may vary as a function of location of the anatomy and/or artifact identified by the classifier." [0043]; machine-learnt classifiers are trained using a knowledge base to distinguish locations of artifacts from anatomical structures in input ultrasound data, wherein the processor enhances the detected ultrasound data by applying an image processing algorithm (i.e., a specific parameter) to locations of the artifact (e.g., high pass filter, amplitude scaling, etc.) and does not apply the image processing algorithm to the other locations in the ultrasound data [0025-0066], [fig. 1-6]);

and executing image quality adjustment processing of adjusting an image quality of the inside of the boundary of the specific region of the ultrasound tomographic image on the ultrasound data of the inside of the specific region based on the specific parameter, and executing image quality adjustment processing of adjusting an image quality of an outside of the boundary of the specific region of the ultrasound tomographic image on the ultrasound data of the outside of the specific region by using a predetermined parameter for image quality adjustment different from the specific parameter ("enhancing the detected ultrasound data as a function of the classification of the locations, the enhancing changing amplitude of the ultrasound data for some of the locations relative to other locations while maintaining representation of all of the locations; and generating an image from the enhanced ultrasound data." [clm 11]; "Using the knowledge base detection identifies the locations to which the different image processing (e.g., low pass filtering along and high pass filtering perpendicular to an edge) is applied. Edges of artifacts or other anatomy are not enhanced as much, in the same way, or at all. [...] Similarly, image processing to reduce or remove artifacts is applied just to artifact locations rather than all locations with similar statistical properties as the artifact." [0042]; "the enhancement is through removal or reduction of an artifact. For example, a high pass filter or amplitude scaling (e.g., reduction by an amount or %) is applied to locations associated with an artifact and not applied to or applied differently to other locations represented by the ultrasound data." [0044]; the processor enhances locations of the ultrasound data to suppress artifacts (i.e., within a specific region) through selective application of a high pass filter and amplitude scaling (i.e., a specific parameter) and does not apply the same enhancement (i.e., a predetermined parameter) to locations that have not been classified as artifacts (i.e., outside a specific region) [0025-0066], [fig. 1-6; see fig. 4-5 reproduced below]).

[Rao Figs. 4 and 5, reproduced.] Amplitude scaling (e.g., reduction) or greater temporal persistence is applied to locations classified as spontaneous contrast, while different scaling (e.g., lesser), no scaling, or different persistence is applied to other locations (Rao [fig. 4, 5]).

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the method performed by an ultrasound diagnostic apparatus and processors including a learning model trained using learning ultrasound data and a training image adjustment parameter, as taught by Tsymbalenko, by executing image quality adjustment processing relative to the boundary of a specific region as taught by Rao. Various issues may exist with conventional approaches for optimizing medical imaging. In this regard, conventional systems and methods, if any existed, for optimizing image quality during medical imaging operations can be inefficient and/or ineffective (Tsymbalenko [0004]). In many cases, imaging artifacts have the same or similar properties to anatomic structures or tissue and hence are not detectable and effectively segmentable by image processing algorithms.
Techniques that are more complex rely on standard image analysis, such as gradient, variance, or simply amplitude-based image segmentation to process selectively various parts of the image. While these techniques work on simpler images, images with artifacts (e.g., clutter, side lobes, grating lobes, or rib shadows) or other anatomy with similar properties may not respond as desired to the complex techniques (Rao [0003]). Utilizing knowledge-based detection algorithms to detect artifacts, such as grating lobes, side lobes from rib reflections, rib shadows, other shadows, or spontaneous contrast, provides focused image enhancement. This knowledge significantly improves artifact detection and minimization (Rao [0017]). The suppression of the knowledge base detected artifact improves aesthetics and/or diagnostic utility of the image (Rao [0046]). Regarding claim 2, Tsymbalenko and Rao teach the ultrasound diagnostic apparatus according to claim 1, Rao further teaches wherein the method performed by the ultrasound diagnostic apparatus further comprises: using an image forming model in which the specific parameter is used as a weight parameter, to form the ultrasound tomographic image of the inside of the specific region, which has an adjusted image quality (“The machine learning provides a matrix or other output. The matrix is derived from analysis of the database of training data with known results. The machine-learning algorithm determines the relationship of different inputs to the result. […] The matrix associates input features with outcomes, providing a model for classifying. Machine training provides relationships using one or more input variables with outcome, allowing for verification or creation of interrelationships not easily performed manually.” [0031]; “The model represents a probability of a location represented by ultrasound data being of the class or not. 
This probability is a likelihood of membership in the class.” [0032]; “spatially adaptive filtering is applied. One or more characteristics of the filter adapt to the classification of the locations. The spatial filter kernel (e.g., size and/or weights) or type of filtering varies depending on the classification of the location being filtered. Anatomy or a border of anatomy may be enhanced for more or less filtering as compared to other locations.” [0043]; The machine learning matrix provides a model for classifying ultrasound data and determining enhancement, wherein the enhancement may apply a spatial filter kernel (i.e., weights) to artifacts (i.e., inside the specific region) and anatomical structures to reconstruct ultrasound images [0025-0066], [fig. 1-6], [see claim 1 rejection]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the method performed by an ultrasound diagnostic apparatus and processors including a learning model trained using learning ultrasound data and a training image adjustment parameter, as taught by Tsymbalenko, by executing image quality adjustment processing relative to the boundary of a specific region as taught by Rao. In many cases, imaging artifacts have the same or similar properties to anatomic structures or tissue and hence are not detectable and effectively segmentable by image processing algorithms. Techniques that are more complex rely on standard image analysis, such as gradient, variance, or simply amplitude-based image segmentation to process selectively various parts of the image. While these techniques work on simpler images, images with artifacts (e.g., clutter, side lobes, grating lobes, or rib shadows) or other anatomy with similar properties may not respond as desired to the complex techniques (Rao [0003]). 
Utilizing knowledge-based detection algorithms to detect artifacts, such as grating lobes, side lobes from rib reflections, rib shadows, other shadows, or spontaneous contrast, provides focused image enhancement. This knowledge significantly improves artifact detection and minimization (Rao [0017]). Regarding claim 3, Tsymbalenko and Rao teach the ultrasound diagnostic apparatus according to claim 1, Rao further teaching wherein the method performed by the ultrasound diagnostic apparatus further comprises: executing image quality smoothing processing of smoothly changing an image quality at a boundary between the inside of the specific region and the outside of the specific region (“locating gradients as an indication of an edge for filtering differently along the edge relies on statistics that certain gradients are edges. […] Using the knowledge base detection identifies the locations to which the different image processing (e.g., low pass filtering along and high pass filtering perpendicular to an edge) is applied. Edges of artifacts or other anatomy are not enhanced as much, in the same way, or at all.” [0042]; Low pass filtering is smoothing processing and may be applied to edges of the artifacts and anatomy [0025-0066], [fig. 1-6], [see claim 1 rejection]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the method performed by an ultrasound diagnostic apparatus and processors including a learning model trained using learning ultrasound data and a training image adjustment parameter, as taught by Tsymbalenko, by executing image quality adjustment processing relative to the boundary of a specific region as taught by Rao. In many cases, imaging artifacts have the same or similar properties to anatomic structures or tissue and hence are not detectable and effectively segmentable by image processing algorithms. 
Techniques that are more complex rely on standard image analysis, such as gradient, variance, or simply amplitude-based image segmentation to process selectively various parts of the image. While these techniques work on simpler images, images with artifacts (e.g., clutter, side lobes, grating lobes, or rib shadows) or other anatomy with similar properties may not respond as desired to the complex techniques (Rao [0003]). Utilizing knowledge-based detection algorithms to detect artifacts, such as grating lobes, side lobes from rib reflections, rib shadows, other shadows, or spontaneous contrast, provides focused image enhancement. This knowledge significantly improves artifact detection and minimization (Rao [0017]). Regarding claim 4, Tsymbalenko and Rao teach the ultrasound diagnostic apparatus according to claim 1, wherein the method performed by the ultrasound diagnostic apparatus further comprises: displaying (i) a first ultrasound tomographic image in which the image quality adjustment processing based on the specific parameter is executed on the inside of the specific region and the image quality adjustment processing based on the predetermined parameter is executed on the outside of the specific region, and (ii) a second ultrasound tomographic image in which the image quality adjustment processing based on the predetermined parameter is executed on an entire region (“enhancing the detected ultrasound data as a function of the classification of the locations, the enhancing changing amplitude of the ultrasound data for some of the locations relative to other locations while maintaining representation of all of the locations; and generating an image from the enhanced ultrasound data.” [clm 11]; “Using the knowledge base detection identifies the locations to which the different image processing (e.g., low pass filtering along and high pass filtering perpendicular to an edge) is applied. 
Edges of artifacts or other anatomy are not enhanced as much, in the same way, or at all. […] Similarly, image processing to reduce or remove artifacts is applied just to artifact locations rather than all locations with similar statistical properties as the artifact.” [0042]; “the enhancement is through removal or reduction of an artifact. For example, a high pass filter or amplitude scaling (e.g., reduction by an amount or %) is applied to locations associated with an artifact and not applied to or applied differently to other locations represented by the ultrasound data.” [0044]; “The display 60 is configured to display an image representing the scanned region of the patient, such as a B-mode image. The image is generated from the image processed detected data. After the adaptive image processing is applied, an image is generated and displayed on the display 60. The image represents the scan region, but has intensities or estimated values that are altered to enhance or suppress based on the detected locations.” [0065]; A display may present B-mode images based on ultrasound data before enhancement (i.e., second ultrasound tomographic image), and B-mode images which have been enhanced (i.e., first ultrasound tomographic image) to suppress artifacts through selective application of high pass filter and/or amplitude scaling [0025-0066], [fig. 1-6], [see claim 1 rejection]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the method performed by an ultrasound diagnostic apparatus and processors including a learning model trained using learning ultrasound data and a training image adjustment parameter, as taught by Tsymbalenko, by executing image quality adjustment processing relative to the boundary of a specific region as taught by Rao. 
In many cases, imaging artifacts have the same or similar properties to anatomic structures or tissue and hence are not detectable and effectively segmentable by image processing algorithms. Techniques that are more complex rely on standard image analysis, such as gradient, variance, or simply amplitude-based image segmentation to process selectively various parts of the image. While these techniques work on simpler images, images with artifacts (e.g., clutter, side lobes, grating lobes, or rib shadows) or other anatomy with similar properties may not respond as desired to the complex techniques (Rao [0003]). Utilizing knowledge-based detection algorithms to detect artifacts, such as grating lobes, side lobes from rib reflections, rib shadows, other shadows, or spontaneous contrast, provides focused image enhancement. This knowledge significantly improves artifact detection and minimization (Rao [0017]). Regarding claim 5, Tsymbalenko and Rao teach the ultrasound diagnostic apparatus according to claim 1, Tsymbalenko further teaching wherein the specific region is determined in accordance with an indication from a user of the ultrasound diagnostic apparatus (“wherein said deep learning and/or neural network based model is generated and/or updated based on feedback data from one or more users, said feedback data relating to recognizing and/or optimizing imaging for particular anatomical features” [clm 5]; “comprising configuring handling of user input and/or output, during said medical imaging, based on said identifying of said anatomical feature.” [clm 8]; “the automatic identification and optimizing imaging of anatomical features functions (e.g., the deep learning and/or neural network based models) may be generated, updated, and revised based on data obtained from particular users.” [0044]; Automatic identification and optimizing imaging of anatomical features may be continually updated and revised based on user feedback [0029-0085], [fig. 1-4]). 
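Stepping outside the record for a moment: the region-selective processing the rejection attributes to Rao, in which a "specific parameter" is applied inside a classified region and a different "predetermined parameter" is applied outside it, can be sketched in a few lines. This is purely an illustrative sketch and not drawn from either reference; the function and parameter names (`enhance_by_region`, `scale`, `sigma`) are hypothetical.

```python
import numpy as np
from scipy import ndimage

def enhance_by_region(frame, artifact_mask, scale=0.5, sigma=1.5):
    """Apply a 'specific parameter' (amplitude reduction) inside the
    classified region, and a different 'predetermined parameter'
    (mild Gaussian smoothing) everywhere else.

    frame         -- 2-D array of detected ultrasound amplitudes
    artifact_mask -- boolean array, True where a location was
                     classified as artifact (the "specific region")
    """
    inside = frame * scale                           # amplitude scaling
    outside = ndimage.gaussian_filter(frame, sigma)  # default processing
    return np.where(artifact_mask, inside, outside)

# A constant 8x8 frame with the upper-left quadrant flagged as artifact:
frame = np.full((8, 8), 100.0)
mask = np.zeros((8, 8), dtype=bool)
mask[:4, :4] = True
out = enhance_by_region(frame, mask)
```

On a constant frame the smoothed "outside" values remain at 100 while the flagged quadrant is halved to 50, mirroring how Rao's figures apply different scaling per classification while maintaining representation of all locations.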
Regarding claim 6, Tsymbalenko and Rao teach the ultrasound diagnostic apparatus according to claim 1, Tsymbalenko further teaching wherein the image quality adjustment processing of adjusting the image quality of the inside of the specific region of the ultrasound tomographic image is performed in accordance with an image quality adjustment policy corresponding to an indication from a user of the ultrasound diagnostic apparatus (“said deep learning and/or neural network based model is generated and/or updated based on feedback data from one or more users, said feedback data relating to recognizing and/or optimizing imaging for particular anatomical features.” [clm 5]; “the automatic identification and optimizing imaging of anatomical features function (e.g., the deep learning and/or neural network based model) may be continually updated and revised. For example, based on user feedback (including, e.g., any adjustments to settings selected based on the current model), the deep learning and/or neural network based model may be updated—e.g., to ensure that the imaging settings are optimal for the user.” [0040]; “a number users may be selected (e.g., being deemed “experts”) and data obtained from those users (e.g., generated images and/or datasets corresponding thereto) may be used in generating the automatic identification and optimizing imaging of anatomical features functions. Thus, data used in recognizing anatomical features and/or optimizing imaging of these anatomical features may be, for example, set and/or updated based on the data used by those users in obtaining their images (e.g., settings and/or parameters used by those users when anatomical features are focused on, on and/or when images of the anatomical features are deemed of optimal quality).” [0044]; User feedback from ‘experts’ (i.e., an image quality adjustment policy) may be selectively used by the model to refine the identification and image optimization parameters [0029-0085], [fig. 1-4]). 
Regarding claim 7, Tsymbalenko and Rao teach the ultrasound diagnostic apparatus according to claim 6, Tsymbalenko further teaching wherein the method performed by the ultrasound diagnostic apparatus further comprises: storing a combination of user identification information for identifying the user of the ultrasound diagnostic apparatus, a part of the subject included in the specific region decided in accordance with an indication from the user, and the image quality adjustment policy indicated by the user for the specific region, in a memory (“The deep learning and/or neural network based model is configured and/or updated based on feedback data from one or more users, the feedback data relating to recognizing and/or optimizing imaging for particular anatomical features.” [0019]; “Each computing system 120 may comprise suitable circuitry for processing, storing, and/or communication data.” [0033]; “the pre-training may comprise determining (and storing) for each anatomical feature identification data (e.g., unique parameters and/or attributes that can be compared against during scans), and optimization data” [0037]; “the models may be stored into suitable machine readable media (e.g., flash card, etc.), which are then used to load the models into the medical imaging systems 110 (on-site, such as by users of the systems or authorized personnel)” [0043]; The optimized imaging of anatomical features corresponding to a particular user may be stored in memory and used during an imaging procedure conducted by the particular user [0029-0085], [fig. 
1-4], [see claim 1, 6 rejections]), and wherein the image quality adjustment processing of adjusting the image quality of the inside of the specific region of the ultrasound tomographic image is performed in accordance with the user identification information of the user and the image quality adjustment policy associated with the part and stored in the memory, and the specific region including the part is determined in accordance with the indication from the user (“configuring handling of user input and/or output, during said medical imaging, based on said identifying of said anatomical feature.” [clm 8]; “The user input may be directed to controlling display of images, selecting settings, specifying user preferences, requesting feedback, etc.” [0032]; “once anatomical features are identified, and images optimizing scanning of such anatomical features are obtained and rendered, medical imaging systems (e.g., the medical imaging system 110) may be configured to enable the user to interact with the system based on the particular anatomical features.” [0039]; “different versions of the automatic identification and optimizing imaging of anatomical features function, corresponding to different users, may be maintained and used during imaging.” [0040]; User input may be received during an imaging procedure based on identified anatomical features to optimize imaging using stored model (as well as identification and optimization data) [0029-0085], [fig. 1-4], [see claim 1, 6 rejections]). 
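Again stepping outside the record: the claim-7 limitation of storing a combination of user identification information, anatomical part, and adjustment policy in a memory amounts to a keyed lookup table. A minimal sketch follows; all names (`PolicyStore`, `AdjustmentPolicy`, the parameter fields) are hypothetical illustrations, not terms from the claims or the cited references.

```python
from dataclasses import dataclass

@dataclass
class AdjustmentPolicy:
    # Hypothetical image-quality parameters, for illustration only.
    dynamic_range_db: float
    gain: float

class PolicyStore:
    """Maps (user id, anatomical part) -> image quality adjustment
    policy, mirroring the claimed storing of the combination in a
    memory and its retrieval during a later imaging procedure."""
    def __init__(self):
        self._policies = {}

    def save(self, user_id, part, policy):
        self._policies[(user_id, part)] = policy

    def lookup(self, user_id, part):
        # Returns None when no policy was stored for this combination.
        return self._policies.get((user_id, part))

store = PolicyStore()
store.save("u01", "carotid", AdjustmentPolicy(dynamic_range_db=60.0, gain=1.2))
policy = store.lookup("u01", "carotid")
```

The adjustment step for the specific region would then consult `lookup` with the current user's identification information and the indicated part, falling back to a predetermined parameter when the lookup returns `None`.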
Regarding claim 8, Tsymbalenko and Rao teach the ultrasound diagnostic apparatus according to claim 1, Rao further teaching wherein the image quality adjustment parameter is at least one selected from an adjustment parameter for adjusting a dynamic range, a parameter for adjusting contrast, a parameter related to a time gain compensation (TGC), a parameter related to filter processing for reducing an artifact, or a parameter for adjusting brightness/ (“wherein classifying comprises detecting an artifact, and wherein enhancing comprises reducing the amplitude for the location of the artifact.” [clm 15]; “The processor of the ultrasound system or other processor applies the classifier to the received ultrasound data to determine locations of the anatomy and/or artifact.” [0033]; “image processing to reduce or remove artifacts is applied just to artifact locations rather than all locations with similar statistical properties as the artifact.” [0042]; [0025-0066], [fig. 1-6], [see claim 1 rejection]). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the method performed by an ultrasound diagnostic apparatus and processors including a learning model trained using learning ultrasound data and a training image adjustment parameter, as taught by Tsymbalenko, by executing image quality adjustment processing relative to the boundary of a specific region as taught by Rao. In many cases, imaging artifacts have the same or similar properties to anatomic structures or tissue and hence are not detectable and effectively segmentable by image processing algorithms. Techniques that are more complex rely on standard image analysis, such as gradient, variance, or simply amplitude-based image segmentation to process selectively various parts of the image. 
While these techniques work on simpler images, images with artifacts (e.g., clutter, side lobes, grating lobes, or rib shadows) or other anatomy with similar properties may not respond as desired to the complex techniques (Rao [0003]). Utilizing knowledge-based detection algorithms to detect artifacts, such as grating lobes, side lobes from rib reflections, rib shadows, other shadows, or spontaneous contrast, provides focused image enhancement. This knowledge significantly improves artifact detection and minimization (Rao [0017]). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to James F. McDonald III whose telephone number is (571)272-7296. The examiner can normally be reached M-F; 8AM-6PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Koharski, can be reached at 571-272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. JAMES FRANKLIN MCDONALD III Examiner Art Unit 3797 /CHRISTOPHER KOHARSKI/Supervisory Patent Examiner, Art Unit 3797

Prosecution Timeline

Apr 24, 2024
Application Filed
Oct 03, 2025
Non-Final Rejection — §102, §103, §112
Jan 05, 2026
Response Filed
Mar 17, 2026
Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588809
Systems and Methods for Determining Tissue Inflammation Levels of the Eye from Blood Vessel Characteristics
2y 5m to grant Granted Mar 31, 2026
Patent 12582378
METHODS AND SYSTEMS FOR AN INVASIVE DEPLOYABLE DEVICE USING A SHAPE MEMORY MATERIAL TO RECONFIGURE TRANSDUCER ELEMENTS IN RESPONSE TO STIMULI
2y 5m to grant Granted Mar 24, 2026
Patent 12564388
Phase Change Insert for Ultrasound Imaging Probe
2y 5m to grant Granted Mar 03, 2026
Patent 12544003
SYSTEM, METHOD, AND APPARATUS FOR TEMPERATURE ASYMMETRY MEASUREMENT OF BODY PARTS
2y 5m to grant Granted Feb 10, 2026
Patent 12527542
ULTRASOUND IMAGING APPARATUS FOR BIPLANE IMAGING AND CONTROL METHOD THEREOF
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
55%
Grant Probability
99%
With Interview (+44.3%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 76 resolved cases by this examiner. Grant probability derived from career allow rate.
