DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in priority Application No. JP 2022-105151, filed on 06/29/2022.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/28/2025 was filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Specification
The disclosure is objected to because of the following informalities:
[0016]: As written it reads “An 11th aspect of the technology of the present disclosure is the diagnostic assistance apparatus according to the 10th aspect, wherein the first image mode is THI mode, CH mode, or CHI mode, and the second image mode is Doppler mode or elastography mode”. However, this is the first indication of the acronyms “THI”, “CH” and “CHI” in the specification, therefore, the terms should be spelled out to provide clarity.
[0020]: As written it reads “A 15th aspect of the technology of the present disclosure is the diagnostic assistance apparatus according to any one of the first to 14th aspects, wherein the processor is configured to detect a specific area from the first ultrasound image by an AI approach”. However, this is the first indication of the acronym “AI” in the specification, therefore, the term should be spelled out to provide clarity.
[0149]: As written it reads “Also, according to the fifth modification, the multiple parameters 94 are stored in the NVM 66, and the operating mode is switches between detection mode and non-detection mode by the control unit 62C according to the multiple parameters 94 in the NVM 66”. However, to correct the typo, “switches” should be “switched”.
[0177]: As written it reads “A second example is to use processor in which the functions of the entire system, including multiple hardware resources to execute the diagnostic assistance processing, are realized by a single IC chip, as typified by an SoC”. However, this is the first indication of the acronym “IC” in the specification, therefore, the term should be spelled out to provide clarity.
Appropriate correction is required.
Claim Objections
Claims 11 and 15 are objected to because of the following informalities:
Regarding claim 11, the claim reads “wherein; the first image mode is THI mode, CH mode or CHI mode, and the second image mode is Doppler mode or elastography mode”. However, this is the first indication of the acronyms “THI”, “CH” and “CHI” in the claims, therefore, the terms should be spelled out to provide clarity.
Regarding claim 15, the claim reads “wherein the processor is configured to detect a specific area from the first ultrasound image by an AI approach”. However, this is the first indication of the acronym “AI” in the claims, therefore, the term should be spelled out to provide clarity. Furthermore, in order to maintain proper antecedent basis, the examiner believes that “a specific area” should be “the specific area”.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: ultrasound module in claims 1, 12-13, and 18-21.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. That being said, the ultrasound module is described in the specification when it states “wherein the ultrasound module is an ultrasound endoscope” [0023]; “The ultrasound endoscope 12 is an example of an "ultrasound module" and an "ultrasound endoscope" according to the technology of the present disclosure” [0031]. Therefore, the examiner is interpreting the ultrasound module to be an ultrasound endoscope.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-3, 5, 10, 12-13, 15-16, and 18-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kitamura et al. (US 2020/0126223 A1; hereinafter “Kitamura”) in view of Satoh et al. (US 2020/0000439 A1; hereinafter “Satoh”).
Regarding claims 1, 20 and 21, Kitamura teaches “A diagnostic assistance apparatus comprising:” (Claim 1) (“FIG. 1 is a block diagram illustrating a configuration example of an endoscope diagnosis support system 1 according to a first embodiment of the present invention” [0022]; “The endoscope diagnosis support system 1 includes a light source drive unit 11, an endoscope 21, a video processor 31, a display unit 41, and the operation unit X” [0023]. Therefore, the endoscope diagnosis support system 1 in FIG. 1 of Kitamura represents a diagnostic assistance apparatus.);
“a processor, wherein the processor is configured to:” (Claim 1) (“The video processor 31 performs control on the endoscope 21, generates an endoscope image A based on the image pickup signal inputted from the endoscope 21, and generates a display image B based on the endoscope image A. The video processor 31 includes the control unit 32, an anomaly detection unit 33, and an image generation unit 34” [0029]. Therefore, the diagnostic assistance apparatus includes a processor (i.e. video processor 31).);
“A diagnostic assistance method comprising:” (Claim 20) (“An endoscope diagnosis support method according to an embodiment includes performing detection of an anomaly candidate area from an endoscope image obtained by performing image pickup of an inside of a subject to obtain a detection result, and generating a display image in which an indicator indicating detection of the anomaly candidate area is arranged in a periphery portion of the endoscope image in accordance with the detection result” [0006]; “FIG. 3 is a flowchart illustrating an example of the display image generation processing of the endoscope diagnosis support system 1 according to the first embodiment of the present invention” [0046]. Therefore, FIG. 3 of Kitamura represents a diagnostic assistance method.);
“A non-transitory computer-readable storage medium storing a program executable by a computer to execute a process comprising:” (Claim 21) (“A non-transitory storage medium according to an embodiment stores a computer-readable program. The program causes a computer to execute code for performing detection of an anomaly candidate area from an endoscope image obtained by performing image pickup of an inside of a subject to obtain a detection result, and code for generating a display image in which an indicator indicating detection of the anomaly candidate area is arranged in a periphery portion of the endoscope image in accordance with the detection result” [0005]. Therefore, Kitamura discloses a non-transitory computer-readable storage medium storing a program executable by a computer to execute a process comprising multiple steps.);
“acquire(ing) a first […] image which is generated by an […] module and which shows a target area of observation;” (Claims 1, 20 and 21) (“FIG. 2 is a diagram illustrating a configuration example of the display image B of the display unit 41 of the endoscope diagnosis support system 1 according to the first embodiment of the present invention. In the example of FIG. 2, entire shapes of endoscope images A1 and A2 are octagonal, and a lumen in a living body is schematically represented by curved lines” [0041]. In order for the display image B to be displayed on the display unit 41, an imaging module has to be present to generate the display image B. Therefore, the method carried out by the diagnostic assistance system involves acquiring a first image (see FIG. 2) which is generated by a module (i.e. the endoscope 21) and which shows a target area of observation (i.e. lumen).); and
“switch(ing) between a first operating mode and a second operating mode according to whether or not reference information referenced to diagnose the target area of observation is combined with the first […] image, according to whether or not the first […] image is an image obtained in an auxiliary image mode, which is an image mode other than a main image mode, or according to a set value that stipulates the image quality of the first […] image” (Claims 1, 20 and 21) (“More specifically, when the observation mode is a normal light mode, the light source drive unit 11 emits the normal light from the illumination portion 23, and when the observation mode is a narrow band light observation mode, the light source drive unit 11 emits the narrow band light from the illumination portion 23” [0024]; “The control unit 32 transmits a control signal to the light source drive unit 11 and drives the illumination portion 23 in accordance with the observation mode. The observation mode is set by an instruction input of a user via the operation unit X” [0030]; “The anomaly detection unit 33 is connected to the image generation unit 34. When the anomaly candidate area L is not detected, the image generation unit 34 outputs a detection result indicating non-detection of the anomaly candidate area L to the image generation unit 34. When the anomaly candidate area L is detected, the anomaly detection unit 33 outputs a detection result indicating a detection position and a size of the anomaly candidate area L to the image generation unit 34” [0031]; “The image generation unit 34 sets non-display of the detection position image D1 in the main area B1 such that the user's attention to the endoscope image A1 is not disturbed when the observation mode is switched from a normal observation mode to a narrow band light mode” [0093].
In this case, the normal observation mode represents the main image mode and the narrow band light mode represents an auxiliary image mode which is different from the main image mode. As shown in FIG. 2, the images B1 and B2 do not contain reference information referenced to diagnose the target area of observation (i.e. lumen containing a lesion L). Additionally, as shown in FIG. 5, the images B1 and B2 include detection position images D1 and D2 which represent reference information referenced to diagnose the target area of observation (i.e. a lesion L within the lumen, [0041]). Therefore, the method carried out by the diagnostic assistance system involves switching between a first operating mode and a second operating mode (i.e. normal observation mode and narrow band light mode, see [0093]) according to whether or not reference information referenced to diagnose the target area of observation (i.e. detection position images D1 and D2) is combined with the first image (i.e. FIG. 2 = reference information not combined with the first image; FIG. 5 = reference information is combined with the first image), according to whether or not the first image is an image obtained in an auxiliary image mode (i.e. narrow band light mode, for example, where the mode shows the anomaly detection area L, see FIG. 5), which is an image mode other than a main image mode (i.e. normal observation mode, where the mode does not show the anomaly detection area L, see FIG. 2).);
“wherein: the first operating mode is an operating mode that performs detection of a specific area from the first […] image on the basis of detection assistance information created using a second […] image obtained in the main image mode” (Claims 1, 20 and 21) (“The processor performs detection of an anomaly candidate area from an endoscope image obtained by performing image pickup of an inside of a subject to obtain a detection result, and generates a display image in which an indicator indicating detection of the anomaly candidate area is arranged in a periphery portion of the endoscope image in accordance with the detection result” [0004]; “For example, the anomaly detection unit 33 is configured by a computing apparatus using an artificial intelligence technology such as machine learning” [0032]; “More specifically, the anomaly detection unit 33 is configured by a computing apparatus that learns extraction of a feature value by a deep learning technology” [0033]. Therefore, since the anomaly detection unit 33 is configured to detect that the anomaly candidate area L is present and to output a detection result indicating the position and size of the anomaly candidate area L (See FIG. 5), the anomaly detection unit 33 operates in a first operating mode which is an operating mode that performs detection of a specific area (i.e. anomaly candidate area) from the first image on the basis of detection assistance information (i.e. artificial intelligence technology) created using a second image obtained in the main image mode (i.e. normal observation mode).); and
“the second operating mode is an operating mode that performs detection of the specific area but does not output a detection result, or that does not perform detection of the specific area” (Claims 1 and 20) (See [0031] above. In this case, since the anomaly detection unit 33 detects when the anomaly candidate area L is not present and outputs a detection result indicating non-detection of the anomaly candidate area L (i.e. images B1 and B2 in FIG. 2), the anomaly detection unit 33 operates in a second operating mode, wherein the second operating mode is an operating mode that performs detection of the specific area (i.e. anomaly candidate area L) but does not output a detection result.).
Although Kitamura includes an endoscope 21 which is configured such that image pickup of an inside of the subject can be performed (see [0025]), Kitamura does not explicitly teach that the endoscope acquires a first “ultrasound image which is generated by an ultrasound module” or “a second ultrasound image”.
Satoh is within the same field of endeavor as the claimed invention because it involves an ultrasound diagnostic apparatus which includes an ultrasound endoscope (See ultrasound endoscope 12 in FIG. 1).
Satoh teaches that the endoscope acquires a first “ultrasound image which is generated by an ultrasound module” and “a second ultrasound image” (“The ultrasound endoscope 12 is an endoscope, and as shown in FIG. 1, has the insertion part 22 to be inserted into the body cavity of a patient and an operation unit 24 operated by an operator (user), such as a doctor or a technician” [0040]; “By the function of the ultrasound endoscope 12, the operator can acquire an endoscope image of the inner wall of the body cavity of the patient and an ultrasound image of the observation target part” [0041]. Therefore, Satoh discloses that the endoscope is an ultrasound endoscope which acquires a first ultrasound image (i.e. of the inner wall of a body cavity for example, see [0041]) which is generated by an ultrasound module (i.e. ultrasound endoscope, see Applicant’s paragraph [0023]) and a second ultrasound image (i.e. ultrasound image of the observation target part, for example, see [0041]).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus, method and non-transitory computer-readable storage medium of Kitamura such that it acquires first and second ultrasound images with an ultrasound endoscope as disclosed in Satoh in order to assess characteristics of the tissues being examined. An ultrasound endoscope is one of a finite number of devices which can be used to obtain ultrasound images to diagnose a target area with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus, method and non-transitory computer-readable storage medium of Kitamura such that it acquires first and second ultrasound images with an ultrasound endoscope as disclosed in Satoh would yield the predictable result of acquiring ultrasound images with which to assess a patient.
Regarding claim 2, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, and Kitamura further teaches “wherein: the first operating mode is an operating mode used in a case in which the reference information is not combined with the first […] image, and the second operating mode is an operating mode used in a case in which the reference information is combined with the first […] image” (See Kitamura: [0031] as discussed with respect to claim 1 above. As shown in FIG. 2, the images B1 and B2 do not include reference information to diagnose the target area of observation (i.e. anomaly detection candidate area L). Therefore, the images shown in FIG. 2 represent images which were obtained in the first operating mode which is an operating mode used in a case in which the reference information is not combined with the first image. Furthermore, as shown in FIG. 5, the images B1 and B2 include reference information (i.e. boxes D1 and D2, for example) to diagnose the target area of observation (i.e. anomaly detection candidate area L). Therefore, the images shown in FIG. 5 represent images which were obtained in the second operating mode which is an operating mode used in a case in which the reference information (i.e. anomaly detection candidate area L) is combined with the first image.).
Satoh teaches that the first image is an “ultrasound” image (See Satoh: [0041] as discussed with respect to claim 1 above.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura such that it acquires ultrasound images with an ultrasound endoscope as disclosed in Satoh in order to assess characteristics of the tissues being examined. An ultrasound endoscope is one of a finite number of devices which can be used to obtain ultrasound images to diagnose a target area with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura such that it acquires ultrasound images with an ultrasound endoscope as disclosed in Satoh would yield the predictable result of acquiring ultrasound images with which to assess a patient.
Regarding claim 3, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, and Kitamura further teaches “wherein: the first operating mode is an operating mode used in a case in which the first […] image is not an image obtained in the auxiliary image mode, and the second operating mode is an operating mode used in a case in which the first […] image is an image obtained in the auxiliary image mode” (See Kitamura: [0031]. According to the Applicant’s specification, “Doppler mode is an example of an “auxiliary image mode, which is an image mode other than the main image mode”” [0041] and “Doppler mode is an image mode in which hemodynamics identified using the Doppler effect are superimposed onto a B-mode image as color information” [0042]. Therefore, as understood by the examiner an image produced using an “auxiliary image mode” is an image in which additional information is superimposed thereon.
As shown in FIG. 2, the images B1 and B2 do not include reference information to diagnose the target area of observation (i.e. anomaly detection candidate area L). Therefore, the images shown in FIG. 2 represent images which were obtained in the first operating mode which is an operating mode used in a case in which the first image is not an image obtained in the auxiliary image mode. Furthermore, as shown in FIG. 5, the images B1 and B2 include reference information (i.e. boxes D1 and D2, for example) to diagnose the target area of observation (i.e. anomaly detection candidate area L). Therefore, the images shown in FIG. 5 represent images which were obtained in the second operating mode which is an operating mode used in a case in which the first image is an image obtained in the auxiliary image mode (i.e. to show the anomaly detection candidate area L).).
Satoh teaches that the first image is an “ultrasound” image (See Satoh: [0041] as discussed with respect to claim 1 above.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura such that it acquires ultrasound images with an ultrasound endoscope as disclosed in Satoh in order to assess characteristics of the tissues being examined. An ultrasound endoscope is one of a finite number of devices which can be used to obtain ultrasound images to diagnose a target area with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura such that it acquires ultrasound images with an ultrasound endoscope as disclosed in Satoh would yield the predictable result of acquiring ultrasound images with which to assess a patient.
Regarding claim 5, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, and Satoh further teaches “wherein: the reference information includes color information expressing characteristics in the target area of observation as colors” (“The CF mode is a mode in which average blood flow speed, flow fluctuation, strength of flow signal, flow power, and the like are mapped to various colors and displayed so as to be superimposed on a B mode image” [0050]; “The CF mode image generation unit 166 generates an image showing blood flow information in a predetermined direction. […] Thereafter, the CF mode image generation unit 166 generates a CF mode image (image signal) as a color image on which the blood flow information is superimposed by including the image signal in the B mode image signal” [0107]. Therefore, the CF mode image generation unit 166 generates reference information which includes color information expressing characteristics in the target area (i.e. blood flow information) of observation as colors.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura such that the reference information includes color information expressing characteristics in the target area of observation as colors as disclosed in Satoh in order to allow the user to easily observe characteristics of the tissues included within an image. Superimposing color information relating to blood flow through a vessel is one of a finite number of techniques which can be used to allow a user to assess characteristics of a patient’s vasculature with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura such that the reference information includes color information expressing characteristics in the target area of observation as colors as disclosed in Satoh would yield the predictable result of allowing a user to easily observe characteristics of a tissue, such as blood flow through vessels, when assessing a patient.
Regarding claim 10, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, and Satoh further teaches “wherein: the auxiliary image mode is a first image mode that generates an ultrasound image using a high-frequency component included in a reflected wave obtained in a case in which an ultrasonic wave is emitted toward the target area of observation and then reflected by the target area of observation, or a second image mode that combines a B-mode image with a separate image” (See Satoh: [0050] and [0107] as discussed with respect to claim 5 above. In this case, since the CF mode image is a color image in which the blood flow information is superimposed on the B-mode image, this superimposed image represents an image which is produced with the auxiliary image mode being a second image mode that combines a B-mode image with a separate image (i.e. CF mode image).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura such that the auxiliary mode is a second image mode that combines a B-mode image with a separate image as disclosed in Satoh in order to allow a user to easily observe characteristics of the tissues included within an image. Superimposing color information relating to blood flow through a vessel onto a B-mode image is one of a finite number of techniques which can be used to allow a user to assess characteristics of a patient’s vasculature with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura such that the auxiliary mode is a second image mode that combines a B-mode image with a separate image as disclosed in Satoh would yield the predictable result of allowing a user to easily observe characteristics of a tissue, such as blood flow through vessels, when assessing a patient.
Regarding claim 12, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, and Satoh further teaches “wherein: the set value includes a frequency parameter for adjusting the frequency of an ultrasonic wave emitted from the ultrasound module, a depth parameter for adjusting depth represented in the first ultrasound image, a brightness parameter for adjusting the brightness of the first ultrasound image, a dynamic range parameter for adjusting the dynamic range of the first ultrasound image, and/or a magnification parameter for adjusting the scale of a digital zoom for the first ultrasound image” (“The operator can set various control parameters with the console 100 at the time of performing the ultrasound diagnosis. As the control parameters, for example, selection results of a live mode and a freeze mode, set values of the display depth (depth), selection results of an ultrasound image generation mode, and the like can be mentioned” [0049]; “As described above, the number of driving target transducers and the driving frequency are changed according to the type of the ultrasound image forming mode” [0110]. Therefore, since the operator can set various control parameters such as display depth (i.e. depth parameter for adjusting depth represented in the first ultrasound image) and the driving frequency can be changed according to the type of the ultrasound image forming mode (i.e. a frequency parameter for adjusting the frequency of an ultrasonic wave emitted from the ultrasound module (i.e. ultrasound endoscope 12)), the set value includes a frequency parameter for adjusting the frequency of an ultrasonic wave emitted from the ultrasound module and a depth parameter for adjusting depth represented in the first ultrasound image.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura such that the set value includes a frequency parameter for adjusting the frequency of an ultrasonic wave emitted from the ultrasound module and a depth parameter for adjusting depth represented in the first ultrasound image as disclosed in Satoh in order to allow a user to easily observe characteristics of the tissues included within an image. Setting the frequency parameter depending on the image mode and setting the depth parameter are two of a finite number of techniques which can be used to assess characteristics of a patient’s tissue with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura such that the set value includes a frequency parameter for adjusting the frequency of an ultrasonic wave emitted from the ultrasound module and a depth parameter for adjusting depth represented in the first ultrasound image as disclosed in Satoh would yield the predictable result of allowing a user to adjust imaging parameters so as to easily observe characteristics of a tissue when assessing a patient.
Regarding claim 13, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, and Satoh further teaches “wherein: the ultrasound module has the set value, and the processor is configured to acquire the set value from the ultrasound module” (See [0110] as discussed in claim 12 above and “For example, in order to generate an image for one frame (B mode image) in the B mode, all of the N ultrasound transducers 48 are used as driving target transducers. However, among the N ultrasound transducers 48, the driving frequency in the ultrasound transducer 48 on the end side is higher than that in the ultrasound transducer 48 in the vicinity of the center” [0108]; “In the PW mode, since the ultrasound transducer 48 corresponding to the direction designated by the operator is used as a driving target transducer, the driving frequency of the ultrasound transducer 48 is higher than the driving frequency of the other ultrasound transducers 48. […] Therefore, in the CF mode, the driving frequency in the ultrasound transducer 48 on the end side is higher than that in the ultrasound transducer 48 in the vicinity of the center, and the driving frequency of the ultrasound transducer 48 corresponding to the direction designated by the operator is higher than the driving frequency of the other ultrasound transducers 48” [0109]. Therefore, since the number of driving target transducers and the driving frequency thereof are changed according to the type of ultrasound image forming mode (i.e. B-mode, PW mode, CF mode), the ultrasound module (i.e. ultrasound endoscope 12) has a set value and the processor is configured to acquire the set value from the ultrasound module, such that an image can be generated according to the ultrasound image forming mode (i.e. B-mode, PW mode, CF mode).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura such that the ultrasound module has the set value and the processor is configured to acquire the set value from the ultrasound module as disclosed in Satoh in order to allow a user to easily observe characteristics of the tissues included within an image produced with a desired imaging mode (i.e. B-mode, PW mode, CF mode). Acquiring the set value (i.e. driving frequency corresponding to the image forming mode) from the ultrasound module (i.e. ultrasound endoscope) is one of a finite number of techniques which can be used to acquire imaging data with which a patient can be assessed with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura such that the ultrasound module has the set value and the processor is configured to acquire the set value from the ultrasound module as disclosed in Satoh would yield the predictable result of enabling the processor to produce an image in a desired imaging mode (i.e. B-mode, PW mode, CF mode) such that a patient can be assessed.
Regarding claim 15, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, and Kitamura further teaches “wherein: the processor is configured to detect a specific area from the first […] image by an AI approach” (See [0031], [0032] and [0033] as discussed with respect to claim 1 above. As shown in FIG. 1, the anomaly detection unit 33 is included within the video processor 31. Since the anomaly detection unit 33 is configured to perform detection of an anomaly candidate area L (i.e. specific area) using an artificial intelligence technology such as machine learning, the processor is configured to detect a specific area from the first image by an AI approach.).
Satoh teaches that the image is an “ultrasound” image (See [0041] as discussed with respect to claim 1 above.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura such that it acquires ultrasound images with an ultrasound endoscope as disclosed in Satoh in order to assess characteristics of the tissues being examined. An ultrasound endoscope is one of a finite number of devices which can be used to obtain ultrasound images to diagnose a target area with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura such that it acquires ultrasound images with an ultrasound endoscope as disclosed in Satoh would yield the predictable result of acquiring ultrasound images with which to assess a patient.
Regarding claim 16, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, and Kitamura further teaches “wherein: the detection assistance information is a trained model obtained by training a model on supervisory data that includes the second […] image” (See Kitamura: [0032] and [0033] as discussed with respect to claim 1 above. Therefore, the detection assistance information is a trained model (i.e. artificial intelligence technology) obtained by training a model on supervisory data that includes the second image.).
Satoh teaches that the image is an “ultrasound” image (See [0041] as discussed with respect to claim 1 above.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura such that it acquires ultrasound images with an ultrasound endoscope as disclosed in Satoh in order to assess characteristics of the tissues being examined. An ultrasound endoscope is one of a finite number of devices which can be used to obtain ultrasound images to diagnose a target area with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura such that it acquires ultrasound images with an ultrasound endoscope as disclosed in Satoh would yield the predictable result of acquiring ultrasound images with which to assess a patient.
Regarding claim 18, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, and Satoh further teaches “wherein: the ultrasound module is an ultrasound endoscope” (See Satoh: [0040] and [0041] as discussed with respect to claim 1 above. Therefore, the ultrasound module is an ultrasound endoscope (i.e. ultrasound endoscope 12).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura such that it acquires first and second ultrasound images with an ultrasound endoscope as disclosed in Satoh in order to assess characteristics of the tissues being examined. An ultrasound endoscope is one of a finite number of devices which can be used to obtain ultrasound images to diagnose a target area with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura such that it acquires first and second ultrasound images with an ultrasound endoscope as disclosed in Satoh would yield the predictable result of acquiring ultrasound images with which to assess a patient.
Regarding claim 19, Kitamura teaches “An […] endoscope comprising: the diagnostic assistance apparatus according to claim 1; and an ultrasound endoscope main body to which the ultrasound module is connected” (See [0004], [0022], [0023], [0024], [0029], [0030], [0031], [0032], [0033], [0041], [0093] as discussed in claim 1 above, and [0025] as discussed in claim 18 above. Therefore, Kitamura discloses an ultrasound endoscope comprising the diagnostic assistance apparatus according to claim 1 (see FIG. 1); and an ultrasound endoscope main body (i.e. endoscope 21) to which the ultrasound module is connected.).
Kitamura does not teach that the endoscope is an “ultrasound endoscope” or “an ultrasound endoscope main body to which the ultrasound module is connected”.
Satoh teaches “ultrasound endoscope” and “an ultrasound endoscope main body to which the ultrasound module is connected” (See Satoh: [0040] and [0041] as discussed with respect to claim 1 above. In this case, the operation unit 24 represents an ultrasound endoscope main body to which the ultrasound module (i.e. ultrasound endoscope 12 with ultrasound observation portion 36, see FIG. 2) is connected. Therefore, Satoh discloses an ultrasound endoscope (i.e. 12) with an ultrasound endoscope main body to which the ultrasound module (i.e. ultrasound endoscope 12 with ultrasound observation portion 36, see FIG. 2) is connected.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the endoscope of Kitamura such that it includes an ultrasound endoscope and an ultrasound endoscope main body to which the ultrasound module is connected (i.e. ultrasound probe 12 with ultrasound observation portion 36) as disclosed in Satoh in order to assess characteristics of the tissues being examined. An ultrasound endoscope is one of a finite number of devices which can be used to obtain ultrasound images to diagnose a target area with a reasonable expectation of success. Thus, modifying the endoscope of Kitamura such that it includes an ultrasound endoscope and an ultrasound endoscope main body to which the ultrasound module is connected (i.e. ultrasound probe 12 with ultrasound observation portion 36) as disclosed in Satoh would yield the predictable result of acquiring ultrasound images with which to assess a patient.
Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kitamura et al. US 2020/0126223 A1 “Kitamura” and further in view of Satoh et al. US 2020/0000439 A1 “Satoh” as applied to claim 1 above, and further in view of Rothberg et al. US 2017/0360415 A1 “Rothberg”.
Regarding claim 4, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, however the combination does not teach “wherein: the first operating mode is an operating mode used in a case in which the set value is within a specified range, and the second operating mode is an operating mode used in a case in which the set value is not within the specified range”.
Rothberg is within a related field of endeavor to the claimed invention because it involves a multi-modal ultrasound probe configured to operate in a plurality of operating modes (see [Abstract]).
Rothberg teaches “wherein: the first operating mode is an operating mode used in a case in which the set value is within a specified range, and the second operating mode is an operating mode used in a case in which the set value is not within the specified range” (“Some embodiments are directed to an ultrasound device including an ultrasound probe, including a semiconductor die, and a plurality of ultrasonic transducers integrated on the semiconductor die, the plurality of ultrasonic transducers configured to operate in a first mode associated with a first frequency range and a second mode associated with a second frequency range, wherein the first frequency range is at least partially non-overlapping with the second frequency range” [0005]; “wherein the control circuitry is configured to: responsive to receiving an indication of the first operating mode, obtain a first configuration profile specifying a first set of parameter values associated with the first operating mode; and control, using the first configuration profile, the ultrasound device to operate in the first operating mode, and responsive to receiving an indication of the second operating mode, obtain a second configuration profile specifying a second set of parameter values associated with the second operating mode, the second set of parameter values being different from the first set of parameter values; and control, using the second configuration profile, the ultrasound device to operate in the second operating mode” [0008]; “In some embodiments, the first frequency range may include frequencies in the range of 1-5 MHz. For example, the first frequency range may be contained entirely within a range of 1-5 MHz (e.g., within a range of 2-5 MHz, 1-4 MHz, 1-3 MHz, 2-5 MHz, and/or 3-5 MHz). 
[…] detect ultrasound signals having frequencies in the first frequency range, ultrasound signals detected by the ultrasonic transducers may be used to form an image of a subject up to target depths within the subject, the target depths being in a range of 10-25 cm” [0035]; “In some embodiments, the second frequency range may be contained entirely within a range of 5-12 MHz (e.g., within a range of 5-10 MHz, 7-12 MHz, 5-7 MHz, 5-9 MHz, 6-8 MHz, 7-10 MHz, and/or 6-9 MHz). […] ultrasound signals detected by the ultrasonic transducers may be used to form an image of a subject up to target depths within the subject, the target depths being in a range of 1-10 cm “ [0036].
Therefore, in the first operating mode (i.e. first mode), the plurality of ultrasonic transducers are configured to operate according to a first set of parameter values (i.e. set value) and in a first frequency range in order to obtain image data at a target depth of 10-25 cm (See [0035]). Alternatively, in the second operating mode (i.e. second mode), the plurality of ultrasonic transducers are configured to operate according to a second set of parameter values (i.e. which are different from the first set of parameter values) and in a second frequency range which is at least partially non-overlapping with the first frequency range (i.e. the second frequency range is not within the first frequency range, i.e. the specified range) in order to obtain image data at a target depth of 1-10 cm (see [0036]). Since the first and second operating modes operate according to different parameter values and different frequency ranges, the first operating mode is an operating mode used in a case in which the set value is within a specified range (i.e. first frequency range) and the second operating mode is an operating mode used in a case in which the set value is not within the specified range (i.e. first frequency range).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura in view of Satoh such that the first operating mode is an operating mode used in a case in which the set value is within a specified range (i.e. a first frequency range) and the second operating mode is an operating mode used in a case in which the set value is not within the specified range (i.e. is within a second frequency range that is non-overlapping with the first frequency range) as disclosed in Rothberg in order to allow a user to obtain images at different depths when assessing a patient.
Claim(s) 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kitamura et al. US 2020/0126223 A1 “Kitamura” and further in view of Satoh et al. US 2020/0000439 A1 “Satoh” as applied to claim 1 above, and further in view of Min et al. US 2016/0012572 A1 “Min”.
Regarding claim 6, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 5 above, however, the combination does not teach “wherein: the color information includes a plurality of chromatic pixels, the first operating mode is an operating mode used in a case in which the number of chromatic pixels having a chroma exceeding a first threshold from among the plurality of chromatic pixels is less than a second threshold, and the second operating mode is an operating mode used in a case in which the number is equal to or greater than the second threshold”.
Min is within a related field of endeavor to the claimed invention because it involves an electronic apparatus capable of providing clearer image quality to a user by expanding a dynamic range for each color based on information on a received input image and a color representation range of the electronic apparatus (see [0008]).
Min teaches “wherein: the color information includes a plurality of chromatic pixels, the first operating mode is an operating mode used in a case in which the number of chromatic pixels having a chroma exceeding a first threshold from among the plurality of chromatic pixels is less than a second threshold, and the second operating mode is an operating mode used in a case in which the number is equal to or greater than the second threshold” (“The determining of the representative colors may include determining at least one of color values and saturation values for the plurality of pixels; determining whether or not the plurality of pixels are a chromatic color or a neutral color based on at least one of the color values and the saturation values; and determining a representative color of each of pixels determined as the chromatic color among the plurality of pixels” [0010]; “In the determining of whether or not the plurality of pixels are the chromatic color or the neutral color, pixels of which the color values are not defined or the saturation values are a preset value or less may be determined as the neutral color, and pixels of which the saturation values exceed the preset value may be determined as the chromatic color” [0011]; “The representative color determiner 120 may determine whether or not the plurality of pixels are a chromatic color or a neutral color, based on the color value H and the saturation value Chroma. Specifically, the representative color determiner 120 may determine pixels of which the color value H is not defined (i.e., in a case of R=G=B) or the saturation value is a preset value or less, as the neutral color, and determine pixels of which the saturation value is the preset value or more, as the chromatic color” [0052].
Therefore, the color information includes a plurality of chromatic pixels (i.e. pixels with saturation values that exceed a preset value). In this case, when pixels have a color value that is not defined or a saturation value at a preset value or less, these pixels represent pixels obtained in the first operating mode, which is an operating mode used in a case in which the number of chromatic pixels having a chroma exceeding a first threshold from among the plurality of chromatic pixels is less than a second threshold (i.e. preset value, see [0011]). Conversely, when pixels have a color value that is defined or a saturation value which exceeds the preset value, these pixels represent pixels obtained in the second operating mode, which is an operating mode used in a case in which the number [of chromatic pixels] is equal to or greater than the second threshold (i.e. preset value, see [0011]).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura in view of Satoh, such that the color information includes a plurality of chromatic pixels, the first operating mode is an operating mode used in a case in which the number of chromatic pixels having a chroma exceeding a first threshold from among the plurality of chromatic pixels is less than a second threshold, and the second operating mode is an operating mode used in a case in which the number is equal to or greater than the second threshold as disclosed in Min in order to allow a user to obtain images with clearer image quality by expanding a dynamic range for each colored pixel (see Min: [0008]) and thus allow for features of an image to be more easily examined. Identifying whether image pixels are chromatic by comparing saturation values to a preset value is one of a finite number of techniques which can be used to assess pixels such that the dynamic range for each color can be expanded to provide a clearer image with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura in view of Satoh, such that the color information includes a plurality of chromatic pixels, the first operating mode is an operating mode used in a case in which the number of chromatic pixels having a chroma exceeding a first threshold from among the plurality of chromatic pixels is less than a second threshold, and the second operating mode is an operating mode used in a case in which the number is equal to or greater than the second threshold as disclosed in Min would yield the predictable result of improving image quality (i.e. by expanding dynamic range for each colored pixel) to allow a user to better understand features included within an image.
Claim(s) 7, 11, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kitamura et al. US 2020/0126223 A1 “Kitamura” and further in view of Satoh et al. US 2020/0000439 A1 “Satoh” as applied to claim 10 above, and further in view of Yao et al. US 2015/0087980 A1 “Yao”.
Regarding claim 7, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, however, the combination does not teach “wherein: the reference information includes text information assisting with observation of the target area of observation”.
Yao is within a related field of endeavor to the claimed invention because it involves an ultrasound diagnostic apparatus that operates in a mode to perform a Contrast Harmonic Imaging (CHI) process or a Tissue Harmonic Imaging (THI) process (see [0058]) as well as a Doppler mode (see [0047]) and elastography mode (see [0050]).
Yao teaches “wherein: the reference information includes text information assisting with observation of the target area of observation” (“Further, the image generating unit 14 synthesizes text information of various parameters, scale graduations, body marks, and the like with the ultrasound image data” [0041]. Therefore, since the image generating unit 14 synthesizes text information with the ultrasound image data, the image generating unit 14 generates reference information which includes text information assisting with observation of the target area of observation.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura in view of Satoh such that the reference information includes text information assisting with the observation of the target area of observation as disclosed in Yao in order to provide a user with more information about an image. Providing text information is one of a finite number of techniques which can be used to allow a user to better understand structures included within an image with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura in view of Satoh such that the reference information includes text information assisting with the observation of the target area of observation as disclosed in Yao would yield the predictable result of providing a user with more information about structures included within an image when assessing a patient.
Regarding claim 11, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 10 above, however the combination does not teach “wherein: the first image mode is THI mode, CH mode, or CHI mode, and the second image mode is Doppler mode or elastography mode”.
Yao teaches “wherein: the first image mode is THI mode, CH mode, or CHI mode, and the second image mode is Doppler mode or elastography mode” (“The harmonic component image data is ultrasound image data generated in a mode to perform a Contrast Harmonic Imaging (CHI) process or a Tissue Harmonic Imaging (THI) process. The B-mode processing unit 12 shown in FIG. 1 is able to change the frequency bandwidth to be realized in a picture, by changing detected frequencies. More specifically, the B-mode processing unit 12 is able to separate B-mode data of harmonic components, which are non-linear signals, from the B-mode data” [0058]; “Further, when operating in the THI mode, for example, the B-mode processing unit 12 separates B-mode data of the second harmonic component from the B-mode data corresponding to one frame obtained by scanning the subject P. Further, the image generating unit 14 generates harmonic component image data in which side-lobe effects are reduced, from the B-mode data of the second harmonic component” [0060]; “Further, when operating in the color Doppler mode, the image generating unit 14 generates the color Doppler image data from the bloodstream Doppler data. When operating in the power Doppler mode, the image generating unit 14 generates the power Doppler image data from the bloodstream Doppler data. Further, when operating in the tissue Doppler mode, the image generating unit 14 generates the tissue Doppler image data from the tissue Doppler data” [0047]; “Further, other than the ultrasound image data described above, the image generating unit 14 is capable of generating various types (modes) of ultrasound image data. 
For example, when operating in an elastography mode to realize elastography imaging, the image generating unit 14 generates image data (elasticity image data) in which hardness (elastic modulus) of a tissue is expressed in an image, from the reflected-wave data (the reflected-wave signals) on which a signal processing process has been performed by the Doppler processing unit 13” [0050].
Therefore, since the image generating unit 14 generates harmonic component image data (i.e. generated in a mode to perform a Contrast Harmonic Imaging (CHI) process or a Tissue Harmonic Imaging (THI) process), Doppler image data (i.e. generated in a mode to perform color Doppler, power Doppler, or tissue Doppler) or elasticity image data (i.e. elastography imaging mode) depending on the mode in which the ultrasound probe is operating, the first image mode is THI mode, CH mode, or CHI mode, and the second image mode is Doppler mode or elastography mode.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura in view of Satoh such that the first image mode is THI mode, CH mode, or CHI mode, and the second image mode is Doppler mode or elastography mode as disclosed in Yao in order to allow a user to obtain multiple image types when assessing the status of a patient. THI mode, CHI mode, Doppler mode (i.e. color Doppler, power Doppler, or tissue Doppler) and elastography mode are four of a finite number of operating modes that can be used to obtain ultrasound image data with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura in view of Satoh such that the first image mode is THI mode, CH mode, or CHI mode, and the second image mode is Doppler mode or elastography mode as disclosed in Yao would yield the predictable result of enabling a user to obtain imaging data in multiple modes and therefore distinguish different characteristics of the tissue present within a patient.
Regarding claim 17, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above, however, the combination does not teach “wherein: the processor is configured to differentiate a frequency at which to detect the specific area from the first ultrasound image, a precision with which to detect the specific area from the first ultrasound image, and/or a target to be detected as the specific area from the first ultrasound image according to the reference information, the auxiliary image mode, and/or the set value”.
Yao teaches “wherein: the processor is configured to differentiate a frequency at which to detect the specific area from the first ultrasound image, a precision with which to detect the specific area from the first ultrasound image, and/or a target to be detected as the specific area from the first ultrasound image according to the reference information, the auxiliary image mode, and/or the set value” (“Further, when operating in the color Doppler mode, the image generating unit 14 generates the color Doppler image data from the bloodstream Doppler data. When operating in the power Doppler mode, the image generating unit 14 generates the power Doppler image data from the bloodstream Doppler data. Further, when operating in the tissue Doppler mode, the image generating unit 14 generates the tissue Doppler image data from the tissue Doppler data“ [0047]; “First, the controlling unit 17 sets frequency with which it is possible to acquire data in each of the modes that are set in the setting information, i.e., sets a frame rate ("fr") for each of the modes. For example, the controlling unit 17 sets a frame rate in the B-mode to "fr: A", sets a frame rate in the color Doppler mode to "fr: B", and sets a frame rate in the enhanced mode to "fr: C"” [0084].
According to the Applicant’s specification “Doppler mode is an example of an "auxiliary image mode, which is an image mode other than the main image mode" according to the technology of the present disclosure” [0041]; “Doppler mode is an image mode in which hemodynamics identified using the Doppler effect are superimposed onto a B-mode image as color information” [0042]. Therefore, Doppler mode is an auxiliary image mode.
Therefore, since the controlling unit 17 sets the frequency with which it is possible to acquire data (i.e. said data including the specific area) in each of the modes (i.e. including color Doppler, power Doppler and/or tissue Doppler) that are utilized by the ultrasound probe 1, the processor is configured to differentiate a frequency at which to detect the specific area from the first ultrasound image according to the auxiliary image mode (i.e. Doppler mode).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura in view of Satoh such that the processor is configured to differentiate a frequency at which to detect the specific area from the first ultrasound image according to the auxiliary image mode as disclosed in Yao in order to acquire ultrasound image data which a user can view to assess a patient. When a Doppler imaging mode is selected, the ultrasound probe operates at a specific frequency to obtain Doppler ultrasound images. Thus, modifying the diagnostic assistance apparatus of Kitamura in view of Satoh such that the processor is configured to differentiate a frequency at which to detect the specific area from the first ultrasound image according to the auxiliary image mode as disclosed in Yao would yield the predictable result of activating an ultrasound probe such that it operates at a specific frequency corresponding to an auxiliary image mode (i.e. Doppler imaging mode) in order to acquire image data.
Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kitamura et al. US 2020/0126223 A1 “Kitamura” and further in view of Satoh et al. US 2020/0000439 A1 “Satoh” as applied to claim 1 above, and further in view of Matsumoto US 2020/0178846 A1 “Matsumoto”.
Regarding claim 8, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above; however, the combination does not teach “wherein: the reference information includes a measurement line used for measurement in the target area of observation”.
Matsumoto is within a related field of endeavor to the claimed invention because it involves an acoustic wave measurement apparatus which displays a measurement line K3 (see [Abstract] and FIG. 12).
Matsumoto teaches “wherein: the reference information includes a measurement line used for measurement in the target area of observation” (“Specifically, as shown in FIG. 12, the measurement unit 31 determines two longest distance points on the boundary surrounding the gallbladder region on the ultrasound image Ib as measurement points K1 and K2, and measures the length of a line K3 connecting the measurement points K1 and K2 to each other. In FIG. 12, the length of the line K3 is 56 mm. The measurement unit 31 displays the measurement points K1 and K2 and the line K3 and the measurement target and the length as a measurement result R, that is, gallbladder: 56 mm, on the image display unit 14 through the image processing and storage unit 26 and the display control unit 27” [0083]. As shown in FIG. 12, the line K3 (i.e. measurement line) is displayed on the ultrasound image. Therefore, the display unit 14 displays reference information which includes a measurement line (i.e. line K3) used for measurement in the target area of observation (i.e. the gallbladder, for example).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura in view of Satoh such that the reference information includes a measurement line (i.e. line K3) used for measurement in the target area of observation as disclosed in Matsumoto in order to allow a user to easily distinguish the position within an ultrasound image at which measurements are performed. Providing a measurement line is one of a finite number of techniques which can be used to provide a user with information about a structure within an image with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura in view of Satoh such that the reference information includes a measurement line (i.e. line K3) used for measurement in the target area of observation as disclosed in Matsumoto would yield the predictable result of allowing a user to view a measurement of a structure within an ultrasound image.
Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kitamura et al. US 2020/0126223 A1 “Kitamura” and further in view of Satoh et al. US 2020/0000439 A1 “Satoh” as applied to claim 1 above, and further in view of Akkus et al. US 2021/0219944 A1 “Akkus”.
Regarding claim 9, Kitamura in view of Satoh discloses all features of the claimed invention as discussed with respect to claim 1 above; however, the combination does not teach “wherein: the reference information includes treatment assistance information assisting with treatment using fine-needle aspiration”.
Akkus is within a related field of endeavor to the claimed invention because it involves systems, methods and media for automatically localizing and diagnosing thyroid nodules, the system including an ultrasound machine (see [Abstract]).
Akkus teaches “wherein: the reference information includes treatment assistance information assisting with treatment using fine-needle aspiration” (“In some embodiments, combined B-mode, color Doppler, and/or SWE US images can be collected from subject's (e.g., dozens, hundreds, etc.) that have received, or will receive, a fine needle aspiration biopsy” [0094]; “At 516, process 500 can determine, based on the output, whether the nodule is relatively likely to be malignant, or benign. In some embodiments, based on the output, process 500 can provide a recommendation of whether the nodule should be biopsied” [0103]. Therefore, since the processor (i.e. CNN, see [0095]: “At 504, process 500 can train a CNN to classify nodules in the B-mode, color Doppler, and/or SWE US images as benign or malignant”) determines whether the nodule is malignant or benign and provides a recommendation of whether the nodule should be biopsied (i.e. with fine-needle aspiration biopsy, see [0094]), the processor provides reference information which includes treatment assistance information assisting with treatment using fine-needle aspiration.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the diagnostic assistance apparatus of Kitamura in view of Satoh such that the reference information includes treatment assistance information assisting with treatment using fine-needle aspiration as disclosed in Akkus in order to provide a user with a recommendation to perform a fine-needle aspiration procedure and thus treat a patient. Utilizing a CNN to classify nodules as malignant or benign and providing a recommendation of whether the nodule should be biopsied (See Akkus: [0103]) with fine-needle aspiration is one of a finite number of techniques which can be used to effectively diagnose and treat nodules within a patient with a reasonable expectation of success. Thus, modifying the diagnostic assistance apparatus of Kitamura in view of Satoh such that the reference information includes treatment assistance information assisting with treatment using fine-needle aspiration as disclosed in Akkus would yield the predictable result of providing a user with a recommendation of whether to perform a biopsy (i.e. fine-needle aspiration, see Akkus: [0094]) based on a determination of whether a nodule is benign or malignant (i.e. by a CNN, see Akkus: [0095]), such that a treatment can subsequently be performed.
Allowable Subject Matter
Claim 14 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 14, the examiner acknowledges that Kitamura in view of Satoh does not teach “wherein: a text image that can be used to identify the set value is combined with a frame containing the first ultrasound image, and the processor is configured to: identify the set value by performing image recognition processing on the text image; and switch between the first operating mode and the second operating mode according to the identified set value”.
Furthermore, the examiner acknowledges that the prior art references of Rothberg, Min, Yao, Matsumoto, Akkus, and Sato, whether individually or in combination, do not teach the above limitations alone or in combination with the other limitations of claim 1 on which this claim depends.
Additionally, no prior art references were found to teach the above limitations alone or in combination with the other limitations of claim 1 on which this claim depends.
Therefore, claim 14 would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sato US 2022/0401075 A1 “Sato” is pertinent to the applicant’s disclosure because it discloses “Moreover, the signal processing circuitry 130 performs signal processing to perform harmonic imaging to visualize a harmonic component. The harmonic imaging includes CHI and THI. Furthermore, in CHI or THI, for example, phase modulation (PM) called pulse inversion is known as a scanning method” [0033].
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN E SEBASTIAN whose telephone number is (571)272-6190. The examiner can normally be reached Mon.- Fri. 7:30-4:30 (Alternate Fridays Off).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne M Kozak can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KAITLYN E SEBASTIAN/Examiner, Art Unit 3797