DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 4 and 9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With regards to claim 4, in line 2, the claim recites “the first rectangular frame is a rectangular frame that circumscribes the lesion region”, wherein a definition of “circumscribes” is “to surround” or “to enclose within bounds”. However, claim 4 is dependent upon claim 3 which sets forth that the first range is defined by a “first rectangular frame that surrounds the lesion region”. The claim therefore is indefinite as it is unclear as to whether Applicant meant “circumscribes” to mean something other than “to surround/enclose…”, and if this were not Applicant’s intention, then the above limitation appears to be redundant. For examination purposes, Examiner assumes that “circumscribes” is meant to be defined by its definition of “to surround”. Claim 9 is similarly rejected.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-7, 12-15, 17 and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Meguro (US Pub No. 2021/0192738) in view of Lee et al. (US Pub No. 2016/0051220).
With regards to claims 1, 19 and 20, Meguro discloses a non-transitory computer-readable storage medium, diagnosis support method and device comprising:
a processor (200) (paragraph [0056], Figures 1-2), wherein:
the processor is configured to:
acquire an ultrasound image (paragraphs [0106]-[0107], referring to the image acquisition unit (22) configured for importing an endoscopic image (18), paragraph [0210], referring to the image being generated by an ultrasound diagnostic apparatus; Figures 1-3);
display the acquired ultrasound image on a display device (screen (40) of monitor (400)) (paragraphs [0128]-[0129], referring to the screen (40) of the monitor (400) displaying an observation image display region (42) which is a region in which the image as the observation image is displayed; Figures 1-3, 5), and
display, in the ultrasound image, a first mark (BB) capable of specifying a lesion region (LS) detected from the ultrasound image within the ultrasound image (42) (paragraphs [0114]-[0117], referring to the lesion detection unit (26) being a processing unit that detects the lesion shown in the image acquired via the image acquisition unit (22) and generates information indicating the position of the lesion and lesion type information indicating the lesion type; paragraph [0129], referring to a lesion being detected from the image, wherein a bounding box (BB) surrounds the region of the lesion (LS) in the image is displayed to overlap the image; Figures 1-3, 5),
display a second mark (44) capable of specifying an organ region detected from the ultrasound image (paragraphs [0110]-[0113], referring to the site information acquisition unit (24) being a processing unit that acquires site information indicating the site of the object in the human body, which is shown in the image acquired via the image acquisition unit, wherein the “site” is the human organ and the site information may be a label corresponding to the name of the organ; paragraph [0130], referring to the screen (40) of the monitor (400) including a display region corresponding to the site information report region (44) which reports the site information (i.e. organ label/mark); Figures 1-3, 5), and
the first mark is displayed in a state of being emphasized more than other regions (paragraph [0129], referring to the bounding box (BB) being an example of the report mode of emphasizing the position of the lesion LS; Figure 5, wherein the bounding box (BB) emphasizes the lesion (LS) over other regions present in the image (42)).
However, Meguro does not specifically disclose that the second mark is displayed within the ultrasound image.
Lee et al. disclose an ultrasound diagnosis apparatus comprising a region determining unit (420) that determines a bile duct region (i.e. organ region) and a tumor candidate region (i.e. lesion region) in the ultrasound image (Abstract; paragraphs [0067], [0069]; Figure 4). A display unit (440) displays a bile duct region and a gallbladder region in different colors and may also display blood vessels around the bile duct region or the gallbladder region in different colors, wherein the tumor candidate regions may also be displayed using various types of indicators (paragraph [0090]; Figures 4, 6, note that the lesion/tumor region and organ region detected from the ultrasound image are displayed within the ultrasound image). An arrow or circular dotted line may be a type of indicator, wherein indicators of the tumor candidate regions may be displayed in various modes such as colors and figures (paragraph [0090]; Figures 6-8, note that the lesion/tumor region is emphasized more than a second mark (i.e. color) specifying an organ region detected from the ultrasound image). A resection pattern of the bile duct region may be acquired by comparing a shape of the tumor candidate region and the bile duct region with a predetermined pattern (paragraphs [0069]-[0070]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have the second mark of Meguro be specifically displayed within the ultrasound image, as taught by Lee et al., in order to be able to compare the shape of the lesion region with the organ region and determine a resection pattern for the organ region (paragraphs [0069]-[0070]). Alternatively, it would have been obvious to one of ordinary skill in the art to include and display a second mark specifying the organ region in the ultrasound image of Meguro, as Meguro requires specifying the organ region and Lee et al. teach a known effective technique for specifying the organ by displaying a second mark specifying the organ region within the ultrasound image. That is, using the known technique for specifying an organ region, as desired by Meguro, by displaying a second mark capable of specifying the organ region within the ultrasound region, as taught by Lee et al., would have been obvious to one of ordinary skill in the art.
With regards to claim 2, Meguro discloses that the first mark (BB) is a mark capable of specifying an outer edge of a first range in which the lesion region is present (paragraph [0129], referring to the “bounding box (BB) surrounding the region of the lesion LS” and being a rectangular frame border, which, as depicted in Figure 5, covers an outer edge of a first range in which the lesion region is present; Figure 5).
With regards to claim 3, Meguro discloses that the first range is defined by a first rectangular frame that surrounds the lesion region (paragraph [0129], referring to the bounding box (BB) surrounding the region of the lesion and being a rectangular frame border; Figure 5).
With regards to claim 4, Meguro discloses that the first rectangular frame is a rectangular frame that circumscribes/surrounds the lesion region (paragraph [0129], referring to the bounding box (BB) surrounding the region of the lesion and being a rectangular frame border; Figure 5).
With regards to claim 5, Meguro discloses that the first mark (BB) is a mark in which at least a part of the first rectangular frame is formed in a visually specifiable manner (paragraph [0129], referring to the bounding box (BB) being a rectangular frame border and emphasizing the position of the lesion (LS); Figure 5, note that the bounding box (BB) is formed in such a manner that a doctor/person viewing the display can easily recognize the box (BB), and therefore the mark (BB) is formed in a visually specifiable manner).
With regards to claim 6, Meguro discloses that the first rectangular frame (BB) surrounds the lesion region (LS) in a rectangular shape as seen in front view (see Figure 5), and the first mark (BB) is composed of a plurality of first images (i.e. four L-shaped broken lines that form the corners of box (BB)) assigned to a plurality of corners including at least opposite corners of four corners of the first rectangular frame (paragraph [0129], referring to the bounding box (BB) being a combination of L-shaped broken lines indicating the four corners of a quadrangle, such as a “rectangular” frame border; Figure 5).
With regards to claim 7, Lee et al. disclose that the second mark is a mark (i.e. color mark) capable of specifying an outer edge of a second range in which the organ region is present (paragraph [0090], referring to the bile duct region and gallbladder region (i.e. organ regions) being displayed in different colors; it therefore follows that the borders of the organs would be defined by the borders of the respective color regions, which define an outer edge/border of a second range [within the ultrasound image] in which the organ is present).
With regards to claim 12, Meguro discloses that the ultrasound image is a moving image including a plurality of frames (paragraphs [0031]-[0032], referring to acquiring a time-series medical image, which may be a video (i.e. moving image)), and in a case in which N is a natural number equal to or larger than 2, the processor is configured to display the first mark in the ultrasound image in a case in which the lesion region is detected from N consecutive frames among the plurality of frames (paragraphs [0031]-[0032], referring to the time-series image, which would comprise more than one image (i.e. N is equal to at least 2); paragraph [0117], referring to the lesion detection unit (26) executing the lesion detection processing of the site on each frame image (18B) for all of the plurality of frame images (18B) acquired in a time-series manner; paragraph [0129], referring to the video being displayed in real time as depicted in Figure 5, and thus, in the case the lesion is detected, the first mark (BB) will be displayed in real time; Figure 5).
With regards to claim 13, Meguro discloses that the ultrasound image is a moving image including a plurality of frames (paragraphs [0031]-[0032], referring to acquiring a time-series medical image, which may be a video (i.e. moving image)), and in a case in which M is a natural number equal to or larger than 2, the processor is configured to display the second mark in the ultrasound image in a case in which the organ region is detected from M consecutive frames among the plurality of frames (paragraphs [0031]-[0032], referring to the time-series image, which would comprise more than one image (i.e. M is equal to at least 2); paragraph [0113], referring to the site information acquisition unit (24) executing the site recognition processing on each frame image for all of the plurality of frame images (18B) acquired in a time-series manner; paragraph [0129], referring to the video being displayed in real time as depicted in Figure 5, and thus, in the case the organ region is detected, the second mark will be displayed in real time; Figure 5).
With regards to claim 14, Meguro discloses that the ultrasound image is a moving image including a plurality of frames (paragraphs [0031]-[0032], referring to acquiring a time-series medical image, which may be a video (i.e. moving image)), in a case in which N and M are natural numbers equal to or larger than 2, the processor is configured to: display the first mark in the ultrasound image in a case in which the lesion region is detected from N consecutive frames among the plurality of frames, and display the second mark in the ultrasound image in a case in which the organ region is detected from M consecutive frames among the plurality of frames, and N is a value smaller than M (paragraphs [0031]-[0032], referring to the time-series image, which would comprise more than one image (i.e. N is equal to at least 2); paragraph [0113], referring to the site information acquisition unit (24) executing the site recognition processing on each frame image for all of the plurality of frame images (18B) acquired in a time-series manner; paragraph [0117], referring to the lesion detection unit (26) executing the lesion detection processing of the site on each frame image (18B) for a part of the plurality of frame images (18B) acquired in a time-series manner [note that lesion detection occurring for only “a part” of the plurality of frame images would result in N (i.e. frames associated with lesion region detection) being smaller than M (i.e. frames associated with organ region detection, which can correspond to “all” of the plurality of frame images)]; paragraph [0129], referring to the video being displayed in real time as depicted in Figure 5, and thus the first mark and second mark will be displayed in real time; in the case the organ region is detected, the second mark will be displayed, and in the case the lesion is detected, the first mark will be displayed; Figure 5).
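The consecutive-frame condition mapped above (claims 12-14) can be expressed as a brief sketch. This is purely illustrative and not part of the cited references or the claims as filed; the function name, the per-frame boolean representation, and the example threshold values are all hypothetical assumptions introduced only to show the N-consecutive-frames logic with N smaller than M.

```python
# Illustrative sketch of the claimed consecutive-frame display logic.
# Hypothetical names/values: marks_to_display, n_lesion, m_organ are not
# drawn from Meguro, Lee et al., or the claims; they merely illustrate
# displaying a mark only after N (lesion) or M (organ) consecutive
# detections, with N < M.

def marks_to_display(frames, n_lesion=2, m_organ=3):
    """frames: list of (lesion_detected, organ_detected) booleans, one per frame.
    Returns (show_first_mark, show_second_mark) for the latest frame."""
    lesion_run = organ_run = 0
    for lesion_detected, organ_detected in frames:
        # A missed detection resets the consecutive-frame counter.
        lesion_run = lesion_run + 1 if lesion_detected else 0
        organ_run = organ_run + 1 if organ_detected else 0
    # Because n_lesion (N) < m_organ (M), the first mark can appear
    # after fewer consecutive detections than the second mark.
    return lesion_run >= n_lesion, organ_run >= m_organ
```

Under these assumptions, two consecutive detections of both regions would display only the first (lesion) mark, while three would display both.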
With regards to claim 15, Meguro discloses that the processor is configured to notify of detection of the lesion region by causing a sound reproduction device to output a sound and/or a vibration generator to generate a vibration in a case in which the lesion region is detected (paragraphs [0006]-[0007], [0012], [0036], referring to lesion type information being reported; paragraph [0102], referring to the sound processing unit (209) generating a sound signal representing the reported information [which includes the lesion type information as indicated in the above paragraphs] as a sound, wherein the sound output can be a message, a sound guidance or a warning sound).
With regards to claim 17, Meguro discloses that the processor is configured to detect the lesion region and the organ region from the ultrasound image (paragraphs [0114]-[0117], referring to the lesion detection unit (26) being a processing unit that detects the lesion shown in the image acquired via the image acquisition unit (22) and generates information indicating the position of the lesion and lesion type information indicating the lesion type; paragraphs [0110]-[0113], referring to the site information acquisition unit (24) being a processing unit that acquires site information indicating the site of the object in the human body, which is shown in the image acquired via the image acquisition unit, wherein the “site” is the human organ and the site information may be a label corresponding to the name of the organ). Lee et al. also disclose this limitation (Abstract, paragraphs [0067], [0069], referring to a region determining unit (420) that determines a bile duct region (i.e. organ region) and a tumor candidate region (i.e. lesion region) in the ultrasound image; Figure 4).
Claim(s) 8-11 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Meguro in view of Lee et al. as applied to claims 1 and 7 above, and further in view of Cho et al. (US Pub No. 2015/0230773).
With regards to claim 8, as discussed above, the above combined references meet the limitations of claim 7. However, though Lee et al. do disclose that the second range is defined using a second marking in the form of a color marking (paragraph [0090]), the above combined references do not specifically disclose the second marking comprises a second rectangular frame that surrounds the organ region.
Cho et al. disclose an apparatus and method for lesion detection, wherein the apparatus comprises a lesion candidate detector (33) and an adjacent object detector (35), wherein the adjacent object detector detects a plurality of anatomical objects from a medical image (Abstract; paragraphs [0077]-[0079]; Figures 3, 6-7). As illustrated in Figure 7, the detected anatomical objects, such as the fat region (72), etc., are displayed and marked using a rectangular box, wherein, while a rectangular box is used in Figure 7, various other visual markings such as a color highlight may be used (paragraphs [0094]-[0095]; Figure 7, note that the detected anatomical objects, such as 72, are surrounded by a rectangular box).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to substitute the second marking of the above combined references with a marking in the form of a second rectangular frame that surrounds the organ region, as taught by Cho et al., as the substitution of one known marking for another yields predictable results (i.e. visually distinguishing the organ region) to one of ordinary skill in the art. One of ordinary skill in the art would have been able to carry out such a substitution and the results are reasonably predictable.
With regards to claim 9, Cho et al. disclose that the second rectangular frame (i.e. 72) is a rectangular frame that circumscribes the organ region (paragraph [0095]; see Figure 7).
With regards to claim 10, Cho et al. disclose that the second mark is a mark in which at least a part of the second rectangular frame (i.e. 72) is formed in a visually specifiable manner (paragraph [0095]; see Figure 7, wherein the second rectangular frame, such as the one surrounding the region (72), is visually recognizable/specifiable).
With regards to claim 11, Cho et al. disclose that the second rectangular frame surrounds the organ region in a rectangular shape as seen in front view (paragraph [0095]; see Figure 7, for example, the rectangular box (72 or 73)), and the second mark is composed of a plurality of second images assigned to center portions of a plurality of sides including at least opposite sides of four sides of the second rectangular frame (paragraph [0095]; see Figure 7, wherein the rectangular boxes for the rib, glandular tissue, etc. comprise a plurality of line images assigned to center portions of a plurality of sides including at least opposite sides of four sides of the rectangular frame/box).
With regards to claim 16, as discussed above, the above combined references meet the limitations of claim 1. However, the above combined references do not specifically disclose that the processor is configured to: display a plurality of screens including a first screen and a second screen on the display device, display the ultrasound image on the first screen and the second screen, and separately display the first mark and the second mark in the ultrasound image on the first screen and in the ultrasound image on the second screen.
Cho et al. disclose an apparatus and method for lesion detection, wherein the apparatus comprises a lesion candidate detector (33) and an adjacent object detector (35), wherein the adjacent object detector detects a plurality of anatomical objects from a medical image (Abstract; paragraphs [0077]-[0079]; Figures 3, 6-7). The apparatus may include a plurality of individual object detectors (61, 63, 65, 67), wherein color-coded images representative of the different objects (i.e. skin, subcutaneous fat, glandular tissue, pectoralis muscle detector, etc.) may be obtained, and such color-coded images may be displayed to a user on a display screen (paragraph [0092]; Figure 6). The display may be a multi-screen display and comprise a single physical screen that can include multiple displays that are managed as separate logical displays, permitting different content to be displayed on separate displays although part of the same physical screen (paragraph [0145], note that such a physical screen that includes multiple displays corresponds to a plurality of screens, including a first screen and a second screen, on the display device, wherein displaying “different content” on the screens would comprise separately displaying the different images [which comprise different color-coded marks] of Figures 5 or 6 in the plurality of screens; Figures 5-6).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to have the processor of the above combined references be configured to display a plurality of screens including a first screen and a second screen on the display device, display the ultrasound image on the first screen and the second screen, and separately display the first mark and the second mark in the ultrasound image on the first screen and in the ultrasound image on the second screen, as taught by Cho et al., in order to successfully permit the display of different content on separate displays of the same physical screen, thereby allowing a user to easily distinguish differences between the different detected objects (paragraph [0145]; Figures 5-6).
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Meguro in view of Lee et al. as applied to claim 1 above, and further in view of DiMaio et al. (US Pub No. 2009/0036902).
With regards to claim 18, as discussed above, the above combined references meet the limitations of claim 1. Further, Meguro discloses an endoscope comprising the diagnosis support device according to claim 1 and an endoscope body to which the diagnosis support device is connected (paragraph [0210], referring to the endoscope scope (100); Figure 1).
However, though Meguro further discloses that the invention is not limited to the endoscopic image and that the image may be generated by an ultrasound diagnostic apparatus, the above combined references do not specifically disclose that the endoscope is an “ultrasound” endoscope.
DiMaio et al. disclose a laparoscopic ultrasound probe (150) providing 2D ultrasound slices of an anatomic structure to a processor, wherein the images can be overlaid on stereoscopic endoscope images (Abstract; paragraph [0045]). Lesions may be marked on the images (paragraphs [0090]-[0091]).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to substitute the endoscope/endoscope body of the above combined references with an ultrasound endoscope, as taught by DiMaio et al., as the substitution of one known endoscope instrument for another yields predictable results (i.e. effective marking of lesions) to one of ordinary skill in the art. One of ordinary skill in the art would have been able to carry out such a substitution and the results are reasonably predictable.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHERINE L FERNANDEZ whose telephone number is (571)272-1957. The examiner can normally be reached Monday-Friday 9:00 AM - 5:30 PM (ET).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHERINE L FERNANDEZ/Primary Examiner, Art Unit 3798