DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9, 11-12, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Enoki (US PGPUB 2019/0125319 A1) in view of Yoshida (US PGPUB 2017/0337683 A1).
As per claim 1, Enoki discloses an endoscope system (Enoki, Fig. 1, and paragraphs 35-36, discloses endoscopic surgical system) comprising:
an audio input device (Enoki, Fig. 1:3709, and paragraphs 35 and 57, discloses microphones);
an image sensor that images a subject (Enoki, paragraphs 36, and 38, discloses an endoscope and a display device that displays images captured by the endoscope); and
a processor (Enoki, Fig. 1:3007:3009, and paragraphs 36, 53 and 98), wherein the processor is configured to:
set, in a case where the specific subjects are detected from the plurality of medical images, an audio recognition dictionary according to the detected plurality of types of specific subjects (Enoki, paragraphs 108-109, discloses that, in the case where the progress of a procedure or surgery being performed by a surgeon is detected by the surgery-information-detecting unit 150, in order to improve accuracy of voice recognition, the sound-output-control unit 100B, by the voice-recognizing unit 140, may select a dictionary 160 to be used in voice recognition on the basis of the progress of the procedure or surgery being performed by the surgeon. For example, in the case of a procedure in which the use of a predetermined device is highly probable, a dictionary 160 having high accuracy of recognition of commands that can be received by that device may be selected. In addition, in the case where surgical instruments to be used differ according to the process of the surgery, a dictionary 160 having high accuracy of recognition of the surgical instruments to be used during that process may be selected); and
perform audio recognition on audio input to the audio input device after the setting, using the set audio recognition dictionary (Enoki, paragraphs 108-109, discloses in order to improve accuracy of voice recognition, the sound-output-control unit 100B, by the voice-recognizing unit 140, may select a dictionary 160 to be used in voice recognition on the basis of the progress of a procedure or surgery being performed by the surgeon).
Enoki does not explicitly disclose: acquire a plurality of medical images obtained by the image sensor imaging the subject in chronological order; and
perform image recognition on the acquired plurality of medical images to detect a plurality of types of specific subjects from the plurality of medical images.
Yoshida discloses acquire a plurality of medical images obtained by the image sensor imaging the subject in chronological order (Yoshida, paragraphs 47, and 49, discloses the file creation section 11a may create an image file of an endoscopic movie in a predetermined time period including a point of time of specimen acquisition as an image at the time of cell acquisition or create an image file of at least either of still images immediately before and immediately after specimen acquisition as an image at the time of cell acquisition);
perform image recognition on the acquired plurality of medical images to detect a plurality of types of specific subjects from the plurality of medical images (Yoshida, paragraph 47, discloses the specimen state determination section 11b may detect an image part of, e.g., a polyp by image recognition and determines that it is the time of specimen acquisition when a change in which an image feature of, e.g., the polyp disappears from the image part occurs.);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Enoki's teachings by implementing an image recognition technique in the system, as taught by Yoshida.
The motivation would be to provide an improved system for efficiently assisting in bio-related examination, treatment, work, and processing, making it easy to ensure reliability of the examination (Yoshida, paragraph 30), as taught by Yoshida.
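For illustration only, the mechanism the combined references are mapped to — detecting subjects in a chronological series of medical images and then restricting voice recognition to a dictionary matched to the detected subjects — can be sketched as follows. This is a hypothetical sketch, not code from Enoki or Yoshida; every identifier, vocabulary, and mapping below is an assumption chosen for the example.

```python
# Hypothetical sketch: selecting an audio recognition dictionary based on
# subjects detected in endoscopic images. All names and vocabularies are
# illustrative assumptions, not disclosures of the cited references.

# Map each detectable subject type to a registered command vocabulary.
DICTIONARIES = {
    "polyp": ["capture", "measure", "mark lesion"],
    "treatment_tool": ["cut", "coagulate", "retract"],
    "default": ["start", "stop", "save image"],
}

def detect_subjects(images):
    """Stand-in for image recognition over a chronological image series."""
    # A real system would run a trained detector here; the fixed return
    # value simply drives the example.
    return {"polyp"}

def select_dictionary(subjects):
    """Pick the vocabulary matching a detected subject, else the default."""
    for subject in subjects:
        if subject in DICTIONARIES:
            return DICTIONARIES[subject]
    return DICTIONARIES["default"]

def recognize(utterance, dictionary):
    """Accept only words registered in the active dictionary."""
    return utterance if utterance in dictionary else None

images = ["frame_1", "frame_2"]        # chronological medical images
active = select_dictionary(detect_subjects(images))
print(recognize("measure", active))    # registered in the polyp dictionary
print(recognize("coagulate", active))  # not registered; rejected
```

The point of the sketch is the claimed ordering: the dictionary is set only after subjects are detected from the images, and subsequent recognition is performed against that set dictionary.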
As per claim 2, Enoki in view of Yoshida further discloses the endoscope system according to claim 1, wherein the processor is configured to: detect at least one lesion as one of the specific subjects (Yoshida, paragraphs 47, and 102, discloses detect an image part of, e.g., a polyp by image recognition);
further perform, in a case where the at least one lesion is detected from the plurality of medical images, discrimination processing on the detected at least one lesion using image recognition (Yoshida, paragraph 102, discloses lesion diagnosis); and
set, in a case where a result of discrimination is obtained, a predetermined specific audio recognition dictionary (Yoshida, paragraphs 45, 102, and 118, discloses The biopsy type information may be determined by determining a shape of a used device via an image or determining a process of an operation from an image, or may be determined by recognition of a doctor's voice).
As per claim 3, Enoki in view of Yoshida further discloses the endoscope system according to claim 1, wherein, in a case where any one of an imaging start instruction of the plurality of medical images, an operation to an operation device connected to the endoscope system (Yoshida, paragraphs 45, 65, and 102-103, discloses the control section 11 determines whether or not the clinician 51 performs an operation to pick up an image. If the clinician 51 performs an image pickup operation, in step S11, the file creation section 11a adds related information to an image from the image input section 12 and creates an image file of the image. The related information in this case may include no biopsy flag. Here, if the clinician 51 performs the image pickup operation together with an operation to designate the image as an image at the time of cell acquisition, a biopsy flag is included in the image file to be recorded) and an input of a wake word for the audio input device, is performed, the processor sets a predetermined specific audio recognition dictionary (Yoshida, paragraphs 117-118, discloses the control section 31 determines whether or not a text or a voice “biopsy” is inputted by the character input section 33 or the voice input section 34).
As per claim 4, Enoki in view of Yoshida further discloses the endoscope system according to claim 3, wherein, in a case where the imaging start instruction of the plurality of medical images is performed, the processor sets an audio recognition dictionary according to the imaging start instruction (Yoshida, paragraphs 102-103, and 117-118).
As per claim 5, Enoki in view of Yoshida further discloses the endoscope system according to claim 3, wherein, in a case where the operation to the operation device connected to the endoscope system is performed, the processor sets an audio recognition dictionary according to the operation (Yoshida, paragraph 118).
As per claim 6, Enoki in view of Yoshida further discloses the endoscope system according to claim 3, wherein, in a case where the input of the wake word for the audio input device is performed, the processor sets an audio recognition dictionary according to contents of the wake word (Yoshida, paragraphs 117-118).
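Again for illustration only, the trigger-based dictionary setting addressed in claims 3-6 — setting a specific dictionary upon an imaging start instruction, an operation-device input, or a wake word, with the wake-word case depending on the contents of the wake word — can be sketched as below. This is a hypothetical sketch; the event names, wake words, and vocabularies are all assumptions, not disclosures of the cited references.

```python
# Hypothetical sketch: setting a specific audio recognition dictionary
# when a trigger event occurs. Event names, wake words, and vocabularies
# are illustrative assumptions only.

# Dictionaries keyed by trigger event (cf. claims 4 and 5).
TRIGGER_DICTIONARIES = {
    "imaging_start": ["freeze", "record", "white balance"],
    "device_operation": ["biopsy", "forceps open", "forceps close"],
}

# Dictionaries keyed by the contents of the wake word (cf. claim 6).
WAKE_WORD_DICTIONARIES = {
    "hey scope": ["zoom in", "zoom out"],
    "hey report": ["add finding", "add site"],
}

def set_dictionary(event, wake_word=None):
    """Return the dictionary for a trigger event; for a wake word, the
    dictionary depends on the contents of the wake word."""
    if event == "wake_word":
        return WAKE_WORD_DICTIONARIES.get(wake_word, [])
    return TRIGGER_DICTIONARIES.get(event, [])

print(set_dictionary("imaging_start"))           # imaging-start vocabulary
print(set_dictionary("wake_word", "hey scope"))  # wake-word-specific vocabulary
```

The distinction the sketch preserves is that each trigger type yields its own dictionary, and the wake-word branch selects among dictionaries by the wake word's contents rather than by a single fixed mapping.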
As per claim 7, Enoki in view of Yoshida further discloses the endoscope system according to claim 1, wherein the processor performs the image recognition for each of the specific subjects to be recognized (Yoshida, paragraphs 45, and 47-48, discloses the biopsy flag or the treatment flag may be created by voice recognition and analysis of, e.g., a doctor's voice obtained via a microphone).
As per claim 8, Enoki in view of Yoshida further discloses the endoscope system according to claim 1, wherein, in the audio recognition, the processor recognizes only registered words that are registered in the set audio recognition dictionary (Yoshida, paragraphs 117-118, discloses the clinician 51 says “biopsy” toward, e.g., the dictation microphone 34a, the dictation section 31a determines that the image that is being displayed is an image at the time of cell acquisition), and causes an output device to output a result of the audio recognition for the registered words (Yoshida, paragraphs 117-118).
As per claim 9, Enoki in view of Yoshida further discloses the endoscope system according to claim 1, wherein, in the audio recognition, the processor recognizes registered words that are registered in the set audio recognition dictionary and specific words (Yoshida, paragraphs 117-118, discloses the clinician 51 says “biopsy” toward, e.g., the dictation microphone 34a, the dictation section 31a determines that the image that is being displayed is an image at the time of cell acquisition), and causes an output device to output a result of the audio recognition for the registered words among the recognized words (Yoshida, paragraphs 45, and 117-118).
As per claim 11, Enoki in view of Yoshida further discloses the endoscope system according to claim 1, wherein the processor records the medical image decided to include the specific subject, among the plurality of medical images, a determination result using the image recognition for the specific subject (Yoshida, paragraphs 31, 45-48), and a result of the audio recognition in a recording device in association with each other (Yoshida, Fig. 1:14, and paragraph 45).
As per claim 12, Enoki in view of Yoshida further discloses the endoscope system according to claim 1, wherein the processor decides at least one of a lesion, a lesion candidate region, a landmark, a treated region, a treatment tool, or a hemostat, as the specific subject (Yoshida, paragraphs 47-48 and 102).
As per claim 16, Enoki in view of Yoshida further discloses the endoscope system according to claim 1, wherein the processor performs the audio recognition for site information, findings information, treatment information (Yoshida, paragraphs 45, 51, 99, and 102), and hemostasis information (Yoshida, paragraph 45).
As per claim 17, Enoki in view of Yoshida further discloses the endoscope system according to claim 1, wherein the processor displays a result of the audio recognition on a display device (Enoki, paragraph 105).
As per claim 18, Enoki discloses a medical information processing apparatus (Enoki, paragraph 36) comprising:
For the rest of the claim limitations, please see the analysis of claim 1.
As per claim 19, please see the analysis of claim 1.
As per claim 20, Enoki discloses a non-transitory, tangible recording medium which records thereon a computer readable code of a program for causing a processor of an endoscope system (Enoki, paragraphs 7 and 36), to implement functions comprising:
For the rest of the claim limitations, please see the analysis of claim 1.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Enoki (US PGPUB 2019/0125319 A1) in view of Yoshida (US PGPUB 2017/0337683 A1) and further in view of Kashima (US PGPUB 2019/0180865 A1).
As per claim 10, Enoki in view of Yoshida discloses the endoscope system according to claim 1. Enoki in view of Yoshida does not explicitly disclose wherein the processor performs the image recognition using an image recognizer configured by machine learning.
Kashima discloses wherein the processor performs the image recognition using an image recognizer configured by machine learning (Kashima, paragraph 98).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Enoki in view of Yoshida by implementing an image recognition technique using machine learning, as taught by Kashima.
The motivation would be to provide a medical observation apparatus with improved efficiency (Kashima, paragraph 43), as taught by Kashima.
Claims 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Enoki (US PGPUB 2019/0125319 A1) in view of Yoshida (US PGPUB 2017/0337683 A1) and further in view of Takahashi (JP 2006-136385 A; an English translation of the JP document is attached and used for citation).
As per claim 13, Enoki in view of Yoshida discloses the endoscope system according to claim 1. Enoki in view of Yoshida does not explicitly disclose wherein the processor executes the audio recognition using the set audio recognition dictionary during a period in which a predetermined condition is satisfied after the setting.
Takahashi discloses executing audio recognition using a set audio recognition dictionary during a period in which a predetermined condition is satisfied after the setting (Takahashi, paragraph 6).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Enoki in view of Yoshida by implementing an audio recognition technique, as taught by Takahashi.
The motivation would be to provide an improved endoscope apparatus (Takahashi, paragraph 6), as taught by Takahashi.
As per claim 14, Enoki in view of Yoshida in view of Takahashi further discloses the endoscope system according to claim 13, wherein the processor sets the period for each image recognizer that performs the image recognition (Yoshida, paragraphs 119-120 and 134).
As per claim 15, Enoki in view of Yoshida in view of Takahashi further discloses the endoscope system according to claim 13, wherein the processor displays a remaining time of the period on a screen of a display device (Yoshida, Fig. 1:11:18:19, and paragraphs 119-121 and 134).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED Z HAIDER whose telephone number is (571)270-5169. The examiner can normally be reached MONDAY-FRIDAY 9-5:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SAM K Ahn, can be reached at 571-272-3044. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SYED HAIDER/Primary Examiner, Art Unit 2633