DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed December 10, 2025 has been entered. Claims 1-18 and 21-26 have been amended. Claim 27 has been added. Claims 1-19 and 21-27 are now pending in the application. The previous 35 U.S.C. 112(b) rejection of claims 1, 6, 13, 24, and 25 is withdrawn in light of Applicant's amendment.
Response to Arguments
Applicant's arguments filed 12/10/25 have been fully considered but they are not persuasive.
Applicant requests withdrawal of the rejection of Claim 6 under 35 U.S.C. 112(a), asserting that the current description of “a learned model” is sufficient for one of ordinary skill in the art to understand the invention recited in claim 6. The examiner respectfully disagrees. The description is not sufficient and does not properly describe the invention such that one skilled in the art would understand the features recited in claim 6, since a “learned model,” based on the brief description provided in paragraph 70, has no particular structure and could be any AI neural network. One skilled in the art would not be capable of confidently recognizing the invention being limited by the claims, because the disclosure does not provide key details such as how the learned model works or the specifics of the artificial intelligence being used.
Applicant’s arguments with respect to claim(s) 1-27 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claim 6 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The learned model of machine learning recited in Claim 6 has only one brief description, in paragraph 70, which suggests an artificial intelligence (AI) neural network; however, the specification provides no details as to how the model works, what kind of AI is used, etc.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 27 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 27 recites “the first region in the subject is horizontally placed in the endoscope image of the first region,” which is unclear as to the metes and bounds of the limitation and what is being limited by the claim language. Each of the first, second, and third regions is understood to be an imaging region, so it is indefinite how an imaging region can be horizontally placed upon itself in an endoscope image and what exactly is being limited. The examiner suggests amending the claim language to more distinctly claim the invention as intended and to avoid multiple interpretations of the invention.
Examiner’s Comments
The present rejection(s) reference specific passages from the cited prior art. However, Applicant is advised that the rejections are based on the entirety of each cited prior art reference. That is, each cited prior art reference “must be considered in its entirety.” (See MPEP 2141.02(VI).) Therefore, Applicant is advised to review all portions of the cited prior art when traversing a rejection based on the cited prior art.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-4, 7, 10-15, 17-19, 21-22, 24-26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okada (US 20200297200 A1) in view of Inoue (US 20160353970 A1).
Regarding Claim 1, Okada discloses
An endoscope system (medical observation apparatus 10, FIG. 1) comprising:
an endoscope (an imaging unit 110) configured to be inserted into a subject and to capture endoscope images in the subject (par. 24 discloses imaging unit that captures a surgery site of the patient that is an observation object);
a moving device (supporting unit 120) configured to move the endoscope to three-dimensionally change a position and an orientation of the endoscope (FIG. 1, par. 47 disclose supporting unit supports the imaging unit and moves it in three dimensions);
a storage unit (storage unit 180);
and at least one processor (controller 140 + control unit 190 + rotation axis unit 170; par. 90 discloses control unit includes processor), configured to:
obtain, from the storage unit (par. 89 discloses control unit accesses storage unit which stores various types of information such as positional information and values detected by the encoder; par. 30 discloses encoders are provided to detect the rotation angles at the corresponding rotation axes):
first position information and first rotation angle information (rotation angle 212, FIG. 1) on a first region (first axis O1/ rotation axis unit 210, FIG. 1) in the subject (par. 30 discloses controller may calculate the three-dimensional position and posture of the imaging unit, i.e. three positions; par. 49 discloses different rotation axes are located in different imaging positions, i.e. image different regions; par. 50 discloses first rotation axis unit rotates imaging unit around first axis to adjust what is being captured, i.e. imaging region being captured; par. 139 discloses the imaging unit may include one or three or more imaging elements, i.e. 3 imaging regions); and
second position information and second rotation angle information (rotation angle 222, FIG. 1) on a second region (second axis O2/ rotation axis unit 220, FIG. 1), different from the first region, in the subject (par. 30 discloses controller may calculate the three-dimensional position and posture of the imaging unit, i.e. three positions; par. 49 discloses different rotation axes are located in different imaging positions, i.e. image different regions; par. 53 discloses second rotation axis unit rotates imaging unit around second axis to adjust what is being captured, i.e. imaging region being captured; par. 139 discloses the imaging unit may include one or three or more imaging elements, i.e. 3 imaging regions),
wherein the first rotation angle information defines a rotation angle of an endoscope image of the first region (par. 30 discloses rotation angles are detected by an encoder at each of the corresponding rotation axes), and
the second rotation angle information defines a rotation angle of an endoscope image of the second region (par. 30 discloses rotation angles are detected by an encoder at each of the corresponding rotation axes);
calculate third rotation angle information (rotation angle 232) on a third region (third axis O3/ rotation axis unit 230), different from the first region and the second region (depicted in FIG. 1), in the subject to be imaged by the endoscope (par. 30 discloses controller may calculate the three-dimensional position and posture of the imaging unit, i.e. three positions; par. 49 discloses different rotation axes are located in different imaging positions, i.e. image different regions; par. 55 discloses third rotation axis unit rotates imaging unit around third axis to adjust what is being captured, i.e. imaging region being captured; par. 139 discloses the imaging unit may include one or three or more imaging elements, i.e. 3 imaging regions),
on a basis of the first position information, the first rotation angle information, the second position information, the second rotation angle information, and third position information on the third region (par. 61 discloses controller may calculate the rotation angle around the third axis with respect to the reference position; par. 89 discloses control unit is configured to access the storage unit so that the control unit may perform various calculations by using various types of information stored in the storage unit),
the third region being different from the first region and the second region (depicted in FIG. 1);
determine whether a current imaging region that is currently being imaged by the endoscope is included in the third region (par. 73 discloses controller may calculate current states of imaging unit, i.e. what is being imaged/ current imaging region); and
control a display device to display the rotated endoscope image of the third region (par. 35 discloses imaging unit is configured to transmit captured image information to a display device, par. 78 discloses display device displays field of view of imaging unit, i.e. any of the three imaging regions).
However, Okada does not disclose, in response to determining that the current imaging region that is currently being imaged by the endoscope is included in the third region, rotating an endoscope image of the third region to adjust a vertical direction of the endoscope image of the third region on a basis of the third rotation angle information (par. 50 discloses the direction of the image captured by the imaging unit can be adjusted; par. 55 discloses the third rotation axis unit rotates the imaging unit around the third axis to adjust the position of the imaging unit in the y-axis direction).
Inoue teaches an analogous endoscope system (10, FIG. 1) having an endoscope (1) that is inserted through a patient’s body cavity to capture endoscope images in the body [0047]. The endoscope system (10) is controlled by a control unit (7, i.e. controller/ processor) which comprises a position computation unit (71, i.e. current position determination means) adapted to compute the position of the imaging target (Pt, i.e. imaging region), a position storage unit (72, i.e. storage unit) adapted to store the position of the imaging target (Pt), and first and second driving-amount computation units (73, 76) adapted to compute the driving amount for a driver unit that drives a field-of-view adjustment mechanism (1b) [FIG. 2; 0057]. The controller (7) is configured to obtain, from an image storage unit (74, i.e. storage unit), images taken by the imaging unit and then utilize a similarity computation unit (75) to compare previously stored images of the imaging region (i.e. position and rotation angle information of the imaging region) with current images taken by the imaging unit; the field-of-view adjustment mechanism (1b) then adjusts (i.e. rotates) the current imaging region [0058, 0076].
It would have been obvious to one of ordinary skill in the art at the effective filing date of the invention to provide the endoscope system of Okada with the adjustment mechanism of Inoue in order to provide a processor capable of varying the orientation of the imaging unit and capturing a desired imaging target [Inoue 0017], as well as determining current image information and adjusting the endoscope image to the desired imaging region and/or field of view of the surgeon during operation [0080-0082].
Regarding Claim 2, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, and Okada further discloses
The endoscope system according to claim 1, wherein the at least one processor is configured to rotate the endoscope image of the third region by image processing (par. 91 discloses control unit includes, as its functions, the operation-mode controller; par. 92 discloses operation-mode controller performs image processing, i.e. image of any imaging region).
Regarding Claim 3, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, and Okada further discloses
The endoscope system according to claim 1, wherein the moving device is configured to rotate the endoscope about an optical axis of the endoscope (par. 33 discloses rotation of supporting unit at each rotation axis; par. 47 discloses supporting unit supports the imaging unit, moves the imaging unit in three dimensions), and
wherein the at least one processor is configured to rotate the endoscope image of the third region by controlling the moving device to rotate the endoscope about the optical axis (par. 87 discloses operating unit drives the imaging unit and the supporting unit so as to move; par. 55 discloses the third rotation axis unit rotates the imaging unit around the third axis to adjust the imaging unit, i.e. adjust the image of the third imaging region).
Regarding Claim 4, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, and Okada further discloses
The endoscope system according to claim 1, wherein the at least one processor is operable in a manual mode that permits a user to operate the endoscope (par. 77 discloses imaging unit may be moved in a manual operation of the operator), and
wherein in the manual mode, the at least one processor is configured to:
determine the first position information, the first rotation angle information, the second position information, and the second rotation angle information (par. 61 discloses controller may calculate the rotation angle around the third axis with respect to the reference position; par. 89 discloses control unit is configured to access the storage unit so that the control unit may perform various calculations by using various types of information stored in the storage unit); and
store the first position information, the first rotation angle information, the second position information, and the second rotation angle information in the storage unit (par. 89 discloses storage unit that stores various types of information such as positional information and values detected by the encoder; par. 30 discloses encoders are provided to detect the rotation angles at the corresponding rotation axes).
Regarding Claim 7, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 4, and Okada further discloses
The endoscope system according to claim 4, wherein the at least one processor is configured to:
determine the first position information on a basis of a position of the current imaging region at a time of reception of a first instruction by a user interface (operation-mode controller 191, FIG. 4);
determine the first rotation angle information on a basis of a rotation angle about an optical axis of the endoscope at the time of reception of the first instruction by the user interface;
determine the second position information on a basis of a position of the imaging region at a time of reception of a second instruction by the user interface; and
determine the second rotation angle information on a basis of a rotation angle about the optical axis of the endoscope at the time of reception of the second instruction by the user interface (par. 38 discloses instructions remotely input to the medical observation apparatus by using an input device such as a remote controller; par. 30 discloses controller may calculate the three-dimensional position and posture of the imaging unit based on rotation angle values at corresponding rotation axes; par. 50 discloses rotation axis unit provided so as to rotate the imaging unit; par. 93 discloses instructions from operator enable the determined operation mode; FIG. 5,7 disclose that upon user instruction, positional information is determined; par. 87 discloses different observation modes where the imaging unit is at different positions).
Regarding Claim 10, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, and Okada further discloses
The endoscope system according to claim 1, wherein the first position information and the second position information are stored in advance in the storage unit (par. 89 discloses positional information stored in storage unit),
the first position information and the second position information being determined on a basis of at least one examination image of the subject captured before a surgical operation (par. 138 discloses operator observes an image and determines set working distance based on treatment; par. 73 discloses set working distance based on calculated positions and postures of the imaging unit and supporting unit).
Regarding Claim 11, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, and Okada further discloses
The endoscope system according to claim 1, wherein the at least one processor is configured to:
determine whether the current imaging region that is currently being imaged by the endoscope is included in the first region (par. 73 discloses controller may calculate current states of imaging unit, i.e. what is being imaged/ current imaging region);
determine whether the current imaging region that is currently being imaged by the endoscope is included in the second region (par. 73 discloses controller may calculate current states of imaging unit, i.e. what is being imaged/ current imaging region); and
Inoue further teaches
determine whether the current imaging region that is currently being imaged by the endoscope is included in the first region (par. 57 discloses a position computation unit, i.e. a current position determination means, which determines the position of an imaging target, i.e. imaging region, on the basis of information from angle and position sensors and stores the information to a storage unit; par. 58 discloses processor detects current images taken by imaging unit);
in response to determining that the current imaging region that is currently being imaged by the endoscope is included in the first region,
rotate the endoscope image of the first region on a basis of the first rotation angle information (par. 58 discloses controller obtains current images of the imaging region from an image storage unit and then utilizes a similarity computation unit to compare previously stored images of the imaging region (i.e. position and rotation angle information of the imaging region) with the current images taken by the imaging unit, to determine the imaging region; the field-of-view adjustment mechanism then adjusts (i.e. rotates) the current imaging region);
determine whether the current imaging region that is currently being imaged by the endoscope is included in the second region (par. 57 discloses a position computation unit, i.e. a current position determination means, which determines the position of an imaging target, i.e. imaging region, on the basis of information from angle and position sensors and stores the information to a storage unit; par. 58 discloses processor detects current images taken by imaging unit); and
in response to determining that the current imaging region that is currently being imaged by the endoscope is included in the second region,
rotate the endoscope image of the second region on a basis of the second rotation angle information (par. 58 discloses controller obtains current images of the imaging region from an image storage unit and then utilizes a similarity computation unit to compare previously stored images of the imaging region (i.e. position and rotation angle information of the imaging region) with the current images taken by the imaging unit, to determine the imaging region; the field-of-view adjustment mechanism then adjusts (i.e. rotates) the current imaging region).
Regarding Claim 12, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 11, and Okada further discloses
The endoscope system according to claim 11, wherein the at least one processor is configured to determine whether the current imaging region that is currently being imaged by the endoscope is included in one of the first region, the second region, and the third region on a basis of a position of the current imaging region, the first position information, and the second position information (par. 30 discloses based on position and posture of imaging unit as well as rotation angles at corresponding rotation axes, the controller drives imaging unit to match the observation point to what it was before movement, i.e. current imaging region, par. 73 discloses controller may calculate current states of imaging unit, i.e. what is being imaged/ current imaging region).
Regarding Claim 13, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, and Okada further discloses
wherein the endoscope is supported by and moved by the moving device (FIG. 1, par. 47 disclose supporting unit supports the imaging unit and moves it in three dimensions);
wherein the first position information, the second position information, and the third position information each include a rotation angle of the endoscope (par. 30 discloses rotation angles are detected by an encoder at each of the corresponding rotation axes).
Inoue further teaches to pivot about a first pivot axis (trocar 2, FIG. 2) at a predetermined pivot point (fulcrum Pb) fixed to the subject (depicted in FIG. 2),
the endoscope pivots about the first pivot axis so as to move between the first region and the second region (par. 51-52 disclose trocar tilts at fulcrum relative to fixed patient with respect to a reference coordinate system, i.e. different regions), and
wherein the first position information, the second position information, and the third position information each include a rotation angle of the endoscope about the first pivot axis (par. 52 discloses tilt detection sensor detects a tilt/ pivot angle with respect to a coordinate system, i.e. at different positions).
It would have been obvious to one of ordinary skill in the art at the effective filing date of the invention to provide the endoscope system of Okada with the trocar and trocar sensor of Inoue in order to tilt the endoscope, relative to a patient, to vary the orientation of the imaging unit and capture a desired image at different angles [Inoue 0052].
Regarding Claim 14, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 13, and Okada further discloses wherein the endoscope is supported by and moved by the moving device (FIG. 1, par. 47 disclose supporting unit supports the imaging unit and moves it in three dimensions).
Inoue further teaches to pivot about a second pivot axis (second trocar 2’, FIG. 17A) at the predetermined pivot point (fulcrum Pb’),
the second pivot axis being orthogonal to the first pivot axis (depicted in FIG. 2, 17A), and
wherein the first position information, the second position information, and the third position information are each three-dimensional information and further include a rotation angle of the endoscope about the second pivot axis (FIG. 2 depicts three-dimensional coordinate system; par. 52 discloses tilt angle detected at each pivot angle with respect to a reference coordinate system).
Regarding Claim 15, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 13, and Okada further discloses
determine whether the current imaging region that is currently being imaged by the endoscope is included in the third region on a basis of the rotation angle calculated (par. 73 discloses controller may calculate current states of imaging unit, i.e. what is being imaged/ current imaging region, based on the rotation angle at each rotation axis).
Inoue further teaches wherein the moving device includes at least one joint (trocar 2) and at least one angle sensor (tilt angle detection sensor 51) configured to detect a rotation angle of the at least one joint (par. 52 discloses detection of the tilt angle of the trocar with respect to a reference coordinate system), and wherein the at least one processor is configured to:
calculate a rotation angle of the endoscope about the first pivot axis on a basis of the rotation angle detected by the at least one angle sensor (par. 55 discloses movement and rotation may be detected on basis of detected tilt angle).
Regarding Claim 17, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, and Okada further discloses
The endoscope system according to claim 1, wherein the at least one processor is configured to:
calculate a positional relationship between the third position information and the first and second position information; and
calculate the third rotation angle information on a basis of the positional relationship, the first rotation angle information, and the second rotation angle information (par. 55 discloses third axis is arranged substantially perpendicular to the first and second axes; par. 61 discloses controller may calculate the rotation angle around the third axis with respect to the reference position).
Regarding Claim 18, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 3, and Okada further discloses
The endoscope system according to claim 3, wherein the at least one processor is configured to:
determine whether the rotation angle of the endoscope about an optical axis of the endoscope reaches a critical angle of a predetermined rotatable range of the endoscope; and
in response to determining that the rotation angle reached the critical angle,
rotate the endoscope image of the third region by image processing (par. 50 discloses rotation of imaging unit around rotation axis, which is substantially similar to the optical axis, of the imaging unit, which rotates about the rotation axis, i.e. optical axis, so the direction of the image captured by the imaging unit can be adjusted).
Regarding Claim 19, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, and Okada further discloses
The endoscope system according to claim 1, wherein the endoscope is a direct-view endoscope or an oblique endoscope (FIG. 2 depicts imaging unit capturing observation point with a forward view, i.e. direct-view).
Regarding Claim 21, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, and Okada further discloses
The endoscope system according to claim 1, wherein the at least one processor is configured to:
determine whether the current imaging region that is currently being imaged by the endoscope is included in the first region (par. 73 discloses controller calculates current information);
in response to determining that the current imaging region that is currently being imaged by the endoscope is included in the first region,
update the first position information or the first rotation angle information to position information or rotation angle information on the current imaging region (par. 89 discloses positional information and working distances are stored and used for calculations; i.e. utilizing current information); and
determine whether the current imaging region that is currently being imaged by the endoscope is included in the second region (par. 73 discloses controller calculates current information); and
in response to determining that the current imaging region that is currently being imaged by the endoscope is included in the second region,
update the second position information or the second rotation angle information to the position information or the rotation angle information on the current imaging region (par. 89 discloses positional information and working distances are stored and used for calculations; i.e. utilizing current information).
Regarding Claim 22, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 4, and Okada further discloses
The endoscope system according to claim 4, wherein in the manual mode, the at least one processor is configured to:
calculate the third position information and the third rotation angle information; and
store the third position information and the third rotation angle information in the storage unit (par. 55 discloses imaging unit rotates around the third axis to adjust the position of the imaging unit; par. 33 discloses position and the posture of the imaging unit are adjustable due to the direct manual operation of the operator; par. 30 discloses direct relationship between changing position and posture of imaging unit and rotation angles at corresponding rotation axes).
Regarding Claim 24, Okada discloses
A controller (controller 140 + control unit 190 + rotation axis unit 170) comprising:
at least one processor (operating unit 173 + state detecting unit 171; par. 90 discloses control unit includes processor), wherein the at least one processor is configured to:
obtain, from the storage unit (par. 89 discloses control unit accesses storage unit which stores various types of information such as positional information and values detected by the encoder; par. 30 discloses encoders are provided to detect the rotation angles at the corresponding rotation axes):
first position information and first rotation angle information (rotation angle 212, FIG. 1) on a first region (first axis O1/ rotation axis unit 210, FIG. 1) in the subject (par. 30 discloses controller may calculate the three-dimensional position and posture of the imaging unit, i.e. three positions; par. 49 discloses different rotation axes are located in different imaging positions, i.e. image different regions; par. 50 discloses first rotation axis unit rotates imaging unit around first axis to adjust what is being captured, i.e. imaging region being captured; par. 139 discloses the imaging unit may include one or three or more imaging elements, i.e. 3 imaging regions); and
second position information and second rotation angle information (rotation angle 222, FIG. 1) on a second region (second axis O2/ rotation axis unit 220, FIG. 1), different from the first region, in the subject (par. 30 discloses controller may calculate the three-dimensional position and posture of the imaging unit, i.e. three positions; par. 49 discloses different rotation axes are located in different imaging positions, i.e. image different regions; par. 53 discloses second rotation axis unit rotates imaging unit around second axis to adjust what is being captured, i.e. imaging region being captured; par. 139 discloses the imaging unit may include one or three or more imaging elements, i.e. 3 imaging regions),
wherein the first rotation angle information defines a rotation angle of an endoscope image of the first region (par. 30 discloses rotation angles are detected by an encoder at each of the corresponding rotation axes), and
the second rotation angle information defines a rotation angle of an endoscope image of the second region (par. 30 discloses rotation angles are detected by an encoder at each of the corresponding rotation axes);
calculate third rotation angle information (rotation angle 232) on a third region (third axis O3/ rotation axis unit 230), different from the first region and the second region (depicted in FIG. 1), in the subject to be imaged by the endoscope (par. 30 discloses controller may calculate the three-dimensional position and posture of the imaging unit, i.e. three positions; par. 49 discloses different rotation axes are located in different imaging positions, i.e. image different regions; par. 55 discloses third rotation axis unit rotates imaging unit around third axis to adjust what is being captured, i.e. imaging region being captured; par. 139 discloses the imaging unit may include one or three or more imaging elements, i.e. 3 imaging regions),
on a basis of the first position information, the first rotation angle information, the second position information, the second rotation angle information, and third position information on the third region (par. 61 discloses controller may calculate the rotation angle around the third axis with respect to the reference position; par. 89 discloses control unit is configured to access the storage unit so that the control unit may perform various calculations by using various types of information stored in the storage unit),
the third region being different from the first region and the second region (depicted in FIG. 1);
determine whether a current imaging region that is currently being imaged by the endoscope is included in the third region (par. 73 discloses controller may calculate current states of imaging unit, i.e. what is being imaged/ current imaging region, based on rotation angle information); and
control a display device to display the rotated endoscope image of the third region (par. 35 discloses imaging unit is configured to transmit captured image information to a display device, par. 78 discloses display device displays field of view of imaging unit, i.e. any of the three imaging regions).
However, Okada does not disclose: in response to determining that the current imaging region that is currently being imaged by the endoscope is included in the third region, rotate an endoscope image of the third region to adjust a vertical direction of the endoscope image of the third region on a basis of the third rotation angle information (par. 50 discloses that the direction of the image captured by the imaging unit can be adjusted; par. 55 discloses that the third rotation axis unit rotates the imaging unit around the third axis to adjust the position of the imaging unit in the y-axis direction).
Inoue teaches an analogous endoscope system (10, FIG. 1) having an endoscope (1) that is inserted through a patient’s body cavity to capture endoscope images in the body [0047]. The endoscope system (10) is controlled by a control unit (7, i.e. controller/ processor) which comprises a position computation unit (71, i.e. current position determination means) adapted to compute the position of the imaging target (Pt, i.e. imaging region), a position storage unit (72, i.e. storage unit) adapted to store the position of the imaging target (Pt), and first and second driving-amount computation units (73, 76) adapted to compute the driving amount for a driver unit that drives a field-of-view adjustment mechanism (1b) [FIG. 2; 0057]. The controller (7) is configured to obtain, from an image storage unit (74, i.e. storage unit), images taken by the imaging unit and then utilize a similarity computation unit (75) to compare previously stored images of the imaging region (i.e. position and rotation angle information of the imaging region) with current images taken by the imaging unit; the field-of-view adjustment mechanism (1b) then adjusts (i.e. rotates) the current imaging region on a basis of the comparison [0058, 0076].
It would have been obvious to one of ordinary skill in the art at the effective filing date of the invention to provide the endoscope system of Okada with the adjustment mechanism of Inoue in order to provide a processor capable of varying the orientation of the imaging unit and capturing a desired imaging target [Inoue 0017], as well as determining current image information and adjusting the endoscope image to the desired imaging region and/or field of view of the surgeon during operation [0080-0082].
Regarding Claim 25, Okada discloses
A method comprising:
obtaining, from the storage unit (par. 89 discloses control unit accesses storage unit which stores various types of information such as positional information and values detected by the encoder; par. 30 discloses encoders are provided to detect the rotation angles at the corresponding rotation axes):
first position information and first rotation angle information (rotation angle 212, FIG. 1) on a first region (first axis O1/ rotation axis unit 210, FIG. 1) in the subject (par. 30 discloses controller may calculate the three-dimensional position and posture of the imaging unit, i.e. three positions; par. 49 discloses different rotation axes are located in different imaging positions, i.e. image different regions; par. 50 discloses first rotation axis unit rotates imaging unit around first axis to adjust what is being captured, i.e. imaging region being captured; par. 139 discloses the imaging unit may include one or three or more imaging elements, i.e. 3 imaging regions); and
second position information and second rotation angle information (rotation angle 222, FIG. 1) on a second region (second axis O2/ rotation axis unit 220, FIG. 1), different from the first region, in the subject (par. 30 discloses controller may calculate the three-dimensional position and posture of the imaging unit, i.e. three positions; par. 49 discloses different rotation axes are located in different imaging positions, i.e. image different regions; par. 53 discloses second rotation axis unit rotates imaging unit around second axis to adjust what is being captured, i.e. imaging region being captured; par. 139 discloses the imaging unit may include one or three or more imaging elements, i.e. 3 imaging regions),
wherein the first rotation angle information defines a rotation angle of an endoscope image of the first region (par. 30 discloses rotation angles are detected by an encoder at each of the corresponding rotation axes), and
the second rotation angle information defines a rotation angle of an endoscope image of the second region (par. 30 discloses rotation angles are detected by an encoder at each of the corresponding rotation axes);
calculating third rotation angle information (rotation angle 232) on a third region (third axis O3/ rotation axis unit 230), different from the first region and the second region (depicted in FIG. 1), in the subject to be imaged by the endoscope (par. 30 discloses controller may calculate the three-dimensional position and posture of the imaging unit, i.e. three positions; par. 49 discloses different rotation axes are located in different imaging positions, i.e. image different regions; par. 55 discloses third rotation axis unit rotates imaging unit around third axis to adjust what is being captured, i.e. imaging region being captured; par. 139 discloses the imaging unit may include one or three or more imaging elements, i.e. 3 imaging regions),
on a basis of the first position information, the first rotation angle information, the second position information, the second rotation angle information, and third position information on the third region (par. 61 discloses controller may calculate the rotation angle around the third axis with respect to the reference position; par. 89 discloses control unit is configured to access the storage unit so that the control unit may perform various calculations by using various types of information stored in the storage unit),
the third region being different from the first region and the second region (depicted in FIG. 1);
determining whether a current imaging region that is currently being imaged by the endoscope is included in the third region (par. 73 discloses controller may calculate current states of imaging unit, i.e. what is being imaged/ current imaging region, based on rotation angle information); and
controlling a display device to display the rotated endoscope image of the third region (par. 35 discloses imaging unit is configured to transmit captured image information to a display device, par. 78 discloses display device displays field of view of imaging unit, i.e. any of the three imaging regions).
However, Okada does not disclose: in response to determining that the current imaging region that is currently being imaged by the endoscope is included in the third region, rotating an endoscope image of the third region to adjust a vertical direction of the endoscope image of the third region on a basis of the third rotation angle information (par. 50 discloses that the direction of the image captured by the imaging unit can be adjusted; par. 55 discloses that the third rotation axis unit rotates the imaging unit around the third axis to adjust the position of the imaging unit in the y-axis direction).
Inoue teaches an analogous method (FIG. 5) with an endoscope system (10, FIG. 1) having an endoscope (1) that is inserted through a patient’s body cavity to capture endoscope images in the body [0047]. The endoscope system (10) is controlled by a control unit (7, i.e. controller/ processor) which comprises a position computation unit (71, i.e. current position determination means) adapted to compute the position of the imaging target (Pt, i.e. imaging region), a position storage unit (72, i.e. storage unit) adapted to store the position of the imaging target (Pt), and first and second driving-amount computation units (73, 76) adapted to compute the driving amount for a driver unit that drives a field-of-view adjustment mechanism (1b) [FIG. 2; 0057]. The controller (7) is configured to obtain, from an image storage unit (74, i.e. storage unit), images taken by the imaging unit and then utilize a similarity computation unit (75) to compare previously stored images of the imaging region (i.e. position and rotation angle information of the imaging region) with current images taken by the imaging unit; the field-of-view adjustment mechanism (1b) then adjusts (i.e. rotates) the current imaging region on a basis of the comparison [0058, 0076].
It would have been obvious to one of ordinary skill in the art at the effective filing date of the invention to provide the endoscope system of Okada with the adjustment mechanism of Inoue in order to provide a processor capable of varying the orientation of the imaging unit and capturing a desired imaging target [Inoue 0017], as well as determining current image information and adjusting the endoscope image to the desired imaging region and/or field of view of the surgeon during operation [0080-0082].
Regarding Claim 26, Okada discloses
A non-transitory computer-readable recording medium in which a control program for causing a computer to perform the method according to claim 25 is stored (par. 90 discloses control unit carries out predetermined program; par. 89 discloses control unit accesses various types of information stored in a storage unit to perform operations).
Regarding Claim 27, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, and Okada further discloses
The endoscope system according to claim 1,
wherein the first rotation angle information defines a rotation angle where the first region in the subject is horizontally placed in the endoscope image of the first region (par. 50 discloses the first rotation axis substantially matches the optical axis of the imaging unit and the rotation axis unit causes the direction of the image captured by the imaging unit to be adjusted in relation to the first axis, i.e. first region capable of being horizontal in endoscope image based on direction of imaging unit; par. 30 discloses rotation angles detected at corresponding rotation axes).
Claim(s) 5-6, 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okada (US 20200297200 A1) in view of Inoue (US 20160353970 A1), and further in view of Kuroda et al. (US 20220192777 A1, hereinafter Kuroda).
Regarding Claim 5, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 4, however, Okada, in view of Inoue, does not disclose wherein the at least one processor is configured to determine the first position information and the first rotation angle information on a basis of a first specific tissue included in the endoscope image of the first region; and determine the second position information and the second rotation angle information on a basis of a second specific tissue included in the endoscope image of the second region.
Kuroda teaches an analogous processor (control unit 20) of an endoscope system (endoscopic surgical system 5000). The processor (20) determines positional information by evaluating first and second operative field images (i.e. first and second regions) on a basis of images of organs (i.e. specific tissues) [FIG. 11, 0252-0253]; Kuroda also discloses information pertaining to the motion of organs [0192].
It would have been obvious to one of ordinary skill in the art at the effective filing date of the invention to provide the processor of Okada, modified by Inoue, with the operative field images of Kuroda in order to image the operative field in which the organ/ tissue is being treated and better grasp the whole environment [Kuroda 0159-0161].
Regarding Claim 6, Okada in view of Inoue, as previously modified by Kuroda, discloses all of the elements of the current invention disclosed in claim 5, and Kuroda further teaches
wherein the storage unit stores a learned model of machine learning of a correspondence between an image including a specific tissue and a type of the specific tissue (par. 212 discloses model of learning stored in storage unit; par. 213 discloses model of learning is neural network; par. 233 discloses determination unit takes into account organ and organ type; par. 234 discloses learned model is stored with info from determination unit), and
wherein the at least one processor is configured to:
recognize the first specific tissue in the endoscope image of the first region (first operative field image IM11, FIG. 11A) and the second specific tissue in the endoscope image of the second region (second operative field image IM12, FIG. 11B) by using the learned model (par. 176-177 disclose first and second operative field images are images of different regions; par. 252 discloses organ and type of organ disclosed in respective field image and learned by training model);
determine the first position information on a basis of the first specific tissue recognized in the endoscope image of the first region (par. 262-263 disclose control unit determines positional information based on recognition of organ);
determine the first rotation angle information on a basis of a rotation angle of the endoscope at a time the endoscope image of the first region including the first specific tissue is captured (par. 223 discloses learned model can take motion of each target into consideration; par. 332-333 disclose information pertaining to motion of medical tool and organs obtained from motion sensors in operative field environments, i.e. captured in respective operative field images);
determine the second position information on a basis of the second specific tissue recognized in the endoscope image of the second region (par. 262-263 disclose control unit determines positional information based on recognition of organ); and
determine the second rotation angle information on a basis of a rotation angle of the endoscope at a time the endoscope image of the second region including the second specific tissue is captured (par. 223 discloses learned model can take motion of each target into consideration; par. 332-333 disclose information pertaining to motion of medical tool and organs obtained from motion sensors in operative field environments, i.e. captured in respective operative field images).
It would have been obvious to one of ordinary skill in the art at the effective filing date of
the invention to provide the endoscope system of Okada, in view of Inoue, with the learned model/ neural network of Kuroda in order to autonomously control the position and posture of a camera [Kuroda 0151] and increase accuracy of determination results [Kuroda 0206].
Regarding Claim 16, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, however, Okada, in view of Inoue, does not disclose wherein the storage unit stores a database in which a type of a specific tissue and rotation angle information are associated with each other, and wherein the at least one processor is configured to: determine whether the specific tissue is included in the endoscope image of the third region; and in response to determining that the specific tissue is included in the endoscope image of the third region, obtain, from the database, the rotation angle information corresponding to the type of the specific tissue in the endoscope image of the third region.
Kuroda teaches an analogous processor (control unit 20 + image processing unit 21) and storage unit (storage unit 60) of an endoscope system (endoscopic surgical system 5000). The processor (20 + 21) includes a recognition unit (214, i.e. database) which recognizes organs by their depth and shape (i.e. specific type) [0190] and the motion of the organ [0192], and the processor (20 + 21) is capable of image processing based on the kind of object recognized in a surgical image [0102, 0344-0346].
It would have been obvious to one of ordinary skill in the art at the effective filing date of the invention to provide the endoscope system of Okada, as previously modified by Inoue, with the database of Kuroda in order to determine the display target region which is being imaged based on recognition results and identify organs/ tissues while navigating the body [Kuroda 0171 & 0186].
Claim(s) 8-9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okada (US 20200297200 A1) in view of Inoue (US 20160353970 A1), and further in view of Christian et al. (US 20220079415 A1, hereinafter Christian).
Regarding Claim 8, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 4, however, Okada does not disclose wherein the at least one processor is configured to store a first endoscope image and a second endoscope image in the storage unit, the first endoscope image serving as the endoscope image of the first region, and the second endoscope image serving as the endoscope image of the second region.
Christian teaches an analogous processor (processor 845) of an endoscope system (observation apparatus/ endoscope 120). The processor (845) stores first (141, 310) and second (142, 320) images through first and second recording devices which are in communication with memory (840, i.e. storage unit) [0200].
It would have been obvious to one of ordinary skill in the art at the effective filing date of
the invention to provide the endoscope system of Okada, in view of Inoue, with the visualization system/ image storage of Christian in order to observe an operation region with different observation planes [Christian 0009].
Regarding Claim 9, Okada in view of Inoue, as previously modified by Christian, discloses all of the elements of the current invention disclosed in claim 5, and Christian further teaches wherein the at least one processor is configured to determine which one of the first region, the second region, and the third region includes the current imaging region on a basis of the first endoscope image and the second endoscope image (par. 201 discloses processor 845 is configured to transform the second image 870 relative to the first image 860 based on the orientation of the optical inspection tool 805 relative to the observation apparatus 815 or generally in space).
Claim(s) 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Okada (US 20200297200 A1) in view of Inoue (US 20160353970 A1), and further in view of Nishimura et al. (US 20220354347 A1, hereinafter Nishimura).
Regarding Claim 23, Okada, as previously modified by Inoue, discloses all of the elements of the current invention disclosed in claim 1, however, Okada does not disclose wherein the at least one processor is configured to: operate in an autonomous mode to autonomously control the moving device to move the endoscope; and calculate the third position information and the third rotation angle information during the autonomous mode.
Nishimura teaches an analogous endoscope (depicted in FIG. 1) that has an autonomous moving device (robot arm A) controlled by a processor, which is capable of autonomously positioning the endoscope by moving the robot arm [0044].
It would have been obvious to one of ordinary skill in the art at the effective filing date of
the invention to provide the endoscope system of Okada, in view of Inoue, with the autonomous moving device of Nishimura in order to view and image the area which the operator desires to view and carry out tasks handsfree [Nishimura 0044].
Additionally, the processor of Okada would remain capable of determining the third position information and the third rotation angle information in the normal manner, since the autonomous mode only affects the movement of the moving device.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDUL HADI ABBASI whose telephone number is (571)272-4076. The examiner can normally be reached Monday - Friday 7:30 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anhtuan Nguyen can be reached at (571) 272-4963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABDUL HADI ABBASI/Examiner, Art Unit 3795
/RYAN N HENDERSON/Primary Examiner, Art Unit 3795