Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This communication is in response to Application No. 18/561,270, filed November 15, 2023. Claims 1-16 are currently pending and have been examined; claims 1-16 are rejected as set forth below.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "position/posture recognition unit..." of claims 1, 2, 3, 4, 5, 6, 7, 8, 10, and 14; "a light emitting unit..." of claims 1, 9, 11, 12, and 13; "control instruction unit..." of claims 1 and 14; "light emission instruction unit..." of claims 9, 11, and 12; "control apparatus..." of claim 13; and "light source recognition..." of claims 10 and 11.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The applicant’s specification describes sufficient structure for the "position/posture recognition unit...", "a light emitting unit...", "control instruction unit...", "light emission instruction unit...", "control apparatus...", and "light source recognition..." in Paragraph [121], “FIG. 20 is a hardware configuration diagram illustrating an example of the computer 1000 that implements the respective functions of the information processing apparatus 20, the endoscope control apparatus 40, and the light emitting apparatus 50”; and Paragraph [131], “An information processing apparatus including: a position/posture recognition unit…a control instruction unit… a light source recognition unit… An information processing system including: a light emitting unit”. Applicant also discloses an algorithm for the computer-implemented functions as described in Figures 12-14.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6, 10, and 12 (and therefore dependent claim 11) are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as failing to set forth the subject matter which the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the applicant, regards as the invention.
The term “greatly” in claim 6 is a relative term which renders the claim indefinite. The term “greatly” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim 10 recites the limitation "each of the surgical tools". There is insufficient antecedent basis for this limitation in the claim: the surgical tool is recited in the singular in the preceding claims but is referred to in the plural in this limitation. Claims 11 and 12 are rejected for their dependence on claim 10.
Claim 12 recites the limitation "the light emitting units". There is insufficient antecedent basis for this limitation in the claim: the light emitting unit is recited in the singular in the preceding claims but is referred to in the plural in this limitation.
Claim Rejections - 35 USC § 101
Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
101 Analysis – Step 1
Claims 1-12 are directed to an information processing apparatus for controlling a robot arm (i.e., a machine). Claims 13-15 are directed to an information processing system for controlling a robot arm (i.e., a machine). Claim 16 is directed to a method for controlling a robot arm (i.e., a process). Therefore, claims 1-16 fall within at least one of the four statutory categories.
101 Analysis – Step 2A, Prong I
Regarding Prong I of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether they recite subject matter that falls within one of the following groups of abstract ideas: a) mathematical concepts, b) certain methods of organizing human activity, and/or c) mental processes. Independent claim 1 includes limitations that recite an abstract idea (emphasized below) and will be used as a representative claim for the remainder of the 101 rejection. Claim 1 recites:
An information processing apparatus including: a position/posture recognition unit configured to recognize a position and a posture of a surgical tool
on a basis of event data input from an event-based vision sensor (EVS) including a plurality of pixels that detects a change in luminance of light from a light emitting unit provided in the surgical tool as an event; and
a control instruction unit configured to generate control information for controlling a robot arm that supports a medical device including the EVS on a basis of the recognized position and posture of the surgical tool
The examiner submits that the foregoing emphasized limitations, “recognize a position and a posture of a surgical tool” and “generate control information…”, constitute mental processes because, under its broadest reasonable interpretation, the claim covers performance of these limitations in the human mind. For example, “recognize…” and “generate…”, in the context of this claim, encompass a person assessing the orientation of the tool and deciding how to control the robotic arm. It is noted that generating control information does not mean actually controlling the robot. Accordingly, the claim recites at least two abstract ideas.
101 Analysis – Step 2A, Prong II
Regarding Prong II of the Step 2A analysis in the 2019 PEG, the claims are to be analyzed to determine whether the claim, as a whole, integrates the abstract idea into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application.” In the present case, the additional limitations beyond the above-noted abstract idea are as follows (where the underlined portions are the “additional limitations” while the bolded portions continue to represent the “abstract idea”):
An information processing apparatus including: a position/posture recognition unit configured to recognize a position and a posture of a surgical tool
on a basis of event data input from an event-based vision sensor (EVS) including a plurality of pixels that detects a change in luminance of light from a light emitting unit provided in the surgical tool as an event; and
a control instruction unit configured to generate control information for controlling a robot arm that supports a medical device including the EVS on a basis of the recognized position and posture of the surgical tool
For the following reason(s), the examiner submits that the above identified additional limitations do not integrate the above-noted abstract idea into a practical application.
Regarding the additional limitations of “information processing apparatus…”, “recognition unit”, “event data input from an event-based vision sensor”, and “control instruction unit…”, the examiner submits that these limitations merely use a computer to implement an abstract idea and constitute insignificant extra-solution activities, as they are broad enough to include the pre-solution activity of gathering data. In particular, the “event data input from an event-based vision sensor” step is recited at a high level of generality (i.e., as a general inputting of sensor data) and amounts to mere data gathering, which is a form of insignificant extra-solution activity. Additionally, the limitations “information processing apparatus…”, “recognition unit”, and “control instruction unit…” amount to merely using a computer to implement an abstract idea. MPEP 2106.05(d)(II), and the cases cited therein, including Voter Verified, Inc. v. Election Systems & Software, LLC, 887 F.3d 1376, 1385, 126 USPQ2d 1498, 1504 (Fed. Cir. 2018), indicate that performing a mental process on a generic computer is still considered to recite a mental process. Thus, taken alone, the additional elements do not integrate the abstract idea into a practical application.
101 Analysis – Step 2B
Regarding Step 2B of the 2019 PEG, representative independent claim 1 does not include additional elements (considered both individually and as an ordered combination) that are sufficient to amount to significantly more than the judicial exception, for the same reasons as those discussed above with respect to determining that the claim does not integrate the abstract idea into a practical application. As discussed above regarding the additional limitations of “information processing apparatus…”, “recognition unit”, “event data input from an event-based vision sensor”, and “control instruction unit…”, the examiner submits that these limitations merely use a computer to implement an abstract idea and constitute insignificant extra-solution activities. Further, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than what is well-understood, routine, conventional activity in the field. The additional limitation “event data input from an event-based vision sensor…” is a well-understood, routine, and conventional activity, as it amounts to the mere collection of data. MPEP 2106.05(d)(II), and the cases cited therein, including Intellectual Ventures I, LLC v. Symantec Corp., 838 F.3d 1307, 1321 (Fed. Cir. 2016), TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610 (Fed. Cir. 2016), and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015), indicate that mere collection or receipt of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner. Hence, the claim is not patent eligible.
Dependent claims 2-12 do not recite any further limitations that cause the claims to be patent eligible. Rather, the limitations of the dependent claims are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application, as none of the dependent claims narrows the scope so as not to encompass performance of the limitations in the human mind. Therefore, dependent claims 2-12 are not patent eligible under the same rationale as provided for in the rejection of claim 1. Similarly, claims 13 and 16 are rejected under the same rationale as provided for the rejection of claim 1, and dependent claims 14 and 15 are likewise not patent eligible.
Therefore, claims 1-16 are ineligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 8, 10, 11, 12, 13, 14, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ichiki (US 2019/0083180 A1) in view of Srinivasan et al. (US 2019/0279379 A1).
Regarding claim 1, Ichiki teaches: An information processing apparatus including: a position/posture recognition unit configured to recognize a position and a posture of a surgical tool (Paragraph [227], "According to the present technology, it is possible to detect the position of the distal end of the surgical tool 30 with high accuracy.") on the basis of sensor data (elements 23, 27) including a plurality of pixels that detects a change in luminance of light (Figure 15; Paragraph [92], "The observation light transmitted through the lens unit 25 is focused on the light receiving surface of the imaging element, so as to be photoelectrically converted to generate an image signal corresponding to the observation image") from a light emitting unit provided in the surgical tool as an event (Paragraph [185], “In step S201, the luminescent marker 201 is turned on. For example, the luminescent marker 201 is turned on by predetermined operation, for example, operation of a button for lighting the luminescent marker 201 when a practitioner 71 wishes to know where in the image the distal end portion of the surgical tool 30 is located”); and a control instruction unit configured to generate control information for controlling a robot arm (Paragraph [77], "This configuration can achieve control of the position and posture of the endoscope 20. At this time, the arm control apparatus 57 can control the driving of the arm portion 43 by various known control methods such as force control or position control.") that supports a medical device including the sensor on a basis of the recognized position and posture of the surgical tool (Figure 1; Paragraph [102]).
While Ichiki teaches the limitations as stated above, it does not expressly teach:
on a basis of event data input from an event-based vision sensor (EVS)
However, Srinivasan et al. teaches: on a basis of event data input from an event-based vision sensor (EVS) (element 110, Paragraph [3], “medical imaging”; Paragraph [56], “Referring to FIG. 1A, the electronic device 100 may be, but is not limited to… a mobile robot …or any other electronic device including an image capturing device. The electronic device 100 includes a monocular event based sensor 110”; Paragraph [61], “The monocular event based sensor 110 may obtain scene data while capturing an image of the scene using a capturing device, such as a camera of the electronic device 100”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the medical imaging apparatus for imaging a surgical tool of Ichiki to use an EVS for medical imaging, as taught by Srinivasan et al. Such modification would have been obvious because the application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including a medical imaging apparatus for imaging a surgical tool using an EVS.
Regarding claim 2, Ichiki further teaches: The information processing apparatus according to claim 1 wherein the position/posture recognition unit recognizes the position and the posture of the surgical tool on a basis of frame data (Paragraphs [61], [136]) generated from event data input within a predetermined frame period (Paragraphs [84]-[85]).
Regarding claim 8, Ichiki further teaches: The information processing apparatus according to claim 1, including the image capturing device and the EVS on the robot.
Ichiki further teaches: wherein the position/posture recognition unit recognizes the position and the posture of the surgical tool further on a basis of image data input from an image sensor (Figure 1; element 23)
While Ichiki teaches the limitations as stated above, it does not expressly teach:
that shares an optical axis with the EVS
However, Srinivasan et al. teaches: that shares an optical axis with the EVS (Figure 1A; element 100, 110, Paragraph [61], “The monocular event based sensor 110 may obtain scene data while capturing an image of the scene using a capturing device, such as a camera of the electronic device 100”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the medical imaging apparatus for imaging a surgical tool using an EVS of Ichiki and Srinivasan et al. to include imaging from the same device, as taught by Srinivasan et al. Such modification would have been obvious because the application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including a medical imaging apparatus for imaging a surgical tool using an EVS and an image sensor from the same device.
Regarding claim 10, Ichiki further teaches: The information processing apparatus according to claim 1, further including: a light source recognition unit configured to individually specify the surgical tool or a type of the surgical tool on a basis of the event detected in each of the pixels (Paragraph [110], "For example, the control unit 85 detects the shape, color, or the like of the edge of the object included in the surgical site image, making it possible to recognize a surgical tool such as forceps, a specific body site, bleeding, a mist at the time of using the energy treatment tool 33, or the like"), wherein the position/posture recognition unit recognizes the position and the posture for each of the surgical tools (Paragraphs [61], [136]).
Regarding claim 11, Ichiki further teaches: The information processing apparatus according to claim 10, further including: a light emission instruction unit configured to generate a light emission control signal for controlling a light emission pattern of the light emitting unit (Paragraph [185], “In step S201, the luminescent marker 201 is turned on. For example, the luminescent marker 201 is turned on by predetermined operation, for example, operation of a button for lighting the luminescent marker 201 when a practitioner 71 wishes to know where in the image the distal end portion of the surgical tool 30 is located”), wherein the light source recognition unit specifies a light emission pattern of the light emitting unit on a basis of the event detected in each of the pixels (Figure 11) and individually specifies the surgical tool or a type of the surgical tool on a basis of the specified light emission pattern (Paragraph [110], "For example, the control unit 85 detects the shape, color, or the like of the edge of the object included in the surgical site image, making it possible to recognize a surgical tool such as forceps, a specific body site, bleeding, a mist at the time of using the energy treatment tool 33, or the like").
Regarding claim 12, Ichiki further teaches: The information processing apparatus according to claim 11 wherein the light emission instruction unit generates the light emission control signal (Paragraph [185]) such that the light emitting units provided in different surgical tools or different types of surgical tools emit light in different light emission patterns (Paragraph [110], "For example, the control unit 85 detects the shape, color, or the like of the edge of the object included in the surgical site image, making it possible to recognize a surgical tool such as forceps, a specific body site, bleeding, a mist at the time of using the energy treatment tool 33, or the like").
Regarding claim 13, Ichiki teaches: An information processing system including: a light emitting unit provided in a surgical tool (element 201); a medical device (element 20) including a plurality of pixels that detects a change in luminance of light (Figure 15; Paragraph [92], "The observation light transmitted through the lens unit 25 is focused on the light receiving surface of the imaging element, so as to be photoelectrically converted to generate an image signal corresponding to the observation image") from the light emitting unit as an event (Paragraph [185], “In step S201, the luminescent marker 201 is turned on. For example, the luminescent marker 201 is turned on by predetermined operation, for example, operation of a button for lighting the luminescent marker 201 when a practitioner 71 wishes to know where in the image the distal end portion of the surgical tool 30 is located”); an information processing apparatus configured to generate control information for controlling a robot arm that supports the medical device including the sensor on a basis of event data (Figure 1; Paragraph [102]); and a control apparatus configured to control the robot arm on a basis of the control information (Paragraph [77], "This configuration can achieve control of the position and posture of the endoscope 20. At this time, the arm control apparatus 57 can control the driving of the arm portion 43 by various known control methods such as force control or position control.").
While Ichiki teaches the limitations as stated above, it does not expressly teach:
including an EVS… on the basis of event data input from the EVS
However, Srinivasan et al. teaches: including an EVS… on the basis of event data input from the EVS (element 110, Paragraph [56], “Referring to FIG. 1A, the electronic device 100 may be, but is not limited to… a mobile robot …or any other electronic device including an image capturing device. The electronic device 100 includes a monocular event based sensor 110”; Paragraph [61], “The monocular event based sensor 110 may obtain scene data while capturing an image of the scene using a capturing device, such as a camera of the electronic device 100”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the medical imaging apparatus for imaging a surgical tool of Ichiki to use an EVS for medical imaging, as taught by Srinivasan et al. Such modification would have been obvious because the application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including a medical imaging apparatus for imaging a surgical tool using an EVS.
Regarding claim 14, Ichiki further teaches: The information processing system according to claim 13 wherein the information processing apparatus includes: a position/posture recognition unit configured to recognize a position and a posture of the surgical tool on a basis of the data (Paragraph [227], "According to the present technology, it is possible to detect the position of the distal end of the surgical tool 30 with high accuracy."); and a control instruction unit configured to generate control information for controlling the robot arm (Paragraph [77], "This configuration can achieve control of the position and posture of the endoscope 20. At this time, the arm control apparatus 57 can control the driving of the arm portion 43 by various known control methods such as force control or position control.") that supports the medical device (Figure 1) on a basis of the recognized position and posture of the surgical tool (Paragraph [77]).
While Ichiki teaches the limitations as stated above, it does not expressly teach:
on the basis of event data input from the EVS… including the EVS
However, Srinivasan et al. teaches: on the basis of event data input from the EVS… including the EVS (element 110, Paragraph [56], “Referring to FIG. 1A, the electronic device 100 may be, but is not limited to… a mobile robot …or any other electronic device including an image capturing device. The electronic device 100 includes a monocular event based sensor 110”; Paragraph [61], “The monocular event based sensor 110 may obtain scene data while capturing an image of the scene using a capturing device, such as a camera of the electronic device 100”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the medical imaging apparatus of Ichiki and Srinivasan et al., which images a surgical tool with an EVS to control the arm, to utilize the EVS data as taught by Srinivasan et al. Such modification would have been obvious because the application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including a medical imaging apparatus for imaging a surgical tool with an EVS and using the data to control a robotic arm.
Regarding claim 15, Ichiki further teaches: The information processing system according to claim 13 wherein the medical device is an endoscope or a surgical microscope (Paragraph [52]).
Regarding claim 16, Ichiki teaches: An information processing method including: recognizing a position and a posture of a surgical tool (Paragraph [227], "According to the present technology, it is possible to detect the position of the distal end of the surgical tool 30 with high accuracy.") on the basis of sensor data (elements 23, 27) including a plurality of pixels that detects a change in luminance (Figure 15; Paragraph [92], "The observation light transmitted through the lens unit 25 is focused on the light receiving surface of the imaging element, so as to be photoelectrically converted to generate an image signal corresponding to the observation image") of light from a light emitting unit provided in the surgical tool as an event (Paragraph [185], “In step S201, the luminescent marker 201 is turned on. For example, the luminescent marker 201 is turned on by predetermined operation, for example, operation of a button for lighting the luminescent marker 201 when a practitioner 71 wishes to know where in the image the distal end portion of the surgical tool 30 is located”); and generating control information for controlling a robot arm (Paragraph [77], "This configuration can achieve control of the position and posture of the endoscope 20. At this time, the arm control apparatus 57 can control the driving of the arm portion 43 by various known control methods such as force control or position control.") that supports a medical device including the sensor on a basis of the recognized position and posture of the surgical tool (Figure 1; Paragraph [102]).
While Ichiki teaches the limitations as stated above, it does not expressly teach:
on a basis of event data input from an event-based vision sensor (EVS)
However, Srinivasan et al. teaches: on a basis of event data input from an event-based vision sensor (EVS) (element 110, Paragraph [56], “Referring to FIG. 1A, the electronic device 100 may be, but is not limited to… a mobile robot …or any other electronic device including an image capturing device. The electronic device 100 includes a monocular event based sensor 110”; Paragraph [61], “The monocular event based sensor 110 may obtain scene data while capturing an image of the scene using a capturing device, such as a camera of the electronic device 100”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the medical imaging apparatus for imaging a surgical tool of Ichiki to use an EVS for medical imaging, as taught by Srinivasan et al. Such modification would have been obvious because the application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including a medical imaging apparatus for imaging a surgical tool using an EVS.
Claims 3, 4, 5, and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Ichiki (US 2019/0083180 A1) in view of Srinivasan et al. (US 2019/0279379 A1), and further in view of Kuhara et al. (US 2011/0178392 A1).
Regarding claim 3, Ichiki further teaches: The information processing apparatus according to claim 2 wherein the position/posture recognition unit recognizes the position and the posture of the surgical tool on a basis of the event data (Paragraphs [61], [136])
While Ichiki in view of Srinivasan et al. teaches the limitations as stated above, it does not expressly teach:
within a search range set on a basis of frame data of a previous frame
However, Kuhara et al. teaches: within a search range set on a basis of frame data of a previous frame (Paragraph [49], "an ROI setting unit 10n initially sets a local region (ROI) on the first slice image...The apparatus then sequentially searches for a coronary artery region within a search range on the slice image of the next frame by using this local region (ROI) as a visual field")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the medical imaging apparatus for imaging a surgical tool with an EVS of Ichiki and Srinivasan et al. to use an ROI in the current image for searching the next frame, as taught by Kuhara et al. Such modification would have been obvious because the application would have been well within the level of skill of a person having ordinary skill in the art and would have yielded predictable results, including a medical imaging apparatus using an EVS for imaging a surgical tool and using an ROI in the current image for searching the next frame.
Regarding claim 4, while Ichiki in view of Srinivasan et al. teaches the limitations of claim 3 as stated above, including the constant measurement of the surgical tool region recognition results, the combination does not expressly teach:
wherein the position/posture recognition unit sets the search range of an event for frame data of a next frame
using a region where the event has occurred in frame data of a current frame as a reference
However, Kuhara et al. teaches: wherein the position/posture recognition unit sets the search range of an event for frame data of a next frame (Paragraph [49], "an ROI setting unit 10n initially sets a local region (ROI) on the first slice image"), using a region where the event has occurred in frame data of a current frame as a reference (Paragraph [49], "The apparatus then sequentially searches for a coronary artery region within a search range on the slice image of the next frame by using this local region (ROI) as a visual field")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the medical imaging apparatus for imaging a surgical tool with an EVS of Ichiki, Srinivasan et al., and Kuhara et al. to include using an ROI in the current image based on an object of interest for searching the next frame, as taught by Kuhara et al. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results, including: a medical imaging apparatus using an EVS for imaging a surgical tool and using an ROI in the current image based on an object of interest for searching the next frame.
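For illustration only, the mapped technique of setting a next-frame search range from the region where events occurred in the current frame can be sketched as follows. This sketch is not drawn from Ichiki, Srinivasan et al., or Kuhara et al.; the helper name, margin value, and coordinate convention are hypothetical.

```python
# Hypothetical sketch: derive a next-frame search range from the
# bounding box of current-frame event coordinates, padded by a margin
# and clamped to the frame boundaries. Not code from any cited reference.

def search_range_from_events(events, frame_w, frame_h, margin=10):
    """events: iterable of (x, y) pixel coordinates of current-frame events.
    Returns (x0, y0, x1, y1), the search rectangle for the next frame."""
    xs = [x for x, _ in events]
    ys = [y for _, y in events]
    x0 = max(min(xs) - margin, 0)
    y0 = max(min(ys) - margin, 0)
    x1 = min(max(xs) + margin, frame_w - 1)
    y1 = min(max(ys) + margin, frame_h - 1)
    return (x0, y0, x1, y1)
```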
Regarding claim 5, Ichiki further teaches: wherein the position/posture recognition unit sets the search range on the basis of movement information of the robot arm (Figure 22)
While Ichiki in view of Srinivasan et al. teaches the limitations as stated above, it does not expressly teach:
the position/posture recognition unit sets the search range further on a basis of at least one of optical flow of the entire frame data of the current frame
However, Kuhara et al. teaches: The information processing apparatus according to claim 4 wherein the position/posture recognition unit sets the search range (Figure 5; element S17) further on a basis of at least one of optical flow of the entire frame data of the current frame (Figure 5; element S18; Paragraph [49], “The apparatus sequentially replaces the detected ROI position with the new position of the coronary artery, that is, sequentially tracks the position of the coronary artery, based on the displacement obtained in this manner”)
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the medical imaging apparatus for imaging a surgical tool with an EVS of Ichiki, Srinivasan et al., and Kuhara et al. to include tracking movement of the objects and replacing the ROI with the new position, as taught by Kuhara et al. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results, including: a medical imaging apparatus for imaging a surgical tool with an EVS that tracks movement of the objects and replaces the search ROI based on the tracked position.
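For illustration only, the mapped technique of sequentially replacing the detected ROI position with the tracked position can be sketched as a simple translation of the ROI by the measured displacement. The helper name and tuple convention are hypothetical and are not drawn from any cited reference.

```python
# Hypothetical sketch: translate the search ROI by the displacement of
# the tracked object between frames. Not code from any cited reference.

def shift_roi(roi, dx, dy):
    """roi: (x0, y0, x1, y1). Returns the ROI translated by (dx, dy)."""
    x0, y0, x1, y1 = roi
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
```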
Regarding claim 6, Ichiki teaches: The information processing apparatus according to claim 3 wherein, in a case where an angle of view of the EVS greatly moves, the position/posture recognition unit cancels setting of the search range (Figure 22; Paragraph [232], "Step S402 executes processing of changing the recognized shape (surgical tool region recognition result) and the position, direction, and operation state of the shape model, comparing, and calculating the matching degree at every occasion of comparison"; Paragraph [236], "In addition, since the position of the surgical tool 30 can be constantly measured during the operation") and recognizes the position and the posture of the surgical tool on a basis of the entire frame data (Paragraphs [61] and [136]).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ichiki (US 2019/0083180 A1) in view of Srinivasan et al. (US 2019/0279379 A1) and further in view of Rafii-Tari et al. (US 2022/0160433 A1).
Regarding claim 7, while Ichiki in view of Srinivasan et al. teaches the limitations of claim 1 as stated above, the combination does not expressly teach:
wherein the position/posture recognition unit recognizes the position and the posture of the surgical tool by machine learning using the event data as an input
However, Rafii-Tari et al. teaches: The information processing apparatus according to claim 1 wherein the position/posture recognition unit recognizes the position and the posture of the surgical tool by machine learning using the event data as an input (Figure 2; Paragraph [24])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the medical imaging apparatus for imaging a surgical tool with an EVS of Ichiki and Srinivasan et al. to include using machine learning to detect the surgical tools, as taught by Rafii-Tari et al. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results, including: a medical imaging apparatus for imaging a surgical tool using machine learning to detect the surgical tools in the frames.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Ichiki (US 2019/0083180 A1) in view of Srinivasan et al. (US 2019/0279379 A1) and further in view of Takahashi (US 2016/0073865 A1).
Regarding claim 9, Ichiki further teaches: The information processing apparatus according to claim 8, further including: a light emission instruction unit configured to generate a light emission control signal for controlling a light emission period of the light emitting unit (Paragraph [68]), wherein light from the light emitting unit is light in a visible light range (Paragraph [83])
While Ichiki in view of Srinivasan et al. teaches the limitations as stated above, it does not expressly teach:
the light emission instruction unit generates the light emission control signal for causing the light emitting unit to emit light during a period that does not overlap with an exposure period of the image sensor
However, Takahashi teaches: the light emission instruction unit generates the light emission control signal for causing the light emitting unit to emit light during a period that does not overlap with an exposure period of the image sensor (Paragraph [18], "An image pickup system 101, as shown in FIG. 1, is provided in an image pickup apparatus of an endoscope or the like"; Paragraph [7], "and to set a second period in which exposure for obtaining images of different frames is performed on the respective lines of the image pickup device, as a light emission non-permission period")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the medical imaging apparatus for imaging a surgical tool with an EVS of Ichiki and Srinivasan et al. to include the light emission non-permission period, as taught by Takahashi. Such modification would have been obvious because such application would have been well within the level of skill of the person having ordinary skill in the art and would have yielded predictable results, including: a medical imaging apparatus for imaging a surgical tool with an EVS having a light emission non-permission period during the exposure period for obtaining images.
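For illustration only, the claimed condition that the light emission period not overlap the image sensor's exposure period reduces to a simple interval-disjointness check. The helper name and half-open interval convention are hypothetical and are not drawn from any cited reference.

```python
# Hypothetical sketch: emission is permitted only if the emission window
# [emit_start, emit_end) does not intersect the exposure window
# [expose_start, expose_end). Not code from any cited reference.

def emission_allowed(emit_start, emit_end, expose_start, expose_end):
    """Return True if the emission interval and the exposure interval
    are disjoint (no overlap), False otherwise."""
    return emit_end <= expose_start or emit_start >= expose_end
```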
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALYSE TRAMANH TRAN whose telephone number is (703)756-5879. The examiner can normally be reached M-F 8:30am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Khoi Tran can be reached at 571-272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.T.T./Examiner, Art Unit 3656 /KHOI H TRAN/Supervisory Patent Examiner, Art Unit 3656