DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/12/2026 has been entered.
Response to Amendment
Claims 1-16 remain pending. Claims 1-3, 5-7, 9, 13, and 14 have been amended. Accordingly, they have been examined under an updated claim interpretation, since they no longer invoke 35 U.S.C. 112(f). The amendments have also introduced a claim informality (see Claim Objections) and a 35 U.S.C. 112(a) issue (see Claim Rejections – 35 U.S.C. 112(a)), which are further discussed below.
Claim Objections
Claim 11 is objected to because of the following informalities:
“…a plurality of candidate operations that is executable…” (grammatically incorrect; “that are executable” appears to be intended)
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 7-9 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
The newly amended claims now identify the “heat map” to be a “second heat map”. There is no support in the specification or drawings for a second heat map. Paragraph 8 of the specification recites “The machine learning model estimates a target command for an arm supporting a medical instrument. Then, the control unit outputs determination basis information regarding the target command estimated by the machine learning model to an information presentation unit. The control unit outputs information regarding a gaze region observed at the time of estimating the target command and/or a recognized target portion. The control unit outputs a heat map image indicating a gaze region observed at the time of estimating the target command and/or a recognized target portion. The control unit outputs the heat map image generated on the basis of the Grad-Cam algorithm.” Additionally, paragraph 19 recites “Regarding the former case of the clarification of the determination basis, an image region regarding the determination basis can be displayed on a captured image of an endoscope in a heat map by Grad-Cam” and “The Grad-Cam can explicitly indicate which portion of the input image the neural network model has focused on to output the target command-related information by, for example, a method of displaying a heat map of an image region that is a basis.” Paragraphs 0068 and 0073 recite “Grad-Cam is known as a technology for visualizing information that is a basis of determination of a deep-learned neural network model and displaying the information on a heat map. The attention information presentation unit 701 to which Grad-Cam is applied displays a heat map indicating which part of the image, which target object in the image, or the like is focused on to output or estimate the target command-related information from the input image, the motion information of the robot, and the sensor information of the robot.” and “Fig. 9 illustrates an image in which a part focused when the control system 700 outputs the target command-related information is displayed on a heat map, the part being presented by the attention information presentation unit 701 to which the Grad-Cam is applied. The monitor image illustrated in Fig. 8 is a captured image of the endoscope 110 or an operative field image displayed on the display device 149. Meanwhile, Fig. 9 is an image with a wide angle of view having a wider field of view than the operative field image of Fig. 8.”, respectively. From these recitations, a clear line can be drawn from the specification to the heat map limitation of claim 1, wherein a heat map is used as determination basis information. These recitations also include the information present within the limitations of claims 7-9, but make no mention of a second heat map.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5-7, 9, 12-13, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Barral (US20180065248A1) in view of Lee (WO2020159276A1).
With respect to claim 1, Barral teaches a medical support system, comprising: a central processing unit (CPU) (“It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 550 or read only memory 520 and executed by processor 510. In embodiments, processor 510 may be a FPGA, CPU, GPU, or the like.” Paragraph 0053) configured to: recognize an operative field environment (“As another example, the classifiers analyze visual sensor data, such as endoscope data, to recognize anatomical structures in a field of view to determine if the expected anatomical structures are in view, recognize which medical tools are currently being used, whether a deviation in the medical procedure has occurred due to the view of unexpected anatomical structures, etc.” paragraph 0040 lines 19-26); implement a machine learning model, wherein the machine learning model is configured to estimate, based on the recognized operative field environment, a first operation executable by the medical support system ("One or more medical tool(s) 260 may be selectively coupled with medical procedure system 240 based on, for example a selected procedure, and can include … autonomous tools (e.g., medical tools that act without medical professional intervention based on instruction of a trained ML medical procedure model)” paragraph 0031 lines 8-17); and output first determination basis information regarding the estimated first operation ("Each medical procedure system may also be coupled with one or more user interfaces ... 
display data gathered by the medical tools (e.g., a field of view of a patient's anatomical structure captured by an endoscope), provide haptic feedback (e.g., activate a visual, auditory, or sensory alarm), enable access to additional data (e.g., preoperative medical images, a patient's medical history, a patient vital information, etc.), provide additional visualizations (e.g., such as a table the patient is on, arms of robotic surgical equipment, etc.)" paragraph 0019 lines 8-17 and "... identification and tracking of steps of a medical procedure enable medical procedure system 140 to provide guidance and decision support to one or more medical professionals performing a medical procedure. This can include... a graphical display of a next step, generation of reminders of critical information ... a graphical display of critical information .... notification when an action deviates from an expected step of a predefined sequence of steps" paragraph 0022 lines 1-12) to a display device (“The system may further be coupled to a display device 570, such as a liquid crystal display (LCD), a light emitting diode (LED) display, etc. coupled to bus 515 through bus 565 for displaying information to a computer user.” Paragraph 0050), wherein the first determination basis information includes analysis information to improve the machine learning model (“…dynamically update a sequence of steps of a medical procedure given case-specific and/or action-specific information” paragraph 0021 lines 16-18 and “As discussed herein, the tracked operations provide the ML medical server with training data sets of real medical procedure data, including sensor data, outcomes, deviations, annotations, etc. 
By providing the medical procedure data, front end training sets (e.g., training data sets in data store 214) can be expanded so that classifiers of corresponding ML medical procedure models can be retrained, refined, expanded for new circumstances (e.g., patient characteristics, complications during a medical procedure, as well as other circumstances), etc.” paragraph 0042, first determination basis information being information from the actual operation performed (including outcomes, deviation from predefined steps, sensor data, etc.)).
Barral does not teach wherein the first determination basis information corresponds to a first heat map image that indicates analysis information to improve the machine learning model.
Lee teaches wherein the first determination basis information corresponds to a first heat map image (“The presence or absence of the bleeding module may highlight the portion of the bleeding area in the surgical image through the converted pixel value, and thereby segment the bleeding area to estimate the location of the corresponding area. In addition, the bleeding presence recognition module may apply a Grad CAM technology to generate a heat map for each pixel in the surgical image based on the feature map and convert each pixel to a probability value. The bleeding presence recognition module may specify a bleeding area in the surgical image based on the probability value of each converted pixel. For example, the bleeding presence recognition module may determine a pixel area having a large probability value as a bleeding area.” Page 10 paragraph 5) that indicates analysis information (“The surgical analysis layer may grasp specific medical meanings or make specific medical judgments based on each surgical element recognized through the surgical element recognition layer. For example, the surgical analysis layer, using at least one surgical element, bleeding loss evaluation (Blood Loss Estimation), internal body damage detection (Anatomy Injury Detection; for example, organ damage detection), instrument misuse detection (Instrument Misuse Detection) , It is possible to grasp the medical meaning such as the optimal planning procedure (Optimal Planning Suggestion) or to determine the medical condition.” Page 16 paragraph 1).
Lee is analogous art to the claimed invention. Lee is directed to a surgical image analysis and recognition system (“Therefore, the present invention is a surgical analysis device, a surgical image analysis and recognition system, and a method capable of quickly obtaining accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through a surgical element recognition model formed in parallel.” Page 2 lines 35-38). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the surgical area views of Barral with the heat map of Lee with the expectation that doing so would lead to the quick obtaining of accurate recognition results for surgical elements in a recognition model (“a method capable of quickly obtaining accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through a surgical element recognition model” page 2 lines 36-37), providing accurate surgical analysis results to medical staff (“it is possible to provide accurate surgical analysis results to the medical staff” page 5 lines 22-23).
With respect to claim 3, Barral and Lee teach the medical support system according to claim 1. Barral further teaches wherein the machine learning model is further configured to estimate a target command for an arm that supports a medical instrument (“One or more medical tool(s) 260 may be selectively coupled with medical procedure system 240 … robotically assisted tools…” paragraph 0031 lines 8-13 and “using one or more classifiers of the ML medical procedure model to identify the step, anatomical structure, tool characteristics at the current operation (processing block 310). … classifiers analyze … sensor data to determine when a tool is moving too fast, too slow, a cut is of an expected depth, a tool is moving in an unanticipated direction, etc. … recognize which medical tools are currently being used, whether a deviation in the medical procedure has occurred due to the view of unexpected anatomical structures, etc. Then, based on the results of the analysis, processing logic controls one or more processes of the medical procedure system during the medical procedure (processing block 312). … process controls can include … enforcing control mechanisms on the medical tools, such as overriding commands of the medical professional, stopping movement of a tool based on speed, direction, acceleration, taking an autonomous action, requesting acknowledgement before a deviation from an expected operation occurs, etc.” paragraph 0040 lines 13-39), and the CPU (“It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 550 or read only memory 520 and executed by processor 510. 
In embodiments, processor 510 may be a FPGA, CPU, GPU, or the like.” Paragraph 0053) is further configured to output second determination basis information regarding the estimated target command (“Each medical procedure system may also be coupled with one or more user interfaces … display data gathered by the medical tools (e.g., a field of view of a patient's anatomical structure captured by an endoscope), provide haptic feedback (e.g., activate a visual, auditory, or sensory alarm), enable access to additional data (e.g., preoperative medical images, a patient's medical history, a patient vital information, etc.), provide additional visualizations (e.g., such as a table the patient is on, arms of robotic surgical equipment, etc.)” paragraph 0019 lines 8-17 and “… identification and tracking of steps of a medical procedure enable medical procedure system 140 to provide guidance and decision support to one or more medical professionals performing a medical procedure. This can include… a graphical display of a next step, generation of reminders of critical information … a graphical display of critical information” paragraph 0022 lines 1-9 and “These anatomical structures, such as the highlighted triangle of Calot 608, can be significant waypoints during the proper procession of a medical procedure.” Paragraph 0041 lines 12-15, second determination basis information being predefined steps of procedure and targeted body areas) to the display device (“The system may further be coupled to a display device 570, such as a liquid crystal display (LCD), a light emitting diode (LED) display, etc. coupled to bus 515 through bus 565 for displaying information to a computer user.” Paragraph 0050).
With respect to claim 5, Barral and Lee teach the medical support system according to claim 3. Barral further teaches wherein the machine learning model is further configured to recognize a target portion in the recognized operative field environment (“These anatomical structures, such as the highlighted triangle of Calot 608, can be significant waypoints during the proper procession of a medical procedure. FIG. 6B illustrates the same surgical console updated with a graphical user interface 652 that displays the recognized anatomical structure (e.g., triangle of Calot 656) which is highlighted 654 in the graphical user interface 652. This overlay on the field of view may be ephemeral to provide the surgeon with an indication that the preceding operation was performed successfully.” Paragraph 0041 lines 12-22), and the CPU (“It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 550 or read only memory 520 and executed by processor 510. In embodiments, processor 510 may be a FPGA, CPU, GPU, or the like.” Paragraph 0053) is further configured to output, to the display device (“The system may further be coupled to a display device 570, such as a liquid crystal display (LCD), a light emitting diode (LED) display, etc. coupled to bus 515 through bus 565 for displaying information to a computer user.” Paragraph 0050), information regarding a gaze region that is associated with at least one of the estimated target command or the recognized target portion (“Each medical procedure system may also be coupled with one or more user interfaces … display data gathered by the medical tools (e.g., a field of view of a patient's anatomical structure captured by an endoscope)” paragraph 0019 lines 8-12).
With respect to claim 6, Barral and Lee teach the medical support system according to claim 1. Barral further teaches an input device (“An alphanumeric input device 575, including alphanumeric and other keys, may also be coupled to bus 515 through bus 565 for communicating information and command selections to processor 510. An additional user input device is cursor control device 580, such as a mouse, a trackball, stylus, joystick, virtual reality controller, sensor feedback controller, or cursor direction keys coupled to bus 515 through bus 565 for communicating direction information and command selections to processor 510.” Paragraph 0050) configured to receive a user response on the output first determination basis information (“a graphical display of critical information (e.g., a visual illustration of structure XYZ), notification when an action deviates from an expected step of a predefined sequence of steps, requests confirmation before enabling an action of a medical tool (e.g., detects an attempted deviation of a step or entry into a zone of a patient's anatomy, stops the medical tool, and requests confirmation before allowing the action to proceed)” paragraph 0022 lines 9-16), wherein the CPU (“It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 550 or read only memory 520 and executed by processor 510. In embodiments, processor 510 may be a FPGA, CPU, GPU, or the like.” Paragraph 0053) is further configured to perform a relearning process of the machine learning model based on the received user response (“… ML medical procedure server 110 periodically refines existing ML medical procedure models based on training sets having data captured during medical procedures performed by one or more of the medical procedure systems, newly created positive/negative training sets, etc. 
…” paragraph lines 20-25 and “… the improved classifiers in the updated ML medical procedure models are pushed to one or more medical procedure systems ... As a result, the knowledge and experience of a potentially large group of medical professionals is collected, merged, and distilled into a knowledge base (e.g., the trained ML medical procedure models)” paragraph 0025 lines 13-19).
With respect to claim 7, Barral and Lee teach the medical support system according to claim 5. Barral further teaches wherein the CPU (“It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 550 or read only memory 520 and executed by processor 510. In embodiments, processor 510 may be a FPGA, CPU, GPU, or the like.” Paragraph 0053) is further configured to output, to the display device (“The system may further be coupled to a display device 570, such as a liquid crystal display (LCD), a light emitting diode (LED) display, etc. coupled to bus 515 through bus 565 for displaying information to a computer user.” Paragraph 0050), images (“Each medical procedure system may also be coupled with one or more user interfaces, such as graphical user interfaces that display data gathered by the medical tools …, provide additional visualizations” paragraph 0019) indicating the gaze region (“an insert 604 can be displayed 606 based on the current operation to inform the surgeon what anatomical structures should be within the surgeon's field of view at the completion of the dissection (e.g., the triangle of Calot 608 surrounded by other anatomical structures such as the liver, arteries, ducts, etc.)” paragraph 0041 lines 7-12).
Lee teaches a second heat map image (“may apply a Grad CAM technology to generate a heat map for each pixel in the surgical image” page 10 line 27).
Lee is analogous art to the claimed invention. Lee is directed to a surgical image analysis and recognition system (“Therefore, the present invention is a surgical analysis device, a surgical image analysis and recognition system, and a method capable of quickly obtaining accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through a surgical element recognition model formed in parallel.” Page 2 lines 35-38). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the surgical area views of Barral with the heat map of Lee with the expectation that doing so would lead to the quick obtaining of accurate recognition results for surgical elements in a recognition model (“a method capable of quickly obtaining accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through a surgical element recognition model” page 2 lines 36-37), providing accurate surgical analysis results to medical staff (“it is possible to provide accurate surgical analysis results to the medical staff” page 5 lines 22-23).
With respect to claim 9, Barral and Lee teach the medical support system according to claim 7. Barral further teaches wherein the CPU (“It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 550 or read only memory 520 and executed by processor 510. In embodiments, processor 510 may be a FPGA, CPU, GPU, or the like.” Paragraph 0053) is further configured to: output images (“an insert 604 can be displayed 606 based on the current operation to inform the surgeon what anatomical structures should be within the surgeon's field of view at the completion of the dissection (e.g., the triangle of Calot 608 surrounded by other anatomical structures such as the liver, arteries, ducts, etc.)” paragraph 0041 lines 7-12) to the display device (“The system may further be coupled to a display device 570, such as a liquid crystal display (LCD), a light emitting diode (LED) display, etc. coupled to bus 515 through bus 565 for displaying information to a computer user.” Paragraph 0050).
Lee teaches generating the second heat map image based on a Grad-Cam algorithm (“may apply a Grad CAM technology to generate a heat map for each pixel in the surgical image” page 10 line 27).
With respect to claim 12, Barral and Lee teach the medical support system according to claim 3. Barral further teaches the display device (“The system may further be coupled to a display device 570, such as a liquid crystal display (LCD), a light emitting diode (LED) display, etc. coupled to bus 515 through bus 565 for displaying information to a computer user.” Paragraph 0050 and “Each medical procedure system may also be coupled with one or more user interfaces … display data gathered by the medical tools (e.g., a field of view of a patient's anatomical structure captured by an endoscope)” paragraph 0019 lines 8-12), wherein the medical instrument is an endoscope that captures an operative field image (“Each medical procedure system may also be coupled with one or more user interfaces … display data gathered by the medical tools (e.g., a field of view of a patient's anatomical structure captured by an endoscope)” paragraph 0019 lines 8-12), and the display device (“The system may further be coupled to a display device 570, such as a liquid crystal display (LCD), a light emitting diode (LED) display, etc. 
coupled to bus 515 through bus 565 for displaying information to a computer user.” Paragraph 0050) is configured to present at least one of the output first determination basis information (“Each medical procedure system may also be coupled with one or more user interfaces … display data gathered by the medical tools (e.g., a field of view of a patient's anatomical structure captured by an endoscope), provide haptic feedback (e.g., activate a visual, auditory, or sensory alarm), enable access to additional data (e.g., preoperative medical images, a patient's medical history, a patient vital information, etc.), provide additional visualizations (e.g., such as a table the patient is on, arms of robotic surgical equipment, etc.)” paragraph 0019 lines 8-17 and “…dynamically update a sequence of steps of a medical procedure given case-specific and/or action-specific information” paragraph 0021 lines 16-18 and “As discussed herein, the tracked operations provide the ML medical server with training data sets of real medical procedure data, including sensor data, outcomes, deviations, annotations, etc. 
By providing the medical procedure data, front end training sets (e.g., training data sets in data store 214) can be expanded so that classifiers of corresponding ML medical procedure models can be retrained, refined, expanded for new circumstances (e.g., patient characteristics, complications during a medical procedure, as well as other circumstances), etc.” paragraph 0042) or the output second determination basis information in a screen that displays the captured operative field image (“Each medical procedure system may also be coupled with one or more user interfaces … display data gathered by the medical tools (e.g., a field of view of a patient's anatomical structure captured by an endoscope), provide haptic feedback (e.g., activate a visual, auditory, or sensory alarm), enable access to additional data (e.g., preoperative medical images, a patient's medical history, a patient vital information, etc.), provide additional visualizations (e.g., such as a table the patient is on, arms of robotic surgical equipment, etc.)” paragraph 0019 lines 8-17 and “… identification and tracking of steps of a medical procedure enable medical procedure system 140 to provide guidance and decision support to one or more medical professionals performing a medical procedure. This can include… a graphical display of a next step, generation of reminders of critical information … a graphical display of critical information” paragraph 0022 lines 1-9 and “These anatomical structures, such as the highlighted triangle of Calot 608, can be significant waypoints during the proper procession of a medical procedure.” Paragraph 0041 lines 12-15, second determination basis information being predefined steps of procedure and targeted body areas).
With respect to claim 13, Barral and Lee teach the medical support system according to claim 1. Barral further teaches the display device (“The system may further be coupled to a display device 570, such as a liquid crystal display (LCD), a light emitting diode (LED) display, etc. coupled to bus 515 through bus 565 for displaying information to a computer user.” Paragraph 0050) configured to present ("Each medical procedure system may also be coupled with one or more user interfaces ... display data gathered by the medical tools (e.g., a field of view of a patient's anatomical structure captured by an endoscope), provide haptic feedback (e.g., activate a visual, auditory, or sensory alarm), enable access to additional data (e.g., preoperative medical images, a patient's medical history, a patient vital information, etc.), provide additional visualizations (e.g., such as a table the patient is on, arms of robotic surgical equipment, etc.)" paragraph 0019 lines 8-17 and "... identification and tracking of steps of a medical procedure enable medical procedure system 140 to provide guidance and decision support to one or more medical professionals performing a medical procedure. This can include... a graphical display of a next step, generation of reminders of critical information ... a graphical display of critical information .... notification when an action deviates from an expected step of a predefined sequence of steps" paragraph 0022 lines 1-12) the output first determination basis information (“…dynamically update a sequence of steps of a medical procedure given case-specific and/or action-specific information” paragraph 0021 lines 16-18 and “As discussed herein, the tracked operations provide the ML medical server with training data sets of real medical procedure data, including sensor data, outcomes, deviations, annotations, etc. 
By providing the medical procedure data, front end training sets (e.g., training data sets in data store 214) can be expanded so that classifiers of corresponding ML medical procedure models can be retrained, refined, expanded for new circumstances (e.g., patient characteristics, complications during a medical procedure, as well as other circumstances), etc.” paragraph 0042), and an input device (“An alphanumeric input device 575, including alphanumeric and other keys, may also be coupled to bus 515 through bus 565 for communicating information and command selections to processor 510. An additional user input device is cursor control device 580, such as a mouse, a trackball, stylus, joystick, virtual reality controller, sensor feedback controller, or cursor direction keys coupled to bus 515 through bus 565 for communicating direction information and command selections to processor 510.” Paragraph 0050) configured to receive, based on the presented first determination basis information, a user instruction (“a graphical display of critical information (e.g., a visual illustration of structure XYZ), notification when an action deviates from an expected step of a predefined sequence of steps, requests confirmation before enabling an action of a medical tool (e.g., detects an attempted deviation of a step or entry into a zone of a patient's anatomy, stops the medical tool, and requests confirmation before allowing the action to proceed)” paragraph 0022 lines 9-16 and “In one embodiment, processing logic controls the medical procedure in view of this unexpected event by generating a request for input 680” paragraph 0041 lines 26-29), wherein the CPU (“It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 550 or read only memory 520 and executed by processor 510. 
In embodiments, processor 510 may be a FPGA, CPU, GPU, or the like.” Paragraph 0053) is further configured to correct the presented first determination basis information based on the received user instruction (“a graphical display of critical information (e.g., a visual illustration of structure XYZ), notification when an action deviates from an expected step of a predefined sequence of steps, requests confirmation before enabling an action of a medical tool (e.g., detects an attempted deviation of a step or entry into a zone of a patient's anatomy, stops the medical tool, and requests confirmation before allowing the action to proceed)”), and the machine learning model is further configured to estimate, based on the corrected first determination basis information, a second operation that is executable by the medical support system (“…dynamically update a sequence of steps of a medical procedure given case-specific and/or action-specific information” paragraph 0021 lines 16-18 and “As discussed herein, the tracked operations provide the ML medical server with training data sets of real medical procedure data, including sensor data, outcomes, deviations, annotations, etc. By providing the medical procedure data, front end training sets (e.g., training data sets in data store 214) can be expanded so that classifiers of corresponding ML medical procedure models can be retrained, refined, expanded for new circumstances (e.g., patient characteristics, complications during a medical procedure, as well as other circumstances), etc.” paragraph 0042).
With respect to claim 15, Barral teaches a medical support method comprising: recognizing an operative field environment (“As another example, the classifiers analyze visual sensor data, such as endoscope data, to recognize anatomical structures in a field of view to determine if the expected anatomical structures are in view, recognize which medical tools are currently being used, whether a deviation in the medical procedure has occurred due to the view of unexpected anatomical structures, etc” paragraph 0040 lines 19-26); estimating, by a machine learning model, based on the recognized operative field environment, an operation executable by a medical support system ("One or more medical tool(s) 260 may be selectively coupled with medical procedure system 240 based on, for example a selected procedure, and can include … autonomous tools (e.g., medical tools that act without medical professional intervention based on instruction of a trained ML medical procedure model)” paragraph 0031 lines 8-17); and outputting determination basis information regarding the estimated operation to an information presentation unit ("Each medical procedure system may also be coupled with one or more user interfaces ... display data gathered by the medical tools (e.g., a field of view of a patient's anatomical structure captured by an endoscope), provide haptic feedback (e.g., activate a visual, auditory, or sensory alarm), enable access to additional data (e.g., preoperative medical images, a patient's medical history, a patient vital information, etc.), provide additional visualizations (e.g., such as a table the patient is on, arms of robotic surgical equipment, etc.)" paragraph 0019 lines 8-17 and "... identification and tracking of steps of a medical procedure enable medical procedure system 140 to provide guidance and decision support to one or more medical professionals performing a medical procedure. This can include... 
a graphical display of a next step, generation of reminders of critical information ... a graphical display of critical information .... notification when an action deviates from an expected step of a predefined sequence of steps" paragraph 0022 lines 1-12), wherein the determination basis information includes analysis information to improve the machine learning model (“…dynamically update a sequence of steps of a medical procedure given case-specific and/or action-specific information” paragraph 0021 lines 16-18 and “As discussed herein, the tracked operations provide the ML medical server with training data sets of real medical procedure data, including sensor data, outcomes, deviations, annotations, etc. By providing the medical procedure data, front end training sets (e.g., training data sets in data store 214) can be expanded so that classifiers of corresponding ML medical procedure models can be retrained, refined, expanded for new circumstances (e.g., patient characteristics, complications during a medical procedure, as well as other circumstances), etc.” paragraph 0042).
Barral does not teach wherein the first determination basis information corresponds to a heat map image that indicates analysis information to improve the machine learning model.
Lee teaches wherein the first determination basis information corresponds to a first heat map image (“The presence or absence of the bleeding module may highlight the portion of the bleeding area in the surgical image through the converted pixel value, and thereby segment the bleeding area to estimate the location of the corresponding area. In addition, the bleeding presence recognition module may apply a Grad CAM technology to generate a heat map for each pixel in the surgical image based on the feature map and convert each pixel to a probability value. The bleeding presence recognition module may specify a bleeding area in the surgical image based on the probability value of each converted pixel. For example, the bleeding presence recognition module may determine a pixel area having a large probability value as a bleeding area.” Page 10 Paragraph 5) that indicates analysis information (“The surgical analysis layer may grasp specific medical meanings or make specific medical judgments based on each surgical element recognized through the surgical element recognition layer. For example, the surgical analysis layer, using at least one surgical element, bleeding loss evaluation (Blood Loss Estimation), internal body damage detection (Anatomy Injury Detection; for example, organ damage detection), instrument misuse detection (Instrument Misuse Detection) , It is possible to grasp the medical meaning such as the optimal planning procedure (Optimal Planning Suggestion) or to determine the medical condition.” Page 16 paragraph 1).
Lee is analogous art to the claimed invention. Lee is directed to a surgical image analysis and recognition system (“Therefore, the present invention is a surgical analysis device, a surgical image analysis and recognition system, and a method capable of quickly obtaining accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through a surgical element recognition model formed in parallel.” Page 2 lines 35-38). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the surgical area views of Barral with the heat map of Lee, with the expectation that doing so would enable quickly obtaining accurate recognition results for surgical elements in a recognition model (“a method capable of quickly obtaining accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through a surgical element recognition model” page 2 lines 36-37) and provide accurate surgical analysis results to medical staff (“it is possible to provide accurate surgical analysis results to the medical staff” page 5 lines 22-23).
With respect to claim 16, Barral teaches a non-transitory computer-readable medium (“The system further comprises a random access memory (RAM) or other volatile storage device 550 (referred to as memory), coupled to bus 515 for storing information and instructions to be executed by processor 510.” Paragraph 0049 lines 4-8) having stored thereon, computer-executable instructions which, when executed by a computer, cause the computer to execute operations (“The system further comprises a random access memory (RAM) or other volatile storage device 550 (referred to as memory), coupled to bus 515 for storing information and instructions to be executed by processor 510.” Paragraph 0049 lines 4-8), the operations comprising: recognizing an operative field environment (“As another example, the classifiers analyze visual sensor data, such as endoscope data, to recognize anatomical structures in a field of view to determine if the expected anatomical structures are in view, recognize which medical tools are currently being used, whether a deviation in the medical procedure has occurred due to the view of unexpected anatomical structures, etc” paragraph 0040 lines 19-26); estimating, by a machine learning model, based on the recognized operative field environment, an operation executable by a medical support system on a basis of a recognition result in the recognition step ("One or more medical tool(s) 260 may be selectively coupled with medical procedure system 240 based on, for example a selected procedure, and can include … autonomous tools (e.g., medical tools that act without medical professional intervention based on instruction of a trained ML medical procedure model)” paragraph 0031 lines 8-17); and outputting determination basis information regarding the estimated operation to an information presentation unit ("Each medical procedure system may also be coupled with one or more user interfaces ... 
display data gathered by the medical tools (e.g., a field of view of a patient's anatomical structure captured by an endoscope), provide haptic feedback (e.g., activate a visual, auditory, or sensory alarm), enable access to additional data (e.g., preoperative medical images, a patient's medical history, a patient vital information, etc.), provide additional visualizations (e.g., such as a table the patient is on, arms of robotic surgical equipment, etc.)" paragraph 0019 lines 8-17 and "... identification and tracking of steps of a medical procedure enable medical procedure system 140 to provide guidance and decision support to one or more medical professionals performing a medical procedure. This can include... a graphical display of a next step, generation of reminders of critical information ... a graphical display of critical information .... notification when an action deviates from an expected step of a predefined sequence of steps" paragraph 0022 lines 1-12), wherein the determination basis information includes analysis information to improve the machine learning model (“…dynamically update a sequence of steps of a medical procedure given case-specific and/or action-specific information” paragraph 0021 lines 16-18 and “As discussed herein, the tracked operations provide the ML medical server with training data sets of real medical procedure data, including sensor data, outcomes, deviations, annotations, etc. By providing the medical procedure data, front end training sets (e.g., training data sets in data store 214) can be expanded so that classifiers of corresponding ML medical procedure models can be retrained, refined, expanded for new circumstances (e.g., patient characteristics, complications during a medical procedure, as well as other circumstances), etc.” paragraph 0042).
Barral does not teach wherein the first determination basis information corresponds to a heat map image that indicates analysis information to improve the machine learning model.
Lee teaches wherein the first determination basis information corresponds to a first heat map image (“The presence or absence of the bleeding module may highlight the portion of the bleeding area in the surgical image through the converted pixel value, and thereby segment the bleeding area to estimate the location of the corresponding area. In addition, the bleeding presence recognition module may apply a Grad CAM technology to generate a heat map for each pixel in the surgical image based on the feature map and convert each pixel to a probability value. The bleeding presence recognition module may specify a bleeding area in the surgical image based on the probability value of each converted pixel. For example, the bleeding presence recognition module may determine a pixel area having a large probability value as a bleeding area.” Page 10 paragraph 5) that indicates analysis information (“The surgical analysis layer may grasp specific medical meanings or make specific medical judgments based on each surgical element recognized through the surgical element recognition layer. For example, the surgical analysis layer, using at least one surgical element, bleeding loss evaluation (Blood Loss Estimation), internal body damage detection (Anatomy Injury Detection; for example, organ damage detection), instrument misuse detection (Instrument Misuse Detection) , It is possible to grasp the medical meaning such as the optimal planning procedure (Optimal Planning Suggestion) or to determine the medical condition.” Page 16 paragraph 1).
Lee is analogous art to the claimed invention. Lee is directed to a surgical image analysis and recognition system (“Therefore, the present invention is a surgical analysis device, a surgical image analysis and recognition system, and a method capable of quickly obtaining accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through a surgical element recognition model formed in parallel.” Page 2 lines 35-38). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the surgical area views of Barral with the heat map of Lee, with the expectation that doing so would enable quickly obtaining accurate recognition results for surgical elements in a recognition model (“a method capable of quickly obtaining accurate recognition results by recognizing each of a plurality of surgical elements in a surgical image through a surgical element recognition model” page 2 lines 36-37) and provide accurate surgical analysis results to medical staff (“it is possible to provide accurate surgical analysis results to the medical staff” page 5 lines 22-23).
Claims 2, 4, 10, 11, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Barral and Lee, as applied above, and further in view of Wolf (US10886015B2).
With respect to claim 2, Barral and Lee teach the medical support system according to claim 1. Barral further teaches outputting model information (“the selected ML medical procedure model can be displayed to a medical professional via user interface 270 to enable the medical professional to accept or reject the model, as well as to configure a model with specific patient attributes, configure a model with procedural options, etc” paragraph 0032 lines 11-15) to the display device (“The system may further be coupled to a display device 570, such as a liquid crystal display (LCD), a light emitting diode (LED) display, etc. coupled to bus 515 through bus 565 for displaying information to a computer user.” Paragraph 0050).
However, Barral and Lee do not teach wherein the CPU further calculates a reliability regarding an estimation result of the machine learning model and outputs the reliability to the display device.
Wolf teaches a CPU (“Consistent with disclosed embodiments, “at least one processor” may constitute any physical device or group of devices having electric circuitry that performs a logic operation on an input or inputs. For example, the at least one processor may include one or more integrated circuits (IC), including application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field-programmable gate array (FPGA), server, virtual server, or other circuits suitable for executing instructions or performing logic operations.” Page 47 col 9 lines 41-42) configured to calculate a reliability regarding the estimated first operation by the machine learning model (“In various embodiments, the event identifying string may be compared with a known historical name of a corresponding intraoperative event to evaluate an associated error for the model. If the error is below a predetermined threshold value, the model may be trained using other input data. Alternatively, if the error is above the threshold value, model parameters may be modified, and a training step may be repeated using the first training data” page 146 col. 196 lines 34-41).
Wolf is analogous art in the same field of endeavor as the claimed invention. It is directed to a system that provides support for surgeon decision making based on surgical video analysis (“Consistent with disclosed embodiments, systems, methods, and computer readable media related to surgical preparation are disclosed. The embodiments may include accessing a repository of a plurality of sets of surgical video footage reflecting a plurality of surgical procedures performed on differing patients and including intraoperative surgical events, surgical outcomes, patient characteristics, surgeon characteristics, and intraoperative surgical event characteristics.” Page 44 Col. 3 lines 21-29). Although Barral does not explicitly disclose the reliability metric it uses in training its models, Wolf presents a reliability metric that a person of ordinary skill in the art, before the effective filing date of the claimed invention, would have found obvious to combine with Barral and Lee’s combined medical procedure system (medical procedure models, model information display, and processing logic), with the expectation that doing so would enable a user to ascertain whether the model is ready for use or requires further training (“If the error is below a predetermined threshold value, the model may be trained using other input data. Alternatively, if the error is above the threshold value, model parameters may be modified, and a training step may be repeated using the first training data” page 146 col. 196 lines 37-41).
With respect to claim 4, Barral, Lee and Wolf teach the medical support system according to claim 2. Wolf further teaches wherein a CPU (“Consistent with disclosed embodiments, “at least one processor” may constitute any physical device or group of devices having electric circuitry that performs a logic operation on an input or inputs. For example, the at least one processor may include one or more integrated circuits (IC), including application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field-programmable gate array (FPGA), server, virtual server, or other circuits suitable for executing instructions or performing logic operations.” Page 47 col 9 lines 41-42) is further configured to calculate the reliability based on a Bayesian deep learning process (“… a machine learning model may be trained using training examples, each training example may include video footage known to be associated with surgical procedures, surgical phases, intraoperative events, and/or event characteristics, together with labels indicating locations within the video footage… a Naïve Bayes model … artificial neural networks (such as deep neural networks, convolutional neural networks, etc.) …” page 54 col. 23 lines 1-16).
With respect to claim 10, Barral, Lee and Wolf teach the medical support system according to claim 2. Wolf additionally teaches wherein a CPU (“Consistent with disclosed embodiments, “at least one processor” may constitute any physical device or group of devices having electric circuitry that performs a logic operation on an input or inputs. For example, the at least one processor may include one or more integrated circuits (IC), including application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field-programmable gate array (FPGA), server, virtual server, or other circuits suitable for executing instructions or performing logic operations.” Page 47 col 9 lines 41-42) is further configured to calculate, as the reliability, a numerical value indicating an accuracy of the prediction result, wherein the prediction result is associated with the estimated first operation (“In various embodiments, the event identifying string may be compared with a known historical name of a corresponding intraoperative event to evaluate an associated error for the model. If the error is below a predetermined threshold value, the model may be trained using other input data. Alternatively, if the error is above the threshold value, model parameters may be modified, and a training step may be repeated using the first training data” page 146 col. 196 lines 34-41).
With respect to claim 11, Barral, Lee and Wolf teach the medical support system according to claim 2. Barral further teaches wherein the machine learning model is further configured to estimate, based on the recognized operative field environment, a plurality of candidate operations that is executable by the medical support system (“In another embodiment, the procedure and/or patient attributes can be automatically recognized using a ML medical procedure model (e.g., a selection model) that analyzes captured video or image data at the beginning of a medical procedure. Based on the procedure and patient attributes, MLM based procedure engine 250 selects an appropriate ML medical procedure model from trained MLM(s) data store 246. In one embodiment, the selected ML medical procedure model can be displayed to a medical professional via user interface 270 to enable the medical professional to accept or reject the model, as well as to configure a model with specific patient attributes, configure a model with procedural options, etc” paragraph 0032), and the estimated plurality of candidate operations includes the first operation (see figure 2, element 240).
Barral does not teach the CPU calculating the reliability of each of a plurality of candidate operations estimated by the machine learning model.
Wolf teaches a CPU (“Consistent with disclosed embodiments, “at least one processor” may constitute any physical device or group of devices having electric circuitry that performs a logic operation on an input or inputs. For example, the at least one processor may include one or more integrated circuits (IC), including application-specific integrated circuit (ASIC), microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field-programmable gate array (FPGA), server, virtual server, or other circuits suitable for executing instructions or performing logic operations.” Page 47 col 9 lines 41-42) that is further configured to calculate the reliability of each of the estimated plurality of candidate operations (“In various embodiments, the event identifying string may be compared with a known historical name of a corresponding intraoperative event to evaluate an associated error for the model. If the error is below a predetermined threshold value, the model may be trained using other input data. Alternatively, if the error is above the threshold value, model parameters may be modified, and a training step may be repeated using the first training data” page 146 col. 196 lines 34-41).
With respect to claim 14, Barral, Lee and Wolf teach the medical support system according to claim 2. Barral further teaches an input device (“An alphanumeric input device 575, including alphanumeric and other keys, may also be coupled to bus 515 through bus 565 for communicating information and command selections to processor 510. An additional user input device is cursor control device 580, such as a mouse, a trackball, stylus, joystick, virtual reality controller, sensor feedback controller, or cursor direction keys coupled to bus 515 through bus 565 for communicating direction information and command selections to processor 510.” Paragraph 0050) configured to receive a user instruction based on model information (“In one embodiment, the selected ML medical procedure model can be displayed to a medical professional via user interface 270 to enable the medical professional to accept or reject the model, as well as to configure a model with specific patient attributes, configure a model with procedural options, etc” paragraph 0032 lines 11-15). Barral additionally teaches wherein the CPU (“It will be apparent to those of ordinary skill in the art that the system, method, and process described herein can be implemented as software stored in main memory 550 or read only memory 520 and executed by processor 510. In embodiments, processor 510 may be a FPGA, CPU, GPU, or the like.” Paragraph 0053) is further configured to perform an autonomous learning process on the machine learning model based on operation data (“The training can include performing a machine learning model based analysis on a plurality of sets of training data of prior medical procedures, where the training data can include visual information (e.g., digital images/mono or stereo video captured by a visual system of an endoscope to identify anatomical structures), sensor information (e.g., medical tool attributes, such as speed, depth of cuts, width of spreading, etc.
relative to identified anatomical structures), steps and/or deviations from steps in a medical procedure, benchmarked information (e.g., sequence of steps with expected maximum and minimum times duration, tool usage, expected variations, etc.), patient-specific information (e.g., age, sex, BMI, relevant medical history, etc.), manual annotations of medical procedure data (e.g., annotations of important steps of anatomical structures in a procedure and/or step of a procedure), temporal annotations of the steps/phases of a medical procedure, as well as other relevant information.” Paragraph 0023), and the operation data includes a second operation performed by the medical support system (“The training can include performing a machine learning model based analysis on a plurality of sets of training data of prior medical procedures…” paragraph 0023), the output first determination basis information (“…dynamically update a sequence of steps of a medical procedure given case-specific and/or action-specific information” paragraph 0021 lines 16-18 and “As discussed herein, the tracked operations provide the ML medical server with training data sets of real medical procedure data, including sensor data, outcomes, deviations, annotations, etc. 
By providing the medical procedure data, front end training sets (e.g., training data sets in data store 214) can be expanded so that classifiers of corresponding ML medical procedure models can be retrained, refined, expanded for new circumstances (e.g., patient characteristics, complications during a medical procedure, as well as other circumstances), etc.” paragraph 0042), and the received user instruction (“a graphical display of critical information (e.g., a visual illustration of structure XYZ), notification when an action deviates from an expected step of a predefined sequence of steps, requests confirmation before enabling an action of a medical tool (e.g., detects an attempted deviation of a step or entry into a zone of a patient's anatomy, stops the medical tool, and requests confirmation before allowing the action to proceed)” paragraph 0022 lines 9-16).
Wolf further teaches the reliability calculated by the CPU (“In various embodiments, the event identifying string may be compared with a known historical name of a corresponding intraoperative event to evaluate an associated error for the model. If the error is below a predetermined threshold value, the model may be trained using other input data. Alternatively, if the error is above the threshold value, model parameters may be modified, and a training step may be repeated using the first training data” page 146 col. 196 lines 34-41).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Barral and Lee as applied to claim 7 above, and further in view of Shelton (US20220108788A1).
With respect to claim 8, Barral and Lee teach the medical support system according to claim 7. Barral further teaches wherein the medical instrument is an endoscope (“…can include minimally invasive medical tools (e.g., endoscopes) ...” paragraph 0031 lines 11-12), and multiple graphical visualizations (“Each medical procedure system may also be coupled with one or more user interfaces, such as graphical user interfaces that display data gathered by the medical tools …, provide additional visualizations” paragraph 0019) indicating the gaze region (“an insert 604 can be displayed 606 based on the current operation to inform the surgeon what anatomical structures should be within the surgeon's field of view at the completion of the dissection (e.g., the triangle of Calot 608 surrounded by other anatomical structures such as the liver, arteries, ducts, etc.)” paragraph 0041 lines 7-12).
Barral does not teach that the heat map image corresponds to an image with a wider angle of view than a monitor image of the endoscope.
Lee teaches the second heat map image (“may apply a Grad CAM technology to generate a heat map for each pixel in the surgical image” page 10 line 27), but does not teach the second heat map image having a wider angle of view than a monitor image of the endoscope.
Shelton teaches an image corresponding to an image with a wider angle of view than a monitor image of the endoscope (“For example, a user may move the images display on the secondary display to the primary display or vice versa. Displaying a narrow field of view image in a first window of a display and a composite image of several other perspectives such as wider fields of view enables the surgeon to view a magnified image of the surgical site simultaneously with wider fields of view of the surgical site without moving the scope.” Paragraph 0282 lines 15-22).
Shelton is analogous art in the same field of endeavor as the claimed invention. Shelton is directed towards a surgical hub and operating room display (“A surgical hub and/or medical instrument may be provided for controlling a display using situational awareness” paragraph 0010 lines 1-3). A person of ordinary skill in the art before the effective filing date of the claimed invention would have found it obvious to combine the teachings of Barral and Lee with the teachings of Shelton, with the expectation that doing so would enable a better view of the surgical site without requiring movement of the medical imaging device (“During a surgical procedure the surgical site may be displayed as a narrow field of view of a medical imaging device on the primary surgical hub display. Items outside the current field of view, collateral structures, may not be viewed without moving the medical imaging device.” Paragraph 0274).
Response to Arguments
Applicant’s arguments filed 1/12/2026 have been fully considered.
With respect to the previous 35 U.S.C. 102(a)(1)/(a)(2) rejection of independent claims 1, 15, and 16, applicant argues on pages 9-11 that Barral does not teach the newly added limitations of claim 1 (and the substantially similar claims 15 and 16). The examiner agrees and has updated the previous rejection to a 35 U.S.C. 103 rejection based on the combination of Barral and Lee (see above claim mapping).
With respect to the previous 35 U.S.C. 102(a)(1)/(a)(2) rejections of dependent claims 3, 5, 6, 12, and 13, applicant argues on page 12 that Barral does not teach the included limitations, by virtue of not teaching their parent claims and not teaching the newly added limitations. The examiner agrees and has updated those rejections in line with those of their parent claims (see above claim mapping).
With respect to the previous 35 U.S.C. 103 rejection of claims 2, 4, 10, 11, and 14, applicant argues on page 12 that “Wolf does not remedy the above-noted deficiencies of Barral.” (See page 12 section C). The examiner finds this point moot because Lee has been used to cure the deficiencies of Barral (see above claim mapping); therefore, Wolf, or the combination of Wolf and Barral, need not teach all of the limitations. Applicant further argues that the claims recite information not taught by any of the cited references. The examiner disagrees and points to the above claim mapping.
With respect to the previous 35 U.S.C. 103 rejection of claims 7 and 9, applicant asserts on page 12 that Lee does not remedy the deficiencies of Barral alone or teach the applicable subject matter when taken in combination with Barral. The examiner disagrees and points to the above claim mapping. Applicant further argues that the claims recite information not taught by any of the cited references. Again, the examiner disagrees and points to the above claim mapping.
With respect to the previous rejection of claim 8, applicant argues on page 13 that “Shelton does not remedy the above-noted deficiencies of Barral.” (See page 13 section E). The examiner finds this point moot because Lee has been used to cure the deficiencies of Barral (see above claim mapping); therefore, Shelton, or the combination of Shelton and Barral, need not teach all of the limitations. Applicant further argues that the claims recite information not taught by any of the cited references. The examiner disagrees and points to the above claim mapping.
Ultimately, all of the previous rejections have been updated, and accordingly all claims remain rejected. Due to the substantial number of updated rejections based on the amended claim limitations, the examiner declines the request for interview, but encourages the applicant to reach out again upon consideration of the new 35 U.S.C. 112(a) and 35 U.S.C. 103 rejections.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REBECCA C WILLIAMS whose telephone number is (571)272-7074. The examiner can normally be reached M-F 7:30am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew W Bee can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/REBECCA COLETTE WILLIAMS/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677