Prosecution Insights
Last updated: April 19, 2026

Application No. 18/172,428
SYSTEM AND METHOD FOR SURGICAL TOOL BASED MODEL FUSION

Status: Non-Final OA (§103, §DP)
Filed: Feb 22, 2023
Examiner: GODDARD, TAMMY
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Edda Technology Inc.
OA Round: 2 (Non-Final)

Grant Probability: 30% (At Risk)
OA Rounds: 2-3
To Grant: 5y 4m
With Interview: 49%
Examiner Intelligence

Career Allow Rate: 30% (41 granted / 138 resolved; -32.3% vs TC avg) — grants only 30% of cases
Interview Lift: +19.5% (allowance among resolved cases with interview vs. without) — a strong ~+20% lift
Typical Timeline: 5y 4m average prosecution
Career History: 148 total applications across all art units; 10 currently pending
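As a quick sanity check, the headline figures above are mutually consistent. The short sketch below reproduces them from the raw counts (a minimal illustration; the rounding and the lift formula are assumptions, since the dashboard does not document its exact math):

```python
# Reproduce the examiner stats above from the raw counts.
# Rounding and the lift arithmetic are assumptions, not documented by the dashboard.
granted, resolved = 41, 138

allow_rate = granted / resolved                 # 0.297... -> displayed as 30%
print(f"Career allow rate: {allow_rate:.1%}")

lift = 0.195                                    # "+19.5% interview lift"
print(f"Allow rate with interview: {allow_rate + lift:.1%}")  # ~49.2% -> shown as 49%
```

Run as-is, this yields 29.7% and 49.2%, matching the displayed 30% and 49% after rounding.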

Statute-Specific Performance

§101:  3.3%  (-36.7% vs TC avg)
§103: 59.4%  (+19.4% vs TC avg)
§102: 19.9%  (-20.1% vs TC avg)
§112: 14.1%  (-25.9% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 138 resolved cases
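One detail worth noting: every per-statute delta above implies the same baseline, which suggests the chart compares each statute against a single Tech Center average estimate of 40%. A quick check (pure arithmetic on the displayed values; the underlying "rate" semantics are whatever the chart measures):

```python
# Each displayed rate minus its "vs TC avg" delta recovers the implied baseline.
rates  = {"§101": 3.3, "§103": 59.4, "§102": 19.9, "§112": 14.1}
deltas = {"§101": -36.7, "§103": 19.4, "§102": -20.1, "§112": -25.9}
for s in rates:
    print(s, "implied TC avg =", round(rates[s] - deltas[s], 1))  # 40.0 for all four
```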

Office Action

Grounds: §103, §DP (nonstatutory double patenting)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after allowance or after an Office action under Ex Parte Quayle, 25 USPQ 74, 453 O.G. 213 (Comm'r Pat. 1935). Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant's submission filed on 12 December 2025 has been entered. Claims 1, 4, 9, 12, 17, 20 and 23 are as previously presented and claims 2, 3, 5-8, 10, 11, 13-16, 18, 19, 21, 22 and 24 are as originally presented. In summary, claims 1-24 are pending in the application.

Drawings

New corrected drawings in compliance with 37 CFR 1.121(d) are required in this application because in Fig. 9, the bolded underlined 800 at the top left side of the drawing should be a bolded underlined 900 to indicate computer 900 as referred to in the specification. Applicant is advised to employ the services of a competent patent draftsperson outside the Office, as the U.S. Patent and Trademark Office no longer prepares new drawings. The corrected drawings are required in reply to the Office action to avoid abandonment of the application. The requirement for corrected drawings will not be held in abeyance.

Claim Objections

Claim 9 is objected to because of the following informalities: the word "A" should be the first word of the claim. Appropriate correction is required.

Claim 19 is objected to because of the following informalities: Claim 19 should depend from claim 17, the independent system claim, not claim 16 as currently presented. Appropriate correction is required.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claim 1 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of copending Application No. 18/172,447 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other, as the comparison below shows that the like-lettered elements of claim 1 of the instant application correspond to the like-lettered elements of claim 1 of Application No. 18/172,447. It is clear to one of ordinary skill in the art prior to the effective filing date of the application that all the elements of the application claim 1 are to be found in copending Application No. 18/172,447 claim 1, as the application claim 1 fully encompasses copending Application No. 18/172,447 claim 1. The difference between the application claim 1 and the copending Application No. 18/172,447 claim 1 lies in the fact that the copending Application No. 18/172,447 claim includes many more elements and is thus much more specific. Thus the invention of claim 1 of the copending Application No. 18/172,447 is in effect a "species" of the "generic" invention of the application claim 1. It has been held that the generic invention is "anticipated" by the "species". See In re Goodman, 29 USPQ2d 2010 (Fed. Cir. 1993). Since application claim 1 is anticipated by claim 1 of the copending Application No. 18/172,447, it is not patentably distinct from claim 1 of the copending Application No. 18/172,447.

Claims 1-24 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over the claims of Application No. 18/172,447, as the instant application has at least one examined application claim that is not patentably distinct from a reference claim of Application No. 18/172,447. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Application 18/172,428, claim 1:

A method implemented on at least one processor, a memory, and a communication platform, comprising: (a) receiving two-dimensional (2D) images capturing (1) anatomical structures associated with an organ to be operated on in a surgery and (b) (2) a surgical instrument; (c) detecting, via the at least one processor using the 2D images, a 2D location of a surgical tool and a surgical tool type within the 2D images, (d) wherein the surgical tool is attached to the surgical instrument for performing a surgical task and is located at a tip of the surgical instrument; (e) determining, via the at least one processor based on the surgical tool type, a type of focused information to be displayed to assist a user to perform the surgical task using the surgical tool; (f) determining, based on the 2D location and via model fusion, a 2D focused region in the 2D images and (g) a corresponding 3D focused region from a (h) 3D model representing the organ and surrounding anatomical structures; (i) creating, based on the type of focused information obtained from the 3D focused region, (j) a visual guide that projects the type of focused information onto the 2D focused region to assist the user to perform the surgical task using the surgical tool.

Application 18/172,447, claim 1:

A method implemented on a computer system comprising at least one processor, memory, and a communication platform, the method comprising: (h) generating, prior to a surgery, a three-dimensional (3D) model modeling at least one anatomical structure within a patient based on pre-surgery image data, (a) wherein the 3D model comprises information on a preplanned surgery trajectory with planned cut points on the surface of the organ; (a) receiving, from a laparoscopic camera, two-dimensional (2D) images capturing anatomical structures and (b) a surgical instrument present in a surgery; (d) detecting a tool attached to the surgical instrument from the 2D images, (c) wherein the tool is of a type with a pose, including a location and an orientation; (f) identifying a 2D focused region within the 2D images, wherein the 2D focused region is directly faced by a tip of the surgical instrument; (g) identifying a 3D focused region within the 3D model based on 2D anatomical features within the 2D images corresponding to corresponding 3D anatomical features in the 3D model; (i) generating focused information based on: the type of the tool; the location of the tool; and the orientation of the tool, wherein the focused information comprises at least a portion of the 3D model based on a current surgical task, the current surgical task determined based on the type, the location, and the orientation of the tool, the current surgical task occurring within the 2D focused region and the 3D focused region; and (j) displaying the focused information onto the 2D images, wherein the focused information dynamically adjusts during the surgery based on changes of at least one of the current surgical task, the type of the tool, the location of the tool, and the orientation of the tool.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3-9, 11-17 and 19-24 are rejected under 35 U.S.C. 103 as being unpatentable over Jaskola (U.S. Patent Application Publication 2024/0249487 A1, already of record, hereafter ‘487) in view of Ben-Yishai (WO 2022249190 A1, hereafter ‘190).

Regarding claim 1 (Previously Presented), Jaskola teaches a method (‘487; Abstract) implemented on at least one processor, a memory, and a communication platform (‘487; fig. 3; ¶ 0054-0055; a system 10 including a computer 14 including a memory), comprising: receiving two-dimensional (2D) images capturing (1) anatomical structures associated with an organ to be operated on in a surgery (‘487; Abstract, …laparoscopic images taken during a laparoscopic procedure using a video laparoscope (2D) … calculating a placement of a pre-generated 3D model of the object (organ) relative to the location of the object (organ) in the laparoscopic images … adjusting the representation of the 3D model based on the comparison in regions … and generating composite images using the laparoscopic images and the adjusted representation of the 3D model.) and (2) a surgical instrument (‘487; fig. 4; ¶ 0056; … a surgical instrument 8 has been detected in the laparoscopic image 2.
In order to grant the surgeon a full and unobstructed view, the immediate vicinity of the surgical instrument 8 is defined as a cutout region 20 and removed from the rendering of the 3D model 4, thus clearing the view of surgical instrument 8 and its vicinity….); detecting, via the at least one processor using the 2D images, a 2D location of a surgical tool and a surgical tool type within the 2D images (‘487; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; ¶ 0067,…the AI model 34 can identify the type, location and orientation of specific organs or organic structures represented in the patient specific 3D model, or the location and, if applicable, type and/or orientation, of a surgical instrument, in the laparoscopic images 2), wherein the surgical tool is attached to the surgical instrument for performing a surgical task and is located at a tip of the surgical instrument (‘487; fig. 4); determining, via the at least one processor based on the surgical tool type (‘487; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; ¶ 0067,…the AI model 34 can identify the type, location and orientation of specific organs or organic structures represented in the patient specific 3D model, or the location and, if applicable, type and/or orientation, of a surgical instrument, in the laparoscopic images 2), a type of focused information to be displayed to assist a user to perform the surgical task using the surgical tool (‘487; fig. 4, element 20; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; the 3D model 4 of the organic structure overlays over the real time laparoscopic image 2 solely as an outline, in this case a high contrast contour 22. The high contrast contour 22 has the form of a double line of different colors, the two lines being immediately adjacent to one another. In this case, one of the lines is white and the other black, since white and black provide the starkest possible contrast in brightness. Such a high contrast contour 22 will be visible over any background structures, since in regions with low brightness, the white line will stand out whereas in regions with high brightness, the black line will stand out. In addition or alternatively to the black and white lines, colored lines having a high color contrast can be used. The colors can also be chosen to have the best possible color contrast to the red color background of typical laparoscopic images 2); determining, based on the 2D location and via model fusion, a 2D focused region in the 2D images and a corresponding 3D focused region from a 3D model representing the organ and surrounding anatomical structures (‘487; fig. 4, element 20; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; the 3D model 4 of the organic structure overlays over the real time laparoscopic image 2 solely as an outline, in this case a high contrast contour 22. The high contrast contour 22 has the form of a double line of different colors, the two lines being immediately adjacent to one another. In this case, one of the lines is white and the other black, since white and black provide the starkest possible contrast in brightness. Such a high contrast contour 22 will be visible over any background structures, since in regions with low brightness, the white line will stand out whereas in regions with high brightness, the black line will stand out. In addition or alternatively to the black and white lines, colored lines having a high color contrast can be used. The colors can also be chosen to have the best possible color contrast to the red color background of typical laparoscopic images 2); creating, based on the type of focused information obtained from the 3D focused region (‘487; fig. 4, element 20; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; the 3D model 4 of the organic structure overlays over the real time laparoscopic image 2 solely as an outline, in this case a high contrast contour 22. The high contrast contour 22 has the form of a double line of different colors, the two lines being immediately adjacent to one another. In this case, one of the lines is white and the other black, since white and black provide the starkest possible contrast in brightness. Such a high contrast contour 22 will be visible over any background structures, since in regions with low brightness, the white line will stand out whereas in regions with high brightness, the black line will stand out. In addition or alternatively to the black and white lines, colored lines having a high color contrast can be used. The colors can also be chosen to have the best possible color contrast to the red color background of typical laparoscopic images 2), and does not teach a visual guide that projects the type of focused information onto the 2D focused region to assist the user to perform the surgical task using the surgical tool. Ben-Yishai, working in the same field of endeavor, however, teaches a visual guide that projects the type of focused information onto the 2D focused region to assist the user to perform the surgical task using the surgical tool (‘190; page 20, lines 17-25; Turning to Fig. 1C, shown is a user 120 observing user display 102 (e.g., HMD) to view a video (e.g., magnified video) of a surgical field 124 while performing a surgical procedure on a patient 122, for example with the aid of various tools (e.g., tracked tools and/or tools that are not tracked). In some embodiments, camera system 112 acquires a stream of images (e.g., live video) of surgical field 124, corresponding to surgical procedure performed on patient 122. Computer 118 receives and processes the stream of images and transmits the processed images to user display 102. User 120 views the images via user display 102. According to one example, computer 118 may overlay guidance information and verification symbols on the stream of images, which aids user 120 during surgery) for the benefit of visually aiding the user during an actual surgical procedure. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for implementing a visual guide that projects the type of focused information onto the 2D focused region to assist the user to perform the surgical task using the surgical tool as taught by Ben-Yishai with the medical imaging manipulation and surgical region focused information to be displayed to assist a user to perform the surgical task using the imaged surgical tool as taught by Jaskola for the benefit of visually aiding the user during an actual surgical procedure.
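For readers mapping the claim language to an implementation, the sketch below restates the claimed flow (receive 2D frames → detect tool location and type → select the focused-information type → fuse the 3D model → project a visual guide) as runnable Python. Every function, data structure, and value here is invented for illustration; nothing below comes from the ‘428 specification, ‘487, or ‘190.

```python
# Hypothetical sketch of the claimed pipeline; all helper names and the
# stub logic are invented and do not appear in the application or cited art.
from dataclasses import dataclass

@dataclass
class Fusion:
    focused_region_2d: tuple   # (x, y, w, h) in image pixels
    focused_region_3d: str     # placeholder handle into the 3D model

def detect_surgical_tool(frame):
    # (c)-(d): stand-in for a detector returning the tip location and tool type
    return (320, 240), "dissector"

def focused_info_type_for(tool_type):
    # (e): the tool type selects what to display (mapping is illustrative)
    return {"dissector": "vessels", "stapler": "resection_plane"}.get(tool_type, "anatomy")

def fuse_2d_3d(frame, model_3d, anchor):
    # (f)-(h): stand-in for model fusion; a real system would register the
    # pre-operative 3D model to the live 2D view around the tool location
    x, y = anchor
    return Fusion(focused_region_2d=(x - 64, y - 64, 128, 128),
                  focused_region_3d=f"{model_3d}:region@{x},{y}")

def guide_frame(frame, model_3d):
    tool_xy, tool_type = detect_surgical_tool(frame)
    info_type = focused_info_type_for(tool_type)
    fusion = fuse_2d_3d(frame, model_3d, anchor=tool_xy)
    # (i)-(j): project the selected information onto the 2D focused region
    return {"overlay": info_type, "at": fusion.focused_region_2d,
            "from": fusion.focused_region_3d}

print(guide_frame(frame="live_2d_frame", model_3d="liver_model"))
```

Read against the rejection, the examiner maps ‘487's tool detection and model placement onto roughly steps (c) through (h) and relies on ‘190 only for the final guide-projection step (j).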
Regarding claim 3 (Original), Jaskola and Ben-Yishai teach the method of claim 1 and further teach wherein the step of determining the 2D and the 3D focused regions comprises: identifying 2D features from 2D images (‘487; ¶ 0025-0026, analyze the laparoscopic images (2D) relative to visibility of an object, calculate a placement of a pre-generated 3D model of the object relative to the location of the object in the laparoscopic images); identifying, from the 3D model, 3D features corresponding to the 2D features (‘487; ¶ 0027-0028, … compare at least one of color information and brightness information of the visual representation of the 3D model and of the laparoscopic images at the location of the 3D model and adjust the representation of the 3D model based on the result of the comparison in regions where the difference is below a predefined threshold, and generate composite images using of the laparoscopic images and the adjusted representation of the 3D model), wherein the 2D and the 3D features satisfy some predetermined criteria (‘487; ¶ 0027-0028, … compare at least one of color information and brightness information of the visual representation of the 3D model and of the laparoscopic images at the location of the 3D model and adjust the representation of the 3D model based on the result of the comparison in regions where the difference is below a predefined threshold, and generate composite images using of the laparoscopic images and the adjusted representation of the 3D model); carrying out the model fusion based on the 2D features and the corresponding 3D features to align the 3D models with respect to the location of the detected surgical tool (‘487; ¶ 0027-0028, … compare at least one of color information and brightness information of the visual representation of the 3D model and of the laparoscopic images at the location of the 3D model and adjust the representation of the 3D model based on the result of the comparison in regions where the difference is below a predefined threshold…); determining the 2D focused region and the 3D focused region based on the model fusion result (‘487; ¶ 0027-0028, … compare at least one of color information and brightness information of the visual representation of the 3D model and of the laparoscopic images at the location of the 3D model and adjust the representation of the 3D model based on the result of the comparison in regions where the difference is below a predefined threshold, and generate composite images using of the laparoscopic images and the adjusted representation of the 3D model).

Regarding claim 4 (Previously Presented), Jaskola and Ben-Yishai teach the method of claim 3 and further teach wherein the 2D focused region in the 2D images is further based on at least one of the surgical tool type, the location, and the orientation of the detected surgical tool (‘487; ¶ 0067,…the AI model 34 can identify the type, location and orientation of specific organs or organic structures represented in the patient specific 3D model, or the location and, if applicable, type and/or orientation, of a surgical instrument, in the laparoscopic images 2).

Regarding claim 5 (Original), Jaskola and Ben-Yishai teach the method of claim 1 and further teach wherein the step of creating the visual guide comprises: projecting 3D focused information retrieved from the 3D focused region onto the 2D focused region in the 2D images (‘190; page 20, lines 17-25; Turning to Fig. 1C, shown is a user 120 observing user display 102 (e.g., HMD) to view a video (e.g., magnified video) of a surgical field 124 while performing a surgical procedure on a patient 122, for example with the aid of various tools (e.g., tracked tools and/or tools that are not tracked). In some embodiments, camera system 112 acquires a stream of images (e.g., live video) of surgical field 124, corresponding to surgical procedure performed on patient 122. Computer 118 receives and processes the stream of images and transmits the processed images to user display 102. User 120 views the images via user display 102. According to one example, computer 118 may overlay guidance information and verification symbols on the stream of images, which aids user 120 during surgery); and generating the visual guide based on the 2D images with the projected 3D focused information therein (‘190; page 20, lines 17-25; Turning to Fig. 1C, shown is a user 120 observing user display 102 (e.g., HMD) to view a video (e.g., magnified video) of a surgical field 124 while performing a surgical procedure on a patient 122, for example with the aid of various tools (e.g., tracked tools and/or tools that are not tracked). In some embodiments, camera system 112 acquires a stream of images (e.g., live video) of surgical field 124, corresponding to surgical procedure performed on patient 122. Computer 118 receives and processes the stream of images and transmits the processed images to user display 102. User 120 views the images via user display 102. According to one example, computer 118 may overlay guidance information and verification symbols on the stream of images, which aids user 120 during surgery).

Regarding claim 6 (Original), Jaskola and Ben-Yishai teach the method of claim 5 and further teach the method as further comprising identifying one or more 3D non-focused regions near the 3D focused region in the 3D model (‘487; ¶ 0052, FIG. 2 illustrates a composite image 6 of a laparoscopic procedure produced according to a first embodiment. In this first embodiment, a composite image 6 of a laparoscopic image 2 and overlaid 3D model 4 of organs and other organic structures is shown. In the 3D model 4, different organic structures are represented with different colors for enhanced distinction among each other), wherein the one or more 3D non-focused regions are determined based on the detected surgical tool (‘487; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; ¶ 0067,…the AI model 34 can identify the type, location and orientation of specific organs or organic structures represented in the patient specific 3D model, or the location and, if applicable, type and/or orientation, of a surgical instrument, in the laparoscopic images 2).
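The model-fusion step mapped above for claims 3 and 4 (matching 2D image features to corresponding 3D model features, then aligning the model) is, in conventional computer-vision terms, a 2D-3D registration problem. Below is a minimal sketch using OpenCV's pose solver; the landmark correspondences and camera intrinsics are dummy values for illustration, and neither cited reference discloses this exact approach (‘487, as quoted, compares color and brightness rather than explicit point correspondences).

```python
# Minimal 2D-3D alignment sketch (illustrative only): given 2D image features
# and their corresponding 3D model features, recover the model pose so the
# 3D model can be projected into the 2D view. All points/intrinsics are dummies.
import numpy as np
import cv2

# 3D anatomical landmarks in model coordinates (mm) -- illustrative values
pts_3d = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [50, 40, 10],
                   [25, 20, 30], [10, 35, 5]], dtype=np.float64)

# Their detected 2D counterparts in the laparoscopic frame (pixels)
pts_2d = np.array([[320, 240], [420, 238], [322, 330], [424, 335],
                   [372, 280], [340, 318]], dtype=np.float64)

# Assumed pinhole intrinsics for the laparoscope (fx, fy, cx, cy made up)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist = np.zeros(5)  # assume lens distortion already corrected

ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist)
if ok:
    # Reproject 3D points through the recovered pose to locate their 2D
    # footprint in the live image (i.e., a candidate "2D focused region").
    proj, _ = cv2.projectPoints(pts_3d, rvec, tvec, K, dist)
    print("reprojected landmarks:\n", proj.reshape(-1, 2).round(1))
```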
Regarding claim 7 (Original), Jaskola and Ben-Yishai teach the method of claim 6 and further teach the method as further comprising projecting 3D non-focused information from the one or more 3D non-focused regions onto the 2D images to provide a context for the 3D focused information (‘487; ¶ 0052, FIG. 2 illustrates a composite image 6 of a laparoscopic procedure produced according to a first embodiment. In this first embodiment, a composite image 6 of a laparoscopic image 2 and overlaid 3D model 4 of organs and other organic structures is shown. In the 3D model 4, different organic structures are represented with different colors for enhanced distinction among each other).

Regarding claim 8 (Original), Jaskola and Ben-Yishai teach the method of claim 7 and further teach the method as further comprising: enhancing the visual guide by performing at least one of: highlighting the presentation of the projected 3D focused information (‘487; fig. 4, element 20; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; the 3D model 4 of the organic structure overlays over the real time laparoscopic image 2 solely as an outline, in this case a high contrast contour 22. The high contrast contour 22 has the form of a double line of different colors, the two lines being immediately adjacent to one another. In this case, one of the lines is white and the other black, since white and black provide the starkest possible contrast in brightness. Such a high contrast contour 22 will be visible over any background structures, since in regions with low brightness, the white line will stand out whereas in regions with high brightness, the black line will stand out. In addition or alternatively to the black and white lines, colored lines having a high color contrast can be used. The colors can also be chosen to have the best possible color contrast to the red color background of typical laparoscopic images 2), dimming the presentation of the projected 3D non-focused information (‘487; fig. 2), and generating an enlarged view of a sub-region in the 2D images where the 3D focused information is projected (‘190; page 20, lines 17-19, magnified video of the surgical field).

Regarding claim 9 (Previously Presented), Jaskola teaches a machine readable and non-transitory medium having information recorded thereon (‘487; ¶ 0039), wherein, the information, when read by the machine, causes the machine to perform the following steps (‘487; ¶ 0039, Another aspect can reside in a non-volatile data storage medium containing instructions for a computer that can be configured for causing the computer to perform the above-described method. Such instructions can be loaded in a computer or its frame grabber of an above-described system): receiving two-dimensional (2D) images capturing anatomical structures associated with an organ to be operated on in a surgery (‘487; Abstract, …laparoscopic images taken during a laparoscopic procedure using a video laparoscope (2D) … calculating a placement of a pre-generated 3D model of the object (organ) relative to the location of the object (organ) in the laparoscopic images … adjusting the representation of the 3D model based on the comparison in regions … and generating composite images using the laparoscopic images and the adjusted representation of the 3D model.) and a surgical instrument (‘487; fig. 4; ¶ 0056; … a surgical instrument 8 has been detected in the laparoscopic image 2. In order to grant the surgeon a full and unobstructed view, the immediate vicinity of the surgical instrument 8 is defined as a cutout region 20 and removed from the rendering of the 3D model 4, thus clearing the view of surgical instrument 8 and its vicinity….); detecting, using the 2D images, a 2D location of a surgical tool and a surgical tool type within the 2D images (‘487; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; ¶ 0067,…the AI model 34 can identify the type, location and orientation of specific organs or organic structures represented in the patient specific 3D model, or the location and, if applicable, type and/or orientation, of a surgical instrument, in the laparoscopic images 2) wherein the surgical tool is attached to the surgical instrument for performing a surgical task and is located at a tip of the surgical instrument (‘487; fig. 4); determining, based on the surgical tool type (‘487; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; ¶ 0067,…the AI model 34 can identify the type, location and orientation of specific organs or organic structures represented in the patient specific 3D model, or the location and, if applicable, type and/or orientation, of a surgical instrument, in the laparoscopic images 2), a type of focused information to be displayed to assist a user to perform the surgical task using the surgical tool (‘487; fig. 4, element 20; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; the 3D model 4 of the organic structure overlays over the real time laparoscopic image 2 solely as an outline, in this case a high contrast contour 22. The high contrast contour 22 has the form of a double line of different colors, the two lines being immediately adjacent to one another. In this case, one of the lines is white and the other black, since white and black provide the starkest possible contrast in brightness. Such a high contrast contour 22 will be visible over any background structures, since in regions with low brightness, the white line will stand out whereas in regions with high brightness, the black line will stand out. In addition or alternatively to the black and white lines, colored lines having a high color contrast can be used. The colors can also be chosen to have the best possible color contrast to the red color background of typical laparoscopic images 2); determining, based on the location and via model fusion, a 2D focused region in the 2D images and a corresponding 3D focused region from a 3D model representing the organ and surrounding anatomical structures (‘487; fig. 4, element 20; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; the 3D model 4 of the organic structure overlays over the real time laparoscopic image 2 solely as an outline, in this case a high contrast contour 22. The high contrast contour 22 has the form of a double line of different colors, the two lines being immediately adjacent to one another. In this case, one of the lines is white and the other black, since white and black provide the starkest possible contrast in brightness. Such a high contrast contour 22 will be visible over any background structures, since in regions with low brightness, the white line will stand out whereas in regions with high brightness, the black line will stand out. In addition or alternatively to the black and white lines, colored lines having a high color contrast can be used. The colors can also be chosen to have the best possible color contrast to the red color background of typical laparoscopic images 2); creating, based on the type of focused information obtained from the 3D focused region (‘487; fig. 4, element 20; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; the 3D model 4 of the organic structure overlays over the real time laparoscopic image 2 solely as an outline, in this case a high contrast contour 22. The high contrast contour 22 has the form of a double line of different colors, the two lines being immediately adjacent to one another. In this case, one of the lines is white and the other black, since white and black provide the starkest possible contrast in brightness. Such a high contrast contour 22 will be visible over any background structures, since in regions with low brightness, the white line will stand out whereas in regions with high brightness, the black line will stand out. In addition or alternatively to the black and white lines, colored lines having a high color contrast can be used. The colors can also be chosen to have the best possible color contrast to the red color background of typical laparoscopic images 2), and does not teach a visual guide that projects the type of focused information onto the 2D focused region to assist the user to perform the surgical task using the surgical tool. Ben-Yishai, working in the same field of endeavor, however, teaches a visual guide that projects the type of focused information onto the 2D focused region to assist the user to perform the surgical task using the surgical tool (‘190; page 20, lines 17-25; Turning to Fig. 1C, shown is a user 120 observing user display 102 (e.g., HMD) to view a video (e.g., magnified video) of a surgical field 124 while performing a surgical procedure on a patient 122, for example with the aid of various tools (e.g., tracked tools and/or tools that are not tracked). In some embodiments, camera system 112 acquires a stream of images (e.g., live video) of surgical field 124, corresponding to surgical procedure performed on patient 122. Computer 118 receives and processes the stream of images and transmits the processed images to user display 102. User 120 views the images via user display 102. According to one example, computer 118 may overlay guidance information and verification symbols on the stream of images, which aids user 120 during surgery) for the benefit of visually aiding the user during an actual surgical procedure. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for implementing a visual guide that projects the type of focused information onto the 2D focused region to assist the user to perform the surgical task using the surgical tool as taught by Ben-Yishai with the medical imaging manipulation and surgical region focused information to be displayed to assist a user to perform the surgical task using the imaged surgical tool as taught by Jaskola for the benefit of visually aiding the user during an actual surgical procedure.

Regarding claim 11 (Original), Jaskola and Ben-Yishai teach the medium of claim 9 and further teach wherein the step of determining the 2D and the 3D focused regions comprises: identifying 2D features from 2D images (‘487; ¶ 0025-0026, analyze the laparoscopic images (2D) relative to visibility of an object, calculate a placement of a pre-generated 3D model of the object relative to the location of the object in the laparoscopic images); identifying, from the 3D model, 3D features corresponding to the 2D features (‘487; ¶ 0027-0028, … compare at least one of color information and brightness information of the visual representation of the 3D model and of the laparoscopic images at the location of the 3D model and adjust the representation of the 3D model based on the result of the comparison in regions where the difference is below a predefined threshold, and generate composite images using of the laparoscopic images and the adjusted representation of the 3D model), wherein the 2D and the 3D features satisfy some predetermined criteria (‘487; ¶ 0027-0028, … compare at least one of color information and brightness information of the visual representation of the 3D model and of the laparoscopic images at the location of the 3D model and adjust the representation of the 3D model based on the result of the comparison in regions where the difference is below a predefined threshold, and generate composite images using of the laparoscopic images and the adjusted representation of the 3D model); carrying out the model fusion based on the 2D features and the corresponding 3D features to align the 3D models with respect to the location of the detected surgical tool (‘487; ¶ 0027-0028, … compare at least one of color information and brightness information of the visual representation of the 3D model and of the laparoscopic images at the location of the 3D model and adjust the representation of the 3D model based on the result of the comparison in regions where the difference is below a predefined threshold…); determining the 2D focused region and the 3D focused region based on the model fusion result (‘487; ¶ 0027-0028, … compare at least one of color information and brightness information of the visual representation of the 3D model and of the laparoscopic images at the location of the 3D model and adjust the representation of the 3D model based on the result of the comparison in regions where the difference is below a predefined threshold, and generate composite images using of the laparoscopic images and the adjusted representation of the 3D model).
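The ‘487 passage quoted throughout, a black-plus-white double contour that stays visible over both bright and dark tissue, is straightforward to visualize. The sketch below draws such a dual-line outline with OpenCV; it is a rough illustration of the idea as quoted, not code from any cited reference, and the silhouette mask, image, and colors are invented.

```python
# Rough illustration of '487's "high contrast contour": outline a model
# silhouette with adjacent white and black lines so that at least one line
# stands out over any background brightness. Mask, frame, and colors are dummies.
import numpy as np
import cv2

frame = np.full((480, 640, 3), (40, 40, 160), dtype=np.uint8)  # reddish stand-in image

# Stand-in silhouette of the projected 3D organ model
mask = np.zeros((480, 640), dtype=np.uint8)
cv2.ellipse(mask, (320, 240), (150, 90), 0, 0, 360, 255, -1)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(frame, contours, -1, (255, 255, 255), 5)  # wider white line
cv2.drawContours(frame, contours, -1, (0, 0, 0), 2)        # adjacent black line

cv2.imwrite("contour_overlay.png", frame)
```

In the same spirit, the "dimming" of non-focused regions mapped for claims 8 and 16 would plausibly be an alpha blend applied over the non-focused mask, though neither reference is quoted spelling that out.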
Regarding claim 12 (Previously Presented), Jaskola and Ben-Yishai teach the medium of claim 11 and further teach wherein the 2D focused region in the 2D images is further determined based on at least one of the surgical tool type, the location, and the orientation of the detected surgical tool (‘487; ¶ 0067,…the AI model 34 can identify the type, location and orientation of specific organs or organic structures represented in the patient specific 3D model, or the location and, if applicable, type and/or orientation, of a surgical instrument, in the laparoscopic images 2). Regarding claim 13 (Original), Jaskola and Ben-Yishai teach the medium of claim 9 and further teach wherein the step of creating the visual guide comprises: projecting 3D focused information retrieved from the 3D focused region onto the 2D focused region in the 2D images (‘190; page 20, lines 17-25; Turning to Fig. lC, shown is a user 120 observing user display 102 (e.g., HMD) to view a video (e.g., magnified video) of a surgical field 124 while performing a surgical procedure on a patient 122, for example with the aid of various tools (e.g., tracked tools and\or tools that are not 20 tracked). In some embodiments, camera system 112 acquires a stream of images (e.g., live video) of surgical field 124, corresponding to surgical procedure performed on patient 122. Computer 118 receives and processes the stream of images and transmits the processed images to user display 102. User 120 views the images via user display 102. According to one example, computer 118 may overlay guidance information and verification symbols on the 25 stream of images, which aids user 120 during surgery); and generating the visual guide based on the 2D images with the projected 3D focused information therein (‘190; page 20, lines 17-25; Turning to Fig. lC, shown is a user 120 observing user display 102 (e.g., HMD) to view a video (e.g., magnified video) of a surgical field 124 while performing a surgical procedure on a patient 122, for example with the aid of various tools (e.g., tracked tools and\or tools that are not 20 tracked). In some embodiments, camera system 112 acquires a stream of images (e.g., live video) of surgical field 124, corresponding to surgical procedure performed on patient 122. Computer 118 receives and processes the stream of images and transmits the processed images to user display 102. User 120 views the images via user display 102. According to one example, computer 118 may overlay guidance information and verification symbols on the 25 stream of images, which aids user 120 during surgery). Regarding claim 14 (Original), Jaskola and Ben-Yishai teach the medium of claim 13 and further teach wherein the information, when read by the machine, further causes the machine to perform the step of identifying one or more 3D non-focused regions near the 3D focused region in the 3D model (‘487; ¶ 0052, FIG. 2 illustrates a composite image 6 of a laparoscopic procedure produced according to a first embodiment. In this first embodiment, a composite image 6 of a laparoscopic image 2 and overlaid 3D model 4 of organs and other organic structures is shown. 
In the 3D model 4, different organic structures are represented with different colors for enhanced distinction among each other), wherein the one or more 3D non-focused regions are determined based on the detected surgical tool (‘487; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; ¶ 0067,…the AI model 34 can identify the type, location and orientation of specific organs or organic structures represented in the patient specific 3D model, or the location and, if applicable, type and/or orientation, of a surgical instrument, in the laparoscopic images 2). Regarding claim 15 (Original), Jaskola and Ben-Yishai teach the medium of claim 14 and further teach wherein the information, when read by the machine, further causes the machine to perform the step of projecting 3D non-focused information from the one or more 3D non-focused regions onto the 2D images to provide a context for the 3D focused information (‘487; ¶ 0052, FIG. 2 illustrates a composite image 6 of a laparoscopic procedure produced according to a first embodiment. In this first embodiment, a composite image 6 of a laparoscopic image 2 and overlaid 3D model 4 of organs and other organic structures is shown. In the 3D model 4, different organic structures are represented with different colors for enhanced distinction among each other). Regarding claim 16 (Original), Jaskola and Ben-Yishai teach the medium of claim 15 and further teach wherein the information, when read by the machine, further causes the machine to perform the step of enhancing the visual guide by performing at least one of: highlighting the presentation of the projected 3D focused information (487; fig. 4, element 20; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; the 3D model 4 of the organic structure overlays over the real time laparoscopic image 2 solely as an outline, in this case a high contrast contour 22. The high contrast contour 22 has the form of a double line of different colors, the two lines being immediately adjacent to one another. In this case, one of the lines is white and the other black, since white and black provide the starkest possible contrast in brightness. 
Such a high contrast contour 22 will be visible over any background structures, since in regions with low brightness, the white line will stand out whereas in regions with high brightness, the black line will stand out. In addition or alternatively to the black and white lines, colored lines having a high color contrast can be used. The colors can also be chosen to have the best possible color contrast to the red color background of typical laparoscopic images 2), dimming the presentation of the projected 3D non-focused information (‘487; fig. 2), and generating an enlarged view of a sub-region in the 2D images where the 3D focused information is projected (‘190; page 20, lines 17-19, magnified video of the surgical field). Regarding claim 17 (Previously Presented), Jaskola teaches a system (‘487; ¶ 0054, FIG. 3 illustrates a schematic representation of an embodiment of a system 10 for manipulating laparoscopic images 2), comprising: a surgical tool assisted model fusion mechanism implemented by a processor (‘487; fig. 3; ¶ 0054-0055; a system 10 including a computer 14 including a memory) and configured for receiving two-dimensional (2D) images capturing anatomical structures associated with an organ to be operated on in a surgery (‘487; Abstract, …laparoscopic images taken during a laparoscopic procedure using a video laparoscope (2D).. calculating a placement of a pre-generated 3D model of the object (organ) relative to the location of the object (organ) in the laparoscopic images … adjusting the representation of the 3D model based on the comparison in regions … and generating composite images using the laparoscopic images and the adjusted representation of the 3D model.) and a surgical instrument (‘487; fig. 4; ¶ 0056; … a surgical instrument 8 has been detected in the laparoscopic image 2. In order to grant the surgeon a full and unobstructed view, the immediate vicinity of the surgical instrument 8 is defined as a cutout region 20 and removed from the rendering of the 3D model 4, thus clearing the view of surgical instrument 8 and its vicinity….), detecting using the 2D images a 2D location of a surgical tool and a surgical tool type within the 2D images (‘487; ¶ 0021, A further embodiment can include detecting a surgical instrument in the laparoscopic images and adjusting the visual representation of the 3D model to avoid obscuring the surgical instrument. This can be done by making the 3D model visualization more transparent in the region of the laparoscopic image containing the surgical instrument or by cutting out the part of the organ that would be obscured by the surgical instrument from the rendering of the 3D model of the object. This can make the surgical instrument appear to be in front of the 3D model of the object in perspective. The detection of the surgical instrument can be performed by at least one of object recognition and edge detection, for example; ¶ 0067,…the AI model 34 can identify the type, location and orientation of specific organs or organic structures represented in the patient specific 3D model, or the location and, if applicable, type and/or orientation, of a surgical instrument, in the laparoscopic images 2) wherein the surgical tool is attached to the surgical instrument for performing a surgical task and is located at a tip of the surgical instrument (‘487; fig. 
determining, based on the surgical tool type (‘487; ¶ 0021 and ¶ 0067, as quoted above), a type of focused information to be displayed to assist a user to perform the surgical task using the surgical tool (‘487; fig. 4, element 20; ¶ 0021 and the high contrast contour 22 passage, as quoted above for claim 16), and determining, based on the location and via model fusion, a 2D focused region in the 2D images and a corresponding 3D focused region from a 3D model representing the organ and surrounding anatomical structures (‘487; fig. 4, element 20; ¶ 0021 and the high contrast contour 22 passage, as quoted above for claim 16); and a focused information displaying unit implemented by a processor (‘487; fig. 3; ¶ 0055, Computer 14, or frame grabber 16, as the case may be, has in its memory a 3D model 4 of organs or organic structures of the patient who is being examined with the video laparoscope 11 and configured to match the location and orientation of the 3D model 4 with the life [sic] surgical laparoscopic images 2. Once the location and orientation of the 3D model 4 relative to the laparoscopic images 2 is established, software running on computer 14 or frame grabber 16, as the case may be, combines the laparoscopic images 2 with a rendering of the 3D model 4 to form a composite image 6 which then is displayed on a screen 18) and configured for creating, based on the type of focused information obtained from the 3D focused region (‘487; fig. 4, element 20; ¶ 0021 and the high contrast contour 22 passage, as quoted above for claim 16),
and does not teach a visual guide that projects the type of focused information onto the 2D focused region to assist the user to perform the surgical task using the surgical tool. Ben-Yishai, working in the same field of endeavor, however, teaches a visual guide that projects the type of focused information onto the 2D focused region to assist the user to perform the surgical task using the surgical tool (‘190; page 20, lines 17-25; Turning to Fig. 1C, shown is a user 120 observing user display 102 (e.g., HMD) to view a video (e.g., magnified video) of a surgical field 124 while performing a surgical procedure on a patient 122, for example with the aid of various tools (e.g., tracked tools and/or tools that are not tracked). In some embodiments, camera system 112 acquires a stream of images (e.g., live video) of surgical field 124, corresponding to surgical procedure performed on patient 122. Computer 118 receives and processes the stream of images and transmits the processed images to user display 102. User 120 views the images via user display 102. According to one example, computer 118 may overlay guidance information and verification symbols on the stream of images, which aids user 120 during surgery) for the benefit of visually aiding the user during an actual surgical procedure. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the techniques for implementing a visual guide that projects the type of focused information onto the 2D focused region to assist the user to perform the surgical task using the surgical tool as taught by Ben-Yishai with the medical imaging manipulation and surgical region focused information to be displayed to assist a user to perform the surgical task using the imaged surgical tool as taught by Jaskola for the benefit of visually aiding the user during an actual surgical procedure.
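In concrete terms, the ‘487 ¶ 0021 cutout-region teaching cited throughout this ground reduces to a simple compositing rule: detect the instrument in the 2D frame, then suppress the 3D-model overlay's opacity in its vicinity so the tool reads as being in front of the model. The sketch below illustrates that rule only; it is not code from either reference, and the Canny stand-in for instrument detection, the dilation radius, and all identifiers are illustrative assumptions.

```python
# Minimal sketch of the cutout/transparency idea described in '487 ¶ 0021:
# detect a surgical instrument in the 2D frame, then suppress the rendered
# 3D-model overlay around it so the instrument appears in front of the model.
# The edge-detection stand-in and all names/parameters are assumptions.
import cv2
import numpy as np

def composite_with_cutout(frame_bgr, overlay_bgra, cutout_radius_px=15):
    """Blend a rendered 3D-model overlay onto a laparoscopic frame,
    clearing the overlay in the vicinity of a detected instrument."""
    # Stand-in for the "object recognition and edge detection" step of
    # '487: treat strong edges as a crude instrument mask.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)

    # Grow the mask to define the cutout region around the instrument.
    size = 2 * cutout_radius_px + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size))
    cutout = cv2.dilate(edges, kernel) > 0

    # Zero the overlay's alpha inside the cutout so the live image shows
    # through; merely reducing alpha would give the partial transparency
    # '487 describes as the alternative.
    alpha = overlay_bgra[..., 3].astype(np.float32) / 255.0
    alpha[cutout] = 0.0

    # Standard alpha composite (overlay colour channels assumed to be in
    # the same BGR order as the frame).
    out = frame_bgr.astype(np.float32)
    for c in range(3):
        out[..., c] = (1.0 - alpha) * out[..., c] + alpha * overlay_bgra[..., c]
    return out.astype(np.uint8)
```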
Regarding claim 19 (Original), Jaskola and Ben-Yishai teach the system of claim 17 (claim 19 recites dependence from claim 16 but is treated as depending from claim 17, consistent with the claim objection above) and further teach wherein the step of determining the 2D and the 3D focused regions comprises: identifying 2D features from 2D images (‘487; ¶ 0025-0026, analyze the laparoscopic images (2D) relative to visibility of an object, calculate a placement of a pre-generated 3D model of the object relative to the location of the object in the laparoscopic images); identifying, from the 3D model, 3D features corresponding to the 2D features (‘487; ¶ 0027-0028, … compare at least one of color information and brightness information of the visual representation of the 3D model and of the laparoscopic images at the location of the 3D model and adjust the representation of the 3D model based on the result of the comparison in regions where the difference is below a predefined threshold, and generate composite images using the laparoscopic images and the adjusted representation of the 3D model), wherein the 2D and the 3D features satisfy some predetermined criteria (‘487; ¶ 0027-0028, as quoted above); carrying out the model fusion based on the 2D features and the corresponding 3D features to align the 3D models with respect to the location of the detected surgical tool (‘487; ¶ 0027-0028, as quoted above); and determining the 2D focused region and the 3D focused region based on the model fusion result (‘487; ¶ 0027-0028, as quoted above).

Regarding claim 20 (Previously Presented), Jaskola and Ben-Yishai teach the system of claim 19 and further teach wherein the 2D focused region in the 2D images is further determined based on at least one of the surgical tool type, the location, and the orientation of the detected surgical tool (‘487; ¶ 0067, as quoted above).

Regarding claim 21 (Original), Jaskola and Ben-Yishai teach the system of claim 17 and further teach wherein the step of creating the visual guide comprises: projecting 3D focused information retrieved from the 3D focused region onto the 2D focused region in the 2D images (‘190; page 20, lines 17-25, as quoted above for claim 17); and generating the visual guide based on the 2D images with the projected 3D focused information therein (‘190; page 20, lines 17-25, as quoted above for claim 17).

Regarding claim 22 (Original), Jaskola and Ben-Yishai teach the system of claim 21 and further teach wherein the surgical tool assisted model fusion mechanism is further configured for identifying one or more 3D non-focused regions near the 3D focused region in the 3D model (‘487; ¶ 0052, as quoted above for claim 15), wherein the one or more 3D non-focused regions are determined based on the detected surgical tool (‘487; ¶ 0021 and ¶ 0067, as quoted above).
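The projection steps mapped to claims 19 and 21 (register the 3D model to the 2D frame, then project focused 3D information onto the 2D focused region) correspond to a standard pinhole projection once a registration pose is in hand. A minimal sketch, assuming an already-estimated pose (rvec, tvec) and intrinsics K; none of this comes from ‘487 or ‘190:

```python
# Sketch of the projection step in claims 19 and 21: once the 3D model is
# registered to the laparoscope, points from a 3D "focused region" can be
# projected into the 2D frame and drawn as a visual guide. The pose
# (rvec, tvec), intrinsics K, and all names here are assumptions.
import cv2
import numpy as np

def draw_focused_region(frame_bgr, region_pts_3d, rvec, tvec, K,
                        dist_coeffs=None, color=(0, 255, 255)):
    """Project Nx3 model points into the image and mark them on the frame."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume an undistorted laparoscope image
    pts_2d, _ = cv2.projectPoints(region_pts_3d.astype(np.float64),
                                  rvec, tvec, K, dist_coeffs)
    h, w = frame_bgr.shape[:2]
    for u, v in pts_2d.reshape(-1, 2):
        if 0 <= u < w and 0 <= v < h:  # keep only points inside the frame
            cv2.circle(frame_bgr, (int(round(u)), int(round(v))), 2, color, -1)
    return frame_bgr
```

The registration pose itself could plausibly come from matched 2D/3D feature pairs (for example via cv2.solvePnP), which is one conventional reading of the "model fusion" step recited in claim 19.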
Regarding claim 23 (Previously Presented), Jaskola and Ben-Yishai teach the system of claim 22 and further teach wherein the focused information displaying unit is further configured for projecting 3D non-focused information from the one or more 3D non-focused regions onto the 2D images to provide a context for the 3D focused information (‘487; ¶ 0052, as quoted above for claim 15).

Regarding claim 24 (Original), Jaskola and Ben-Yishai teach the system of claim 23 and further teach wherein the focused information displaying unit is further configured for enhancing the visual guide by performing at least one of: highlighting the presentation of the projected 3D focused information (‘487; fig. 4, element 20; ¶ 0021 and the high contrast contour 22 passage, as quoted above for claim 16), dimming the presentation of the projected 3D non-focused information (‘487; fig. 2), and generating an enlarged view of a sub-region in the 2D images where the 3D focused information is projected (‘190; page 20, lines 17-19, magnified video of the surgical field).

Claims 2, 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Jaskola (U.S. Patent Application Publication 2024/0249487 A1, already of record, hereafter ‘487) in view of Ben-Yishai (WO 2022249190 A1, hereafter ‘190), as applied to claims 1, 9 and 17 above, and further in view of Tsukagoshi et al. (U.S. Patent Application Publication 2014/0132605 A1, hereafter ‘605).
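Before the Tsukagoshi ground: the three enhancement options recited in claims 16 and 24 above (a black/white double-line contour per ‘487, dimming of non-focused content, and a magnified sub-view per ‘190) are straightforward image operations. A hedged sketch, with the focus mask, line widths, dim factor and zoom all assumed for illustration:

```python
# Sketch of the three enhancement options in claims 16 and 24: (1) a
# black/white double-line contour around the focused structure, per the
# high contrast contour 22 passage of '487; (2) dimming outside the focused
# region; (3) an enlarged view of the focused sub-region, per the magnified
# video of '190. All parameters here are illustrative assumptions.
import cv2
import numpy as np

def enhance_guide(frame_bgr, focus_mask, dim_factor=0.5, zoom=2.0):
    """focus_mask: HxW array, nonzero inside the (non-empty) focused region."""
    out = frame_bgr.copy()

    # (2) Dim everything outside the focused region.
    dimmed = (out.astype(np.float32) * dim_factor).astype(np.uint8)
    out = np.where(focus_mask[..., None] > 0, out, dimmed)

    # (1) Double-line contour: a wide black line with a thinner white line
    # drawn on top stays visible over both bright and dark backgrounds.
    contours, _ = cv2.findContours(focus_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cv2.drawContours(out, contours, -1, (0, 0, 0), 5)        # black, outer
    cv2.drawContours(out, contours, -1, (255, 255, 255), 2)  # white, inner

    # (3) Enlarged view of the focused sub-region (bounding box, then resize).
    ys, xs = np.nonzero(focus_mask)
    crop = frame_bgr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    magnified = cv2.resize(crop, None, fx=zoom, fy=zoom,
                           interpolation=cv2.INTER_LINEAR)
    return out, magnified
```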
Regarding claim 2 (Original), Jaskola and Ben-Yishai teach the method of claim 1 and do not teach wherein the detected surgical tool is one from a list comprising a surgical knife and a surgical hook; the type of focused information relevant to the surgical knife corresponds to information related to the organ; and the type of focused information relevant to the surgical hook corresponds to information related to blood vessels near or in the organ. Tsukagoshi, working in the same field of endeavor, however, teaches these limitations (‘605; ¶ 0176, …as shown in FIG. 13, the medical device is inserted into the subject; however, the terminal apparatus 240 may be configured so as to receive an operation to pinch or pull a blood vessel while using a medical device such as tweezers (or hook) or an operation to make an incision on the surface of an organ while using a scalpel or medical scissors…) for the benefit of supporting detection and visualization of a number of common surgical tool types. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to have combined the claimed mapping of surgical tool type to focused information (surgical knife to organ-related information; surgical hook to blood-vessel-related information) as taught by Tsukagoshi with the medical imaging manipulation and surgical-region focused information display taught by Jaskola in view of Ben-Yishai, for the benefit of supporting detection and visualization of a number of common surgical tool types.

Regarding claim 10 (Original), Jaskola and Ben-Yishai teach the medium of claim 9 and do not teach the tool-type-to-focused-information limitations recited above for claim 2. Tsukagoshi teaches those limitations (‘605; ¶ 0176, as quoted above for claim 2), and the combination would have been obvious for the same reasons given for claim 2.

Regarding claim 18 (Original), Jaskola and Ben-Yishai teach the system of claim 17 and do not teach the tool-type-to-focused-information limitations recited above for claim 2. Tsukagoshi teaches those limitations (‘605; ¶ 0176, as quoted above for claim 2), and the combination would have been obvious for the same reasons given for claim 2.
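Claims 2, 10 and 18 reduce to a lookup from detected tool type to the category of focused information to surface (knife: organ; hook: vessels). A one-table sketch of that mapping; the layer names are hypothetical placeholders, not terms from ‘605 or from the application:

```python
# Sketch of the tool-type-to-information mapping recited in claims 2, 10
# and 18 (knife -> organ information; hook -> nearby blood vessels), in the
# spirit of Tsukagoshi '605 ¶ 0176. Layer names are hypothetical.
FOCUSED_INFO_BY_TOOL = {
    "surgical_knife": "organ_surface_layer",  # incision: show the organ
    "surgical_hook": "blood_vessel_layer",    # retraction: show vessels
}

def focused_info_for(tool_type: str) -> str:
    """Return which 3D-model information layer to project for a tool type."""
    try:
        return FOCUSED_INFO_BY_TOOL[tool_type]
    except KeyError:
        raise ValueError(f"no focused-information rule for tool {tool_type!r}")
```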
Conclusion

The following prior art, made of record, was not relied upon but is considered pertinent to applicant's disclosure:

US 20210345893 A1, Indicator System: An indicator system for a surgical robotic system for indicating a state of at least a portion of patient anatomy, the surgical robotic system comprising a robot having a base and an arm extending from the base, the arm holding an endoscope at an end of the arm distal from the base, the endoscope being configured for insertion into a body cavity of the patient for observing a surgical site internal to a body of the patient, the indicator system comprising: a receiver configured to receive video data of at least a portion of patient anatomy at the surgical site from the endoscope; and a processor configured to: detect a spatial-temporal change in a pixel region of the received video data; identify, in response to the detected spatial-temporal change, a parameter of the patient anatomy; generate a health indicator indicative of the identified parameter or indicative of a profile of the identified parameter; and output the generated health indicator.

US 20210113273 A1, Medical Device Navigation Using a Virtual 3D Space: A system and method for providing image guidance for placement of one or more medical devices at a target location. The system can determine one or more intersections between a medical device and an image region based at least in part on first emplacement data and second emplacement data. Using the determined intersections, the system can cause one or more displays to display perspective views of image guidance cues, including an intersection ghost in a virtual 3D space.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD MARTELLO, whose telephone number is (571) 270-1883. The examiner can normally be reached M-F from 9AM to 5PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWARD MARTELLO/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Feb 22, 2023
Application Filed
Dec 07, 2024
Non-Final Rejection — §103, §DP
Apr 14, 2025
Response Filed
Jul 23, 2025
Request for Continued Examination
Jul 24, 2025
Response after Non-Final Action
Sep 24, 2025
Response after Non-Final Action
Dec 12, 2025
Request for Continued Examination
Feb 11, 2026
Response after Non-Final Action
Mar 10, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573004
GENERATIVE IMAGE FILLING USING A REFERENCE IMAGE
2y 5m to grant · Granted Mar 10, 2026
Patent 12548257
Systems and Methods for 3D Facial Modeling
2y 5m to grant · Granted Feb 10, 2026
Patent 12530839
RELIGHTABLE NEURAL RADIANCE FIELD MODEL
2y 5m to grant · Granted Jan 20, 2026
Patent 12462480
IMAGE PROCESSING METHOD
2y 5m to grant · Granted Nov 04, 2025
Patent 10140972
TEXT TO SPEECH PROCESSING SYSTEM AND METHOD, AND AN ACOUSTIC MODEL TRAINING SYSTEM AND METHOD
2y 5m to grant · Granted Nov 27, 2018

Prosecution Projections

2-3
Expected OA Rounds
30%
Grant Probability
49%
With Interview (+19.5%)
5y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 138 resolved cases by this examiner. Grant probability derived from career allow rate.
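On the arithmetic behind these figures: the interview-adjusted probability shown above appears to combine the base rate and the interview lift additively in percentage points (30% + 19.5 points ≈ 49.5%, displayed as 49%). A quick check under that additive assumption:

```python
# Assumed additive combination of the base grant probability and the
# interview lift (in percentage points); matches 30% + 19.5 -> ~49% above.
base_grant_pct = 30.0
interview_lift_pts = 19.5
print(f"{base_grant_pct + interview_lift_pts:.1f}%")  # 49.5%, shown as 49%
```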
