DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This application is a continuation of International Patent Application No. PCT/JP2022/005122, which was filed on 09 February 2022 in the Japan Patent Office and published on 17 August 2023 as WO2023152829.
Specification
The disclosure is objected to because of the following informalities: In the tenth line of paragraph [0005] of the specification, the word “first” should be the word “second”; {second CPU accesses the first storage and execute the one or more second programs to cause}. Appropriate correction is required.
Claim Objections
Claim 1 is objected to because of the following informalities: In the twelfth line of the claim, the word “first” should be the word “second”; {and a second central processing unit (CPU), and the second CPU accesses the first storage and}. Appropriate correction is required.
Support for the Examiner’s objections to the specification and to claim 1, as presented above, is found in paragraphs [0050]-[0051] and [0058]-[0059] of the specification and in figs. 6 and 7 of the drawings.
[0050] FIG. 6 is a block diagram illustrating an exemplary configuration of the operation-side device 2, according to an embodiment. As illustrated in FIG. 6, the operation-side device 2 has a configuration including a computer provided with a Central Processing Unit (CPU) 13, a Read Only Memory (ROM) 14, a Random Access Memory (RAM) 15, and a Graphics Processing Unit (GPU) 16, in addition to the endoscope 11 and the 3D monitor 12.
[0051] The CPU 13 executes various kinds of processing according to programs stored in the ROM 14 and/or a storage unit 19. The RAM 15 stores, for example, data for executing the various kinds of processing by the CPU 13.
[0058] FIG. 7 is a block diagram illustrating an exemplary configuration of the instruction-side device 3, according to an embodiment. As illustrated in FIG. 7, the instruction-side device 3 has a configuration including a computer provided with a CPU 23, a ROM 24, a RAM 25, a communication unit 28, and a GPU 26, in addition to the 3D monitor 21 and the three-dimensional position detecting unit 22.
[0059] The CPU 23 executes various kinds of processing according to programs stored in the ROM 24 or a storage unit 29. The RAM 25 stores, for example, data for executing the various kinds of processing by the CPU 23.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 6, and 7 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Navkar et al. (U.S. Patent Application Publication 2022/0079705, hereafter ‘705).
Regarding claim 1, Navkar teaches an assistance system (‘705; figs. 15A, 15B and 16; ¶ 0055, an assistance system) comprising: an assisted device (‘705; figs. 4B, 5B, 15B; Operating Room Location for the Mentee – user receiving coaching/assistance/instructions) and an assistance device (‘705; figs. 4A, 5A and 15A; Remote Location for the Mentor – user providing coaching/assistance/instructions) disposed to be separated from the assisted device (‘705; figs. 4A&B, 5A&B, 15A&B and 16; operating room workstation network connected to a remote location workstation) wherein: the assistance device (‘705; figs. 4A, 5A and 15A; Remote Location for the Mentor – user providing coaching/assistance/instructions) includes a first storage (‘705; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor) that stores one or more first programs (‘705; fig. 4A, software modules 404-408; ¶ 0084) and a first central processing unit (CPU) (‘705; figs. 4A, 5A and 15A; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor), and the first CPU accesses the first storage and executes the one or more first programs to cause the first CPU (‘705; figs. 4A, 5A and 15A; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor; fig. 4A, software modules 404-408; ¶ 0084) to implement: a three-dimensional position detecting unit (‘705; fig. 2(a), element 205, infrared camera) that three-dimensionally detects an instruction content of an instructor (‘705; ¶ 0078, The laptop 201 is connected with the operating room over a network. The remote surgeon 202 is able to see the original view 203, as seen by local surgeon, and generates an augmented view 204, which includes the virtual tooltips movements. The infrared camera 205 captures the remote surgeon's hand-gestures and generates the movements of the virtual tooltips. The augmented view 204 is sent back to local surgeon over the network for assistance); and a data transmission unit (‘705; fig. 4A, element 407, network module) that transmits instruction content information indicating the instruction content (‘705; ¶ 0078, The laptop 201 is connected with the operating room over a network. The remote surgeon 202 is able to see the original view 203, as seen by local surgeon, and generates an augmented view 204, which includes the virtual tooltips movements. The infrared camera 205 captures the remote surgeon's hand-gestures and generates the movements of the virtual tooltips. The augmented view 204 is sent back to local surgeon over the network for assistance), and the assisted device (‘705; figs. 4B, 5B, 15B; Operating Room Location for the Mentee – user receiving coaching/assistance/instructions) includes a second storage (‘705; figs. 4B, 5B, 15B; ¶ 0034, a computing system comprising at least one processor and a data storage device (second storage) in communication with the at least one processor) that stores one or more second programs (‘705; fig. 5B, software modules; ¶ 0095, The operating room workstation 501′ includes six software modules interfacing with the hardware units, processing the data, and continuously communicating with each other) and a second central processing unit (CPU) (‘705; figs. 
4B, 5B, 15B; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor), and the second CPU (‘705; figs. 4B, 5B, 15B; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor) accesses the second storage (‘705; figs. 4B, 5B, 15B; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor) and execute the one or more second programs (‘705; fig. 5B, software modules; ¶ 0095, The operating room workstation 501′ includes six software modules interfacing with the hardware units, processing the data, and continuously communicating with each other) to cause the second CPU (‘705; figs. 4B, 5B, 15B; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor) to implement: a first image generation unit (‘705; fig. 5B, element 510′; ¶ 0101, The Graphical Rendering Module 510′ renders the information fetched from the Core Processing Module 505′ onto the visualization screen 502′) that generates a three-dimensional annotation image indicating the instruction content of the instructor based on the transmitted instruction content information (‘705; ¶ 0078, The laptop 201 is connected with the operating room over a network. The remote surgeon 202 is able to see the original view 203, as seen by local surgeon, and generates an augmented view 204, which includes the virtual tooltips movements. The infrared camera 205 captures the remote surgeon's hand-gestures and generates the movements of the virtual tooltips. The augmented view 204 is sent back to local surgeon over the network for assistance); and a first image combining unit (‘705; fig. 5B, element 505’, core processing module) that combines the three-dimensional annotation image with a captured image taken by the assisted device to generate a first combined image (‘705; ¶ 0096, The Core Processing Module 505′ acts as a central core for processing data at the operating room workstation 501′. The Core Processing Module 505′ receives data from the Graphical User Interface (GUI) Module 506′, the Video Module 507′, the Tracking Module 508′, the Network Module 509′, and sends data to the Graphical Rendering Module 510′ and the Network Module 509′; ¶ 0101, The Graphical Rendering Module 510′ renders the information fetched from the Core Processing Module 505′ onto the visualization screen 502′.).
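For illustration only, the following minimal sketch (not asserted to be Navkar’s or applicant’s actual implementation) shows one conventional way a rendered three-dimensional annotation image could be combined with a captured image to produce the first combined image recited in claim 1. The function name, the use of NumPy, and the RGBA/RGB array layout are assumptions made solely for this sketch.

    import numpy as np

    def combine_annotation(captured_frame, annotation_rgba):
        # captured_frame: H x W x 3 uint8 image taken by the assisted device
        # annotation_rgba: H x W x 4 uint8 rendering of the 3D annotation,
        #                  whose alpha channel marks the annotated pixels
        alpha = annotation_rgba[..., 3:4].astype(np.float32) / 255.0
        annotation_rgb = annotation_rgba[..., :3].astype(np.float32)
        base = captured_frame.astype(np.float32)
        # Alpha compositing: annotation pixels overwrite the captured frame
        # in proportion to their opacity.
        combined = alpha * annotation_rgb + (1.0 - alpha) * base
        return combined.astype(np.uint8)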
Regarding claim 2, Navkar teaches the assistance system according to claim 1 and further teaches wherein: the three-dimensional position detecting unit three-dimensionally detects a hand position of the instructor (‘705; ¶ 0077, The remote surgeon will get connected with the operating room for tele-collaboration via laptop over network and an infrared camera. FIG. 2a shows a setup at remote surgeon's end comprising of a laptop and an infrared camera. The infrared camera can be a low-cost compact device with infra-red LEDs to capture the hand-gestures for control of the virtual tool (FIG. 2a)), and the first CPU of the assistance device accesses the first storage and executes the one or more first programs to cause the first CPU to further implement: a second image generation unit that generates a three-dimensional hand model image based on position information indicating the hand position of the instructor detected by the three-dimensional position detecting unit (‘705; fig. 2(a), element 205, infrared camera); and a second image combining unit (‘705; fig. 4A, element 406; ¶ 0086, The Augmentation Module 406 is responsible for rendering the augmented-reality scene on the visualization screen. It receives input in form of video frame from Network Module 407 and decision to render tooltip or complete tool from Control Logic Module 405. Based on the input, the module renders the augmented reality scene consisting of three-dimensional computer graphics rendered on a video stream) that combines the three-dimensional hand model image with the captured image transmitted from the assisted device to generate a second combined image (‘705; ¶ 0078, The laptop 201 is connected with the operating room over a network. The remote surgeon 202 is able to see the original view 203, as seen by local surgeon, and generates an augmented view 204, which includes the virtual tooltips movements. The infrared camera 205 captures the remote surgeon's hand-gestures and generates the movements of the virtual tooltips).
Regarding claim 3, Navkar teaches the assistance system according to claim 2 and further teaches wherein: the second image generation unit generates the three-dimensional hand model image and the three-dimensional annotation image based on the position information and the instruction content information (‘705; ¶ 0078, The laptop 201 is connected with the operating room over a network. The remote surgeon 202 is able to see the original view 203, as seen by local surgeon, and generates an augmented view 204, which includes the virtual tooltips movements. The infrared camera 205 captures the remote surgeon's hand-gestures and generates the movements of the virtual tooltips), and the second image combining unit combines the three-dimensional hand model image and the three-dimensional annotation image with the captured image to generate a third combined image (‘705; ¶ 0078, The laptop 201 is connected with the operating room over a network. The remote surgeon 202 is able to see the original view 203, as seen by local surgeon, and generates an augmented view 204, which includes the virtual tooltips movements. The infrared camera 205 captures the remote surgeon's hand-gestures and generates the movements of the virtual tooltips).
Regarding claim 6, Navkar teaches an assistance device (‘705; figs. 4A, 5A and 15A; Remote Location for the Mentor – user providing coaching/assistance/instructions) disposed to be separated from an assisted device (‘705; figs. 4B, 5B, 15B; Operating Room Location for the Mentee – user receiving coaching/assistance/instructions), the assistance device (‘705; figs. 4A, 5A and 15A; Remote Location for the Mentor – user providing coaching/assistance/instructions) comprising a storage (‘705; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor) that stores one or more programs (‘705; fig. 4A, software modules 404-408; ¶ 0084) and a central processing unit (CPU) (‘705; figs. 4A, 5A and 15A; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor), and the CPU accesses the storage and executes the one or more programs to cause the CPU (‘705; figs. 4A, 5A and 15A; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor; fig. 4A, software modules 404-408; ¶ 0084) to implement: a three-dimensional position detecting unit (‘705; fig. 2(a), element 205, infrared camera) that three-dimensionally detects an instruction content of an instructor (‘705; ¶ 0078, The laptop 201 is connected with the operating room over a network. The remote surgeon 202 is able to see the original view 203, as seen by local surgeon, and generates an augmented view 204, which includes the virtual tooltips movements. The infrared camera 205 captures the remote surgeon's hand-gestures and generates the movements of the virtual tooltips.); and a data transmission unit (‘705; fig. 4A, element 407, network module) that transmits instruction content information indicating the instruction content detected by the three-dimensional position detecting unit to the assisted device (‘705; ¶ 0078, The laptop 201 is connected with the operating room over a network. The remote surgeon 202 is able to see the original view 203, as seen by local surgeon, and generates an augmented view 204, which includes the virtual tooltips movements. The infrared camera 205 captures the remote surgeon's hand-gestures and generates the movements of the virtual tooltips. The augmented view 204 is sent back to local surgeon over the network for assistance).
Regarding claim 7, Navkar teaches an assisted device (‘705; figs. 4B, 5B, 15B; Operating Room Location for the Mentee – user receiving coaching/assistance/instructions) disposed to be separated from an assistance device (‘705; figs. 4A, 5A and 15A; Remote Location for the Mentor – user providing coaching/assistance/instructions), the assisted device comprising a storage (‘705; figs. 4B, 5B, 15B; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor) that stores one or more programs (‘705; fig. 5B, software modules; ¶ 0095, The operating room workstation 501′ includes six software modules interfacing with the hardware units, processing the data, and continuously communicating with each other) and a central processing unit (CPU) (‘705; figs. 4B, 5B, 15B; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor), and the CPU accesses the storage (‘705; figs. 4B, 5B, 15B; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor) and executes the one or more programs (‘705; fig. 5B, software modules; ¶ 0095, The operating room workstation 501′ includes six software modules interfacing with the hardware units, processing the data, and continuously communicating with each other) to cause the CPU (‘705; figs. 4B, 5B, 15B; ¶ 0034, a computing system comprising at least one processor and a data storage device in communication with the at least one processor) to implement: a first image generation unit (‘705; fig. 5B, element 510′; ¶ 0101, The Graphical Rendering Module 510′ renders the information fetched from the Core Processing Module 505′ onto the visualization screen 502′) that generates a three-dimensional annotation image indicating an instruction content of an instructor based on instruction content information indicating the instruction content of the instructor, the instruction content being three-dimensionally detected in the assistance device (‘705; figs. 4A, 5A and 15A; Remote Location for the Mentor – user providing coaching/assistance/instructions) and being received from the assistance device (‘705; figs. 4A, 5A and 15A; Remote Location for the Mentor – user providing coaching/assistance/instructions); and a first image combining unit (‘705; fig. 5B, element 505’, core processing module) that combines the three-dimensional annotation image with a captured image taken by the assisted device to generate a first combined image (‘705; ¶ 0096, The Core Processing Module 505′ acts as a central core for processing data at the operating room workstation 501′. The Core Processing Module 505′ receives data from the Graphical User Interface (GUI) Module 506′, the Video Module 507′, the Tracking Module 508′, the Network Module 509′, and sends data to the Graphical Rendering Module 510′ and the Network Module 509′; ¶ 0101, The Graphical Rendering Module 510′ renders the information fetched from the Core Processing Module 505′ onto the visualization screen 502′).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Navkar et al. (U.S. Patent Application Publication 2022/0079705, hereafter ‘705) as applied to claims 1-3, 6, and 7 above, and further in view of Katsuki et al. (U.S. Patent Application Publication 2023/0222740 A1, hereafter ‘740).
Regarding claim 4, Navkar teaches the assistance system according to claim 1 but does not teach wherein: the first image generation unit generates a right eye annotation image and a left eye annotation image based on the instruction content information using a shader process of a graphic processing unit (GPU), and combines the right eye annotation image and the left eye annotation image by a line-by-line method to generate the three-dimensional annotation image.
Katsuki, working in the same field of endeavor, however, teaches the first image generation unit generates a right eye annotation image and a left eye annotation image based on the instruction content information (‘740; fig. 3, fig. 4, element 202; ¶ 0081, the imaging element constituting the imaging unit 5009 is configured to include a pair of imaging elements for acquiring a right-eye image signal and a left-eye image signal that correspond to 3D display. The provision of 3D display allows the surgeon 5067 to more accurately recognize the depth of biological tissue in the surgical part; ¶ 0110, Referring to FIG. 4 again, the acquisition unit 302 acquires a real-time 3D surgical image generated by the 3D data generating unit 202 and the indicated 3D model image 206d (see FIG. 6). In other words, the acquisition unit 302 acquires a real-time 3D surgical image of an operation site, which can be stereoscopically viewed by a surgeon, and the 3D model image 206d that is a stereoscopic CG image associated with the 3D surgical image.) using a shader process of a graphic processing unit (GPU) (‘740; fig. 3; ¶ 0091-0092, examples of GPU based image processing – “shader” functionality), and combines the right eye annotation image and the left eye annotation image by a line-by-line method to generate the three-dimensional annotation image (‘740; fig. 10; ¶ 0017; the present disclosure provides a surgical image control device including: an acquisition unit that acquires a real-time 3D surgical image of an operation site stereoscopically viewable by a surgeon and a 3D model image that is a stereoscopic CG image associated with the 3D surgical image; and a superimposition unit that performs enhancement such that the location of the 3D model image at predetermined spatial positions is enhanced with respect to the 3D surgical image or the 3D model image at the start of superimposition of the 3D model image at the predetermined spatial positions when the 3D surgical image is stereoscopically viewed on the basis of information set for the 3D model image) for the benefit of providing a right-eye image and a left-eye image for a surgeon to enable the surgeon to three-dimensionally view the operation site.
It would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to have combined the techniques taught by Katsuki for using shader processes (programs) so that a graphic processing unit (GPU) generates right-eye images and corresponding left-eye images with varying amounts of horizontal parallax, enabling the scene images to be perceived by the user as a three-dimensional image of the scene, with the assistance system taught by Navkar, for the benefit of providing a right-eye image and a left-eye image to the surgeon so that the surgeon can three-dimensionally view the operation site.
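As a purely illustrative sketch of the line-by-line combining step recited in claim 4 (not drawn from either reference; the use of NumPy and the even/odd row assignment are assumptions), a row-interleaved 3D frame can be formed from per-eye renderings as follows:

    import numpy as np

    def combine_line_by_line(left_eye, right_eye):
        # left_eye, right_eye: H x W x 3 images rendered for each eye,
        # e.g., by separate GPU shader passes over the annotation geometry.
        # Alternating rows lets a line-interleaved (line-by-line) 3D monitor
        # present each view to the corresponding eye.
        assert left_eye.shape == right_eye.shape
        combined = np.empty_like(left_eye)
        combined[0::2] = left_eye[0::2]   # even rows from the left-eye image
        combined[1::2] = right_eye[1::2]  # odd rows from the right-eye image
        return combined

Which eye is assigned to the even rows depends on the particular line-interleaved display; the assignment shown above is assumed for illustration.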
Regarding claim 5, Navkar and Katsuki teach the assistance system according to claim 4 and further teach wherein: the first image generation unit displaces positions of the right eye annotation image and the left eye annotation image in a horizontal direction to perform a parallax adjustment of the three-dimensional annotation image (‘740; fig. 10; ¶ 0017; the present disclosure provides a surgical image control device including: an acquisition unit that acquires a real-time 3D surgical image of an operation site stereoscopically viewable by a surgeon and a 3D model image that is a stereoscopic CG image associated with the 3D surgical image; and a superimposition unit that performs enhancement such that the location of the 3D model image at predetermined spatial positions is enhanced with respect to the 3D surgical image or the 3D model image at the start of superimposition of the 3D model image at the predetermined spatial positions when the 3D surgical image is stereoscopically viewed on the basis of information set for the 3D model image; ¶ 0113, Stereoscopic vision in the present embodiment means, for example, vergence, accommodation, binocular disparity, and motion parallax but is not thereto. Vergence is, for example, stereoscopic vision that allows perception of a depth according to the principle of triangulation from the rotation angles of right and left eyes when a point is closely observed. Accommodation is stereoscopic vision that allows perception of a depth by adjusting the focus of eyes. Binocular disparity is stereoscopic vision that allows perception of a depth according to a horizontal displacement between corresponding points in the retinal presentation of right and left eyes. Motion parallax is stereoscopic vision that allows perception of a depth on the basis of a change of a retinal image when a viewpoint moves. To achieve such stereoscopic vision, for example, a lenticular-lens monitor or a barrier monitor can be used as the display device 400. Alternatively, in the display of the 3D model image 206d, a method of displaying images with a horizontal parallax in the display part of the display device 400 may be used such that the images are viewed by right and left eyes, respectively, through glasses on an operator. In this case, for example, a method using polarized glasses, an anaglyph, or a liquid-crystal shutter is available).
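For illustration only, the horizontal-displacement parallax adjustment recited in claim 5 can be sketched as shifting the two per-eye annotation images in opposite horizontal directions before the line-by-line combination. The sign convention and the use of np.roll (which wraps pixels around the image edge rather than padding) are simplifications assumed for this sketch.

    import numpy as np

    def adjust_parallax(left_eye, right_eye, shift_px):
        # Displacing the right-eye and left-eye images horizontally changes
        # the binocular disparity and hence the perceived depth of the
        # annotation when the combined image is viewed on a 3D monitor.
        left_shifted = np.roll(left_eye, -shift_px, axis=1)
        right_shifted = np.roll(right_eye, shift_px, axis=1)
        return left_shifted, right_shifted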
Conclusion
The following prior art, made of record, was not relied upon but is considered pertinent to applicant's disclosure:
US 20240187560 A1 Three-Dimensional Annotation Rendering System – A three-dimensional annotation rendering system includes a calculation device receiving signals captured by right-eye and left-eye cameras, a background image generation unit generating right-eye and left-eye background images, a pointer depth position information generation unit, a pointer longitudinal and lateral position information generation unit, an annotation start/end information generation unit, an annotation-related information storage unit storing depth position information and longitudinal and lateral position information on the pointer from a recording start to a recording end on the annotation, a pointer image generation unit generating right-eye and left-eye pointer images, an annotation-related image generation unit generating right-eye and left-eye annotation-related images, and a background annotation image synthesis unit combining the right-eye and left-eye background images, pointer images, and annotation-related images to generate final right-eye and left-eye images.
US 20210393343 A1 Interactive Information Transfer System, Interactive Information Transfer Method, And Information Transfer System – As an expert gives instructions to cause movements of their own hands transferred directly as tactile force sense while perceiving surrounding information of a collaborator on a real-time basis and watching the collaborator's line-of-sight end and movements of the collaborator's hands, the collaborator indirectly receives the instructions of the expert's manual skills, which are the expert's tacit knowledge, on a real-time basis while sharing realistic sensations with the expert who is at a remote location when the collaborator performs an act of working with their own hands.
US 20230105111 A1 System And Method For Teaching Minimally Invasive Interventions – The present disclosure provides a computer implemented method for facilitating a teaching surgeon to teach or assist a learning surgeon in minimally invasive interventions using a surgical instrument including displaying endoscopic images of an intervention site in real time on a display device, said endoscopic images being captured using a camera associated with an endoscopic instrument, and tracking a movement of one or both hands of said teaching surgeon and/or a device held by said teaching surgeon, using a real-time tracking apparatus, wherein said real-time tracking apparatus comprises a tracking system and a computing device, wherein said tracking of said movement comprises recording a sequence of tracking information of one or both hands of said teaching surgeon and/or the device held by said teaching surgeon.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD MARTELLO whose telephone number is (571)270-1883. The examiner can normally be reached on M-F from 9AM to 5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at telephone number (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EDWARD MARTELLO/Primary Examiner, Art Unit 2611