DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Priority
Receipt is acknowledged of the claim for foreign priority to Japanese Application No. JP2023-142512, filed September 1, 2023. A certified copy of the priority document as required by 37 CFR 1.55 has been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) filed August 20, 2024 has been considered and placed in the application file.
Status of the Claims
Claims 1, 7, 8, and 9 are amended.
Claim 6 is canceled.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3, and 7-9 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US Patent Application Publication No. 2019/0304107 A1 to Sakuragi (hereinafter "Sakuragi").
Regarding Claim 1, Sakuragi teaches an information processing system comprising: at least one processor, wherein the processor is configured to: (Sakuragi Figs. 1&2, “[0037] …the computer 6 is connected to an image server 8 that stores an image or the like of the subject 5 through a network. The image server 8 stores a 3-dimensional image or the like acquired by a CT apparatus, an MRI apparatus, an ultrasound apparatus, or the like, for example.”; “[0039] The computer 6 is a computer in which an additional information display program of this embodiment is installed. The computer 6 may be a work station or a personal computer that is directly operated by a doctor who performs diagnosis, or may be a server computer connected to the work station or the personal computer through a network.”; “[0040] As shown in FIG. 2, the additional information display device 2 comprises a central processing unit (CPU) 11 that is a processor, a memory 12, and a storage 13, as a configuration of a standard work station.”; and “[0043]…computer 6 functions as an image acquisition section 21, a detection section 22, an additional information determination section 23, an alignment section 24, and a display controller 25.”)
acquire a visual field image showing a visual field of a user; (Sakuragi Figs. 3 & 6, “[0044] The image acquisition section 21 acquires the captured image obtained by the camera 9 attached to the HMD 7. FIG. 3 is a diagram showing an example of a…captured image G2…”; “[0028] FIG. 6 is a diagram showing an image seen by an operator who wears an HMD.”; and “[0037] A camera 9 that performs imaging in a sight direction of the operator of the HMD 7 is attached to the HMD 7.”)
acquire, in a case where the visual field image includes a target region of interest that is a predetermined type of region of interest, relevant information related to the target region of interest; and (Sakuragi Figs. 3-12, “[0038] …the operator who wears the HMD 7 observes an image displayed on the display 4, additional information relating to the image displayed on the display 4 is displayed on the HMD 7…”; “[0044] As shown in FIG. 3, a captured image G2 includes the display 4 on which a display image G1 is displayed. The display image G1 is an angiographic image, and includes blood vessels contrasted by a contrast agent, for example, in the heart.”; “[0045] The detection section 22 detects the display 4 from the captured image G2. The detection of the display 4 may be performed by detecting a rectangular area from the captured image G2.”; “[0046] The additional information determination section 23 decides additional information relating to the display image G1…and additional information A1 is determined from the determination result.”; and “[0059] …the additional information A1 is not limited to an image…vital information, text information on a diagnosis result or the like, a graph indicating variation of the vital information, or the like may be used as the additional information A1.”)
perform control of causing a first display device viewed by the user to display the relevant information to be superimposed on the target region of interest (Sakuragi “[0047] …the alignment is performed so that the additional image A1 that is the ray-sum image is superimposed with a corresponding blood vessel of the display image G1 included in the display 4.”; “[0051] The display controller 25 displays the aligned additional information A1 on the HMD 7. Here, a position where the additional information A1 in the HMD 7 is to be displayed is converted so that the additional information A1 is aligned to the display image G1 included in the display 4 that is seen by the operator who wears the HMD 7 through the HMD 7…the converted additional information A1 is displayed on the display image G1 of the display 4 that is seen by the operator through the HMD 7 in an overlapping manner.”; and “[0058] …the alignment section 24 may receive a command from an input through the input section 14, and may select whether to convert the captured image G2 so that the shape of the display 4 becomes rectangular or to align the shape of the additional information A1 with the captured image G2.”)
; and acquire a content displayed on a second display device as the relevant information (Sakuragi “[0042] … the additional information display program regulates an image acquisition process of acquiring a captured image acquired by imaging the display 4 on which an image is displayed by the camera 9…”; and “[0056] In FIG. 9, the additional information A1 that is a model image of the liver including blood vessels and tumors of the liver is displayed on the liver included in the display image G1 displayed on the display 4 in an overlapping manner.”)
wherein the target region of interest is a region of the second display device, which is different from the first display device (Sakuragi “[0036] The display 4 corresponds to first display unit.”; and “[0037] The HMD 7 corresponds to second display unit.”)
Claim 8 is directed to an information processing method executed by a computer (Sakuragi Figs. 1&2, “[0037] …the computer 6 is connected to an image server 8 that stores an image or the like of the subject 5 through a network. The image server 8 stores a 3-dimensional image or the like acquired by a CT apparatus, an MRI apparatus, an ultrasound apparatus, or the like, for example.”; “[0039] The computer 6 is a computer in which an additional information display program of this embodiment is installed. The computer 6 may be a work station or a personal computer that is directly operated by a doctor who performs diagnosis, or may be a server computer connected to the work station or the personal computer through a network.”; “[0040] As shown in FIG. 2, the additional information display device 2 comprises a central processing unit (CPU) 11 that is a processor, a memory 12, and a storage 13, as a configuration of a standard work station.”; and “[0043]…computer 6 functions as an image acquisition section 21, a detection section 22, an additional information determination section 23, an alignment section 24, and a display controller 25.”), and its steps are similar in scope to the functions performed by the system of claim 1; therefore, claim 8 is also rejected under the same rationale as set forth in the rejection of claim 1.
Claim 9 is directed to a non-transitory computer-readable storage medium storing an information processing program causing a computer to execute a process (Sakuragi “[0042] …the memory 12 stores the additional information display program. In addition, the memory 12 also becomes a work area when the additional information display program performs processes.”; and “[0043] …the CPU 11 executes the processes according to the program, the computer 6 functions as an image acquisition section 21, a detection section 22, an additional information determination section 23, an alignment section 24, and a display controller 25…additional information display device 2 may be provided with a plurality of processors that respectively perform the image acquisition process, the detection process, the additional information determination process, the alignment process, and the display control process.”), and its scope and functions are similar to those performed by the system of claim 1; therefore, claim 9 is also rejected under the same rationale as set forth in the rejection of claim 1.
Regarding Claim 3, Sakuragi teaches wherein the processor is configured to perform control of displaying the relevant information in a region wider than the target region of interest in the visual field image (Sakuragi “[0048] On the HMD 7, an imaging direction of the camera 9 and the sight direction of the operator who wears the HMD 7 deviate from each other…so that the display 4…and the additional information A1 are appropriately aligned…the image…is magnified, is reduced, is moved in parallel, and is rotated…Thus, conversion parameters for magnifying, reducing, moving in parallel, and rotating the images are acquired…it is possible to cause an object seen through the HMD 7 and a position of the image of the object displayed on the HMD 7 to match each other.”; “[0054] The additional information determination section 23 decides the additional information A1 on the basis of the display image G1 in the above-described embodiment, but the additional information determination section 23 may determine the type of the display 4, for example, the size of the display 4 and the shape of a frame thereof, and may determine which image is displayed on the display 4 from the determination result, and may decide the additional information A1.”; and “[0055] In FIG. 8, the additional information A1 that is the aligned virtual endoscopic image is displayed along an outer frame of the display 4 on which the display image G1 that is the endoscopic image is displayed.”)
Regarding Claim 7, Sakuragi teaches wherein the second display device is a device that displays an image captured by at least one of an endoscope, a microscope, or a surgical field camera (Sakuragi “[0055] Here, in a case where the display 4 is such a type of display that displays an endoscope device, or in a case where the display image G1 is an endoscopic image…”)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 2, 4, and 5 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2019/0304107 A1 to Sakuragi (hereinafter "Sakuragi") in view of US Patent Application Publication No. 2021/0382559 A1 to Segev et al. (hereinafter "Segev").
Regarding Claim 2, Sakuragi teaches specify a region of interest to be paid attention that is a region of interest to which the user pays attention in the visual field image and acquire, in a case where the region of interest to be paid attention is the target region of interest, the relevant information related to the target region of interest; and (Sakuragi Figs. 3-12, “[0038] …the operator who wears the HMD 7 observes an image displayed on the display 4, additional information relating to the image displayed on the display 4 is displayed on the HMD 7…”; “[0044] As shown in FIG. 3, a captured image G2 includes the display 4 on which a display image G1 is displayed. The display image G1 is an angiographic image, and includes blood vessels contrasted by a contrast agent, for example, in the heart.”; “[0045] The detection section 22 detects the display 4 from the captured image G2. The detection of the display 4 may be performed by detecting a rectangular area from the captured image G2.”; and “[0046] The additional information determination section 23 decides additional information relating to the display image G1…and additional information A1 is determined from the determination result.”) perform control of causing the first display device to display the relevant information to be superimposed on the target region of interest (Sakuragi “[0051] The display controller 25 displays the aligned additional information A1 on the HMD 7. 
Here, a position where the additional information A1 in the HMD 7 is to be displayed is converted so that the additional information A1 is aligned to the display image G1 included in the display 4 that is seen by the operator who wears the HMD 7 through the HMD 7…the converted additional information A1 is displayed on the display image G1 of the display 4 that is seen by the operator through the HMD 7 in an overlapping manner.”; and “[0058] …the alignment section 24 may receive a command from an input through the input section 14, and may select whether to convert the captured image G2 so that the shape of the display 4 becomes rectangular or to align the shape of the additional information A1 with the captured image G2.”)
However, Sakuragi is silent about acquiring visual line information of a user.
Segev teaches wherein the processor is configured to acquire visual line information of the user and to perform processing based on the visual line information (Segev “[0097] Surgeon 120 views the images via HMD 102…Surgeon 120 may switch the system mode to control a selected set of system parameters by…performing an eye motion that is tracked by eye tracker 136…”)
Sakuragi and Segev are analogous art as both are related to information display devices or systems.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Sakuragi to acquire visual line information of a user, as taught by Segev, within Sakuragi’s information display device.
The motivation for doing so would have been to enhance the user experience by displaying information in response to detected eye motion.
Regarding Claim 4, Sakuragi teaches wherein the processor is configured to display the relevant information on the first display device (Sakuragi “[0048] Accordingly, in a case where the additional information A1 is displayed on the HMD 7…”)
However, Sakuragi is silent about switching whether or not to display the relevant information in response to an instruction from the user.
Segev teaches switching whether or not to display in response to an instruction from the user (Segev Figs. 2B-3A, “[0097] …Surgeon 120 views the images via HMD 102, and controls one or more settings of system 100 through the system modes…Surgeon 120 may switch the system mode to control a selected set of system parameters by performing a handless gesture, such as a head gesture while pressing on footswitch 104, performing an eye motion that is tracked by eye tracker 136, or by using acoustic driven means such as voice control, and the like.”)
Sakuragi and Segev are analogous art as both are related to information display devices or systems.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Sakuragi to switch whether or not to display information in response to an instruction from the user, as taught by Segev, within Sakuragi’s information display device.
The motivation for doing so would have been to allow user input to control the displayed information, thereby improving the user experience.
Regarding Claim 5, Sakuragi modified by Segev teaches wherein the processor is configured to receive a predetermined operation or utterance of the user as the instruction from the user (Segev “[0099] …in one system mode, surgeon 120 may zoom-in by pressing on the first pedal of footswitch 104 while moving his head upwards. As a result, surgeon 120 sees a smaller size of surgical field 124, magnified over a larger portion of his field of view.”)
Response to Arguments
Applicant’s arguments filed 12 March 2026 have been fully considered but they are not persuasive.
Applicant argues that Sakuragi (U.S. Pub. 2019/0304107) does not disclose that a content displayed on a second display device is superimposed on a region of the second display device included in a visual field image, and that Sakuragi therefore differs from claim 1 at least in that Sakuragi superimposes information that is not originally present in the region of the display 4 shown on the HMD 7, rather than enhancing the visibility of information originally displayed in that region.
Examiner replies that Sakuragi expressly states “[0047] …the alignment is performed so that the additional image A1 that is the ray-sum image is superimposed with a corresponding blood vessel of the display image G1 included in the display 4.”; “[0051] The display controller 25 displays the aligned additional information A1 on the HMD 7. Here, a position where the additional information A1 in the HMD 7 is to be displayed is converted so that the additional information A1 is aligned to the display image G1 included in the display 4 that is seen by the operator who wears the HMD 7 through the HMD 7…the converted additional information A1 is displayed on the display image G1 of the display 4 that is seen by the operator through the HMD 7 in an overlapping manner.”; “[0058] …the alignment section 24 may receive a command from an input through the input section 14, and may select whether to convert the captured image G2 so that the shape of the display 4 becomes rectangular or to align the shape of the additional information A1 with the captured image G2.”; “[0036] The display 4 corresponds to first display unit.”; and “[0037] The HMD 7 corresponds to second display unit.”; “[0042] … the additional information display program regulates an image acquisition process of acquiring a captured image acquired by imaging the display 4 on which an image is displayed by the camera 9…”; “[0056] In FIG. 
9, the additional information A1 that is a model image of the liver including blood vessels and tumors of the liver is displayed on the liver included in the display image G1 displayed on the display 4 in an overlapping manner.”; and “[0053] …display 4 is detected from the captured image G2 acquired by imaging the display 4, the additional information A1 relating to the display image G1 included in the detected display 4 is determined, the detected display 4 and the additional information A1 are aligned, and the aligned additional information A1 is displayed on the HMD 7 is provided…”. Thus, Sakuragi teaches content displayed on a second display device is superimposed on a region of the second display device included in a visual field image.
Applicant's remaining arguments are directed to the amended claim language, which is fully addressed in the prior art rejections above.
The rejections set forth in the previous Office action have been shown to be proper, and the claims stand rejected as set forth above. Inasmuch as the new citations and parenthetical remarks may be considered new grounds of rejection, such new grounds are necessitated by Applicant's amendments to the claims. Therefore, the present Office action is made final.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMELIA VELAZQUEZ VALENCIA whose telephone number is (571)272-7418. The examiner can normally be reached M-F, 8:30AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said A. Broome can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.V.V/Examiner, Art Unit 2612
/Said Broome/Supervisory Patent Examiner, Art Unit 2612
Date: 03/24/2026