Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant’s Response
In the Applicant’s response dated 1/22/26, the Applicant amended and argued Claims 1, 16, and 19, which were previously rejected in the Office Action dated 10/27/25. Claims 1-20 are pending examination.
In light of the Applicant’s amendments and remarks, the 35 USC 102 rejections have been withdrawn.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/22/26 has been entered.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 6-13, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Ross et al., United States Patent No. 6608628 (hereinafter “Ross”), in view of Ahmed, United States Patent Publication 20170195377.
Claim 1:
Ross discloses:
A method for analyzing medical image data in a virtual multi-user collaboration, wherein the medical image data is analyzed by at least two users (A, N, C, S) (see column 4, lines 17-24). Ross teaches the medical image data is viewed and manipulated by multiple users,
each user having his/her own workspace, wherein the workspace is a XR-Workspace (see column 4, lines 17-24). Ross teaches enabling a number of geographically distributed users to collaboratively view and manipulate high-quality, high-resolution, 3D images of anatomical objects based on tomographic data. The method and apparatus are part of a virtual interactive imaging system.
the method comprising:
providing medical image data including 3D or 4D image information (see column 4 lines 17-24). Ross teaches providing 3D medical image data,
loading the medical image data into the workspace of each user (A, N, C, S) so as to simultaneously display a visualization of the medical image data to each user (see column 4, lines 38-42). Ross teaches a virtual collaborative clinic environment, which allows the users of multiple, remotely-located computer systems to collaboratively and simultaneously view and manipulate the high-resolution, 3D images of an object in real-time,
allowing at least one user (A, N, C, S) to execute an analyzing process of the medical image data in his/her workspace using a real tool or a virtual tool to generate a result of the analyzing process, wherein the result of the analyzing process can be shared with at least one other workspace but the execution of the analyzing process is not shared with any other workspace (see column 4 lines 33-42, column 11 lines 52-56 and column 13 lines 29-35). Ross teaches a virtual surgery cutting tool that enables the user to simulate the removal of a piece or layer of a displayed object, such as a piece of skin or bone, view the interior of the object, manipulate the removed piece, and reattach the removed piece if desired. Ross also teaches the user can independently manipulate the images and the tool and wait for the transmission function to be executed,
displaying the result of the analyzing process in the workspace in which the analyzing process was carried out (see column 4, lines 38-47). Ross teaches displaying the rendered images after analysis. Ross recites the images may be rendered in four dimensions (4D), wherein the fourth dimension is time; that is, a chronological sequence of images of an object is displayed to show changes of the object over time (i.e., an animation of the object is displayed), and
synchronizing the result of the analyzing process in real-time with the at least one other workspace such that each workspace displays the result of the analyzing process in the respective individual visualization of the medical image data (see column 4, lines 38-47). Ross teaches providing a virtual collaborative clinic environment, which allows the users of multiple, remotely-located computer systems to collaboratively and simultaneously view and manipulate the high-resolution, 3D images of an object in real-time. The images may be rendered in four dimensions (4D), wherein the fourth dimension is time; that is, a chronological sequence of images of an object is displayed to show changes of the object over time (i.e., an animation of the object is displayed).
Ross fails to explicitly teach performing the analyzing process and view manipulations locally.
Ahmed discloses:
wherein, during execution of the analyzing process, movements, trajectories, tool paths, intermediate actions, and view manipulations remain local to the executing workspace and are not shared with any other workspace (see paragraphs [0194], [0223] and [0248]). Ahmed teaches that analyzing the images, processing the images, and performing functions are done locally, while the results are shared in a collaborative session.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ross to include local execution of the analyzing process and view manipulations for the purpose of efficiently executing local functions in a collaborative session, as taught by Ahmed.
Claim 2:
Ross discloses:
wherein the allowing at least one user (A, N, C, S) to execute an analyzing process of the medical image data in his/her workspace comprises allowing at least two users to execute each a respective analyzing process of the medical image data in their respective workspace simultaneously (see column 4 lines 38-47, column 11 lines 52-56 and column 13 lines 29-35). Ross teaches allowing users to simultaneously manipulate and analyze the images. Ross also teaches the user can independently manipulate the images and the tool and wait for the transmission function to be executed.
Claim 6:
Ross discloses:
wherein the virtual environment includes at least one virtual control element (see column 5 lines 1-10). Ross teaches a virtual control element such as a virtual scalpel.
Claim 7:
Ross discloses:
wherein the step of allowing each user (A, N, C, S) to individually and independently change the visualization of the medical image data includes the use of a controller in order to execute the change of the visualization using hand gestures, preferably by grabbing an object in the workspace (see column 12, lines 15-23). Ross teaches allowing users to individually and independently change the visualization by grabbing and interacting using hands.
Claim 8:
Ross discloses:
wherein the allowing each user (A, N, C, S) to individually and independently change the visualization of the medical image data comprises manipulating the visualization so as to rotate the visualization, cut away a part of the visualization, change rendering parameters of the visualization, change image settings of the visualization, change a contrast of the visualization, change voxel intensity thresholds of the visualization and/or change a size of the visualization (see column 13 lines 11-21). Ross teaches allowing user to independently change visualization of medical data by changing colors or orientation.
Claim 9:
Ross discloses:
wherein at least one user (A, N, C, S) may adopt the change(s) of the visualization of the medical image data made by another user, or wherein one user may force at least one other user to adopt his/her change(s) of the visualization of the medical image data (see column 13 lines 45 – column 14 line 3). Ross teaches adopting the changes of the visualization of another user or the user being able to change the visualization.
Claim 10:
Ross discloses:
wherein the allowing at least one user (A, N, C, S) to execute an analyzing process of the medical image data in his/her workspace further includes taking 3D measurements, executing MPR-Mode and/or inserting annotation (see column 8, lines 34-40 and column 13, lines 62-65). Ross teaches allowing the user to have a high level of detail and perform virtual surgery in a 3D space; therefore, taking measurements is obvious.
Claim 11:
Ross discloses:
wherein the allowing at least one user (A, N, C, S) to execute the analyzing process of the medical image data in his/her workspace further includes positioning at least one model of a medical device, specifically an implant, within the visualization of the medical image data so as to determine its operational position (see column 11, lines 10-16). Ross teaches analyzing the medical image which includes the positioning of an implant while using algorithms/models; and
wherein the operational position of the model of the medical device is determined by visualizing the medical device dynamically in operation (see column 4, lines 42-47 and column 11, lines 2-19). Ross teaches operational position coordinates based on the model and 4D images.
Claims 12, 15:
Although Claim 12 is a computer-readable medium claim and Claim 15 is a user-interface device claim, they are interpreted and rejected for the same reasons as Claim 1.
Claims 13, 16:
Although Claims 13 and 16 are computer-readable medium claims, they are interpreted and rejected for the same reasons as Claims 1 and 2.
Claims 3-5, 14, 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ross, in view of Smurro, United States Patent Publication 20180322254.
Claim 3:
Ross fails to expressly disclose enabling and disabling of collaborative data.
Smurro discloses:
wherein the displaying of the result of the analyzing process may be selectively and individually enabled and disabled by a user (A, N, C, S) in his/her workspace (see paragraphs [0081]-[0083]). Smurro teaches allowing a user to enable/disable communications of the collaborative data.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ross to include enabling/disabling of collaborative data for the purpose of being user friendly, as taught by Smurro.
Claim 4:
Ross fails to expressly disclose augmented reality or mixed reality.
Smurro discloses:
wherein at least one workspace is an AR-workspace or MR-workspace, and wherein at least one visualization parameter within the AR- or MR-workspace, in particular a transparency and/or a color of the visualization of the medical image data and/or the result of the analyzing process is/are adjusted automatically, so as to allow the user (A, N, C, S) to view the visualization of the medical image data and/or the result of the analyzing process superposed on the real environment with target contrast (see paragraph [0050]). Smurro teaches augmented reality and allowing the user to select colors of annotations that are superimposed on the environment to create visually distinct data.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ross to include augmented reality data and color contrast for annotations for the purpose of being user friendly, as taught by Smurro.
Claim 5:
Ross discloses:
allowing each user to individually and independently adjust a visualization parameter of the virtual environment (see column 13 lines 45 – column 14 line 3). Ross teaches adopting the changes of the visualization of another user or the user being able to change the visualization.
Ross fails to expressly disclose augmented reality or mixed reality.
Smurro discloses:
wherein each workspace has its own virtual environment in which the visualization of the medical image data and the result of the analyzing process are displayed, and wherein the method further comprises: allowing each user to individually and independently adjust a visualization parameter of the virtual environment so as to adjust a contrast within the workspace (see paragraph [0050]). Smurro teaches augmented reality and allowing the user to select colors of annotations that are superimposed on the environment to create visually distinct data.
Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Ross to include augmented reality data and color contrast for annotations for the purpose of being user friendly, as taught by Smurro.
Claim 14:
Although Claim 14 is a user-interface device claim, it is interpreted and rejected for the same reasons as Claim 5.
Claims 17-19:
Although Claims 17-19 are computer-readable medium claims, they are interpreted and rejected for the same reasons as Claims 3-5.
Claim 20:
Ross discloses:
wherein the virtual environment comprises at least one virtual control element (see column 5 lines 1-10). Ross teaches a virtual control element such as a virtual scalpel.
Response to Arguments
Applicant’s arguments, see REM, filed 1/22/26, with respect to the rejection(s) of claims 1-20 under 35 USC 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made over Ross in view of Ahmed.
Claim 1:
Applicant argues: "Notably, however, Ross does not disclose that 'the execution of the analyzing process is not shared with any other workspace.' Each of the cited passages actually describes shared, real-time manipulation and propagation of user actions/inputs across clients."
The Examiner agrees.
Ross does not teach executing the analysis locally and sharing the results with other workspaces. The Examiner has added new art, Ahmed, to teach the argued limitation. Ahmed teaches that analyzing the images, processing the images, and performing functions are done locally, while the results are shared in a collaborative session (see paragraphs [0194], [0223] and [0248]). See the above rejection of Claim 1.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIONNA M BURKE whose telephone number is (571)270-7259. The examiner can normally be reached M-F 8a-4p.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong can be reached at (571)272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIONNA M BURKE/Examiner, Art Unit 2178 2/7/26