Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This action is responsive to Applicant’s amendment dated 12/30/2025.
2. Claims 1-24 are pending in the case.
3. Claims 1 and 13 are independent claims.
Applicant’s Response
4. In Applicant’s response dated 12/30/2025, Applicant has amended the following:
a) Claims 7 and 19
Based on Applicant’s amendments and remarks, the following objections previously set forth in the Office Action dated 10/7/2025 are withdrawn:
a) Objections to claims 7 and 19
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 7-9, 11-16, 19-21, 23 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Powderly et al. (hereinafter “Powderly”), U.S. Published Application No. 20170109936 A1, in view of Lang, Philipp (hereinafter “Lang”), U.S. Published Application No. 20180116728 A1.
Claim 1:
Powderly teaches A computer-implemented method of interfacing with a plurality of objects in a three-dimensional virtual workspace environment, the method comprising, by a processor: (e.g., interfacing with virtual objects in 3D space par. 2; systems for interacting with virtual objects in the three-dimensional (3D) space.)
identifying a user intended contextual selection of a given object of the plurality of objects arranged in the three-dimensional virtual workspace environment, based on detected gaze tracking information of the user in the three-dimensional virtual workspace environment; (e.g., identifying an input mode selection of virtual objects in 3D space based on the user’s direction of gaze to select a virtual object more precisely (i.e., user intended contextual selection of a given object) par. 36; When the wearable system detects a dense cluster of virtual objects in the user's direction of gaze, the wearable system may give the user the option to switch the input control from head control to hand control. This way, the user can interact with the virtual objects more precisely.)
determining a command context based on the identified selected object, the determined command context comprising voice-activated commands, gesture-activated commands, or a combination thereof; (e.g., determining hand control gesture commands based on the identified contextual information of the selected object in 3D space par. 142; The wearable system can automatically select or recommend a mode (e.g., poses or hand gesture on a user input device) of the user's interaction based on the contextual information. The contextual information can include the type of the objects (e.g., physical or virtual), the layout of the objects (e.g., the density of the objects, the locations and sizes of the objects, and so forth), the user's characteristics, or the user's current interactions with objects in the environment, in combination or the like. For example, during ray casting (described with reference to FIG. 10), the wearable system may detect that a user is looking at multiple virtual objects located closely to each other. The wearable system can calculate the density of the virtual objects in the user's FOV. When the density passes a certain threshold, the wearable system can recommend the user to switch the mode of user interaction.)
and activating an object-specific action based on a command identified from the determined command context. (e.g., activating precise interactions on objects based on the hand gestures from the determined hand control gesture context par. 142; For example, when the density exceeds a certain threshold (which indicates that the objects are located very close to each other), the wearable system can switch the mode of user interaction from head pose to hand gestures on a user input device so as to allow more precise interactions with the objects. Par. 143; FIG. 16 illustrates an example of interacting with interactable objects with hand gestures on a user input device. par. 148; When the mode of user interaction is switched to the user input device, the user can actuate the user input device 1610 to interact with virtual objects. For example, the user can swipe along a path on the user input device 1610 which transports a cursor from position 1620 to the position 1624. Similarly, the user can actuate the user input device 1610 which moves the cursor (which may be in the shape of an arrow) from position 1620 to 1622. Besides these examples, the user may swipe along any type of paths (e.g., horizontal, vertical, or diagonal relative to the input device) or any type of directions (e.g., left or right, up or down, etc.) on the user input device 1610.)
Powderly fails to expressly teach a three-dimensional virtual medical imaging workspace environment.
However, Lang teaches a three-dimensional virtual medical imaging workspace environment. (e.g., a 3D virtual medical imaging workspace environment employing interface commands Par. 51; FIGS. 17A-D are illustrative flow charts of select options and approaches for performing spine surgery in a mixed reality environment according to some embodiments of the present disclosure. Par. 62; The term live data of the patient, as used herein, includes the surgical site, anatomy, anatomic structures or tissues and/or pathology, pathologic structures or tissues of the patient as seen by the surgeon's or viewer's eyes without information from virtual data, stereoscopic views of virtual data, or imaging studies. Par. 100; HoloLens utilizing the HPU can employ sensual and natural interface commands—voice, gesture, and gesture. Gaze commands, e.g. head-tracking, allows the user to bring application focus to whatever the user is perceiving. Par. 101; The HoloLens shell utilizes many components or concepts from the Windows desktop environment. A bloom gesture for opening the main menu is performed by opening one's hand, with the palm facing up and the fingers spread. Windows can be dragged to a particular position, locked and/or resized. Virtual windows or menus can be fixed at locations or physical objects. Virtual windows or menus can move with the user or can be fixed in relationship to the user. Or they can follow the user as he or she moves around.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D environment employing interface commands as taught by Powderly to include an environment intended for medical imaging as taught by Lang, with a reasonable expectation of success, to provide the benefit of improving surgical procedures with visual guidance using an optical head mounted display. (see Lang; par. 2, par. 3)
Claim 2 depends on claim 1:
Powderly/Lang teaches wherein at least a subset of the plurality of objects comprises imaging study panes. (e.g., virtual windows (i.e., imaging study panes) Lang; Par. 101; The HoloLens shell utilizes many components or concepts from the Windows desktop environment. A bloom gesture for opening the main menu is performed by opening one's hand, with the palm facing up and the fingers spread. Windows can be dragged to a particular position, locked and/or resized. Virtual windows or menus can be fixed at locations or physical objects. Virtual windows or menus can move with the user or can be fixed in relationship to the user. Or they can follow the user as he or she moves around.)
Claim 3 depends on claim 2:
Powderly/Lang teaches wherein activating an object-specific action comprises invoking a hanging protocol for a selected imaging study. (e.g., virtual windows dragged to a particular position and fixed in relationship to the user (i.e., a hanging protocol) Lang; Par. 101; The HoloLens shell utilizes many components or concepts from the Windows desktop environment. A bloom gesture for opening the main menu is performed by opening one's hand, with the palm facing up and the fingers spread. Windows can be dragged to a particular position, locked and/or resized. Virtual windows or menus can be fixed at locations or physical objects. Virtual windows or menus can move with the user or can be fixed in relationship to the user. Or they can follow the user as he or she moves around.)
Claim 4 depends on claim 2:
Powderly/Lang teaches wherein activating an object-specific action comprises invoking an image manipulation tool. (e.g., activating toolbar (i.e., image manipulation tool) associated with an object Powderly; par. 116; The user can interact with objects within his FOV using a variety of techniques, such as e.g., by selecting the objects, moving the objects, opening a menu or toolbar associated with an object, or choosing a new set of selectable objects. Par. 121; The user can perform a series of user interface operations on the target interactable object. These operations can sometimes be referred to as interaction events. An interaction event can comprise, for example, resizing the interactable object.)
(e.g., command to resize virtual window (i.e., invoking an image manipulation tool) Lang; par. 101; Windows can be dragged to a particular position, locked and/or resized.)
Claim 7 depends on claim 1:
Powderly/Lang teaches wherein at least a subset of the plurality of objects comprises imaging workflow panes. (e.g., virtual windows (i.e., imaging workflow panes) Lang; Par. 101; The HoloLens shell utilizes many components or concepts from the Windows desktop environment. A bloom gesture for opening the main menu is performed by opening one's hand, with the palm facing up and the fingers spread. Windows can be dragged to a particular position, locked and/or resized. Virtual windows or menus can be fixed at locations or physical objects. Virtual windows or menus can move with the user or can be fixed in relationship to the user. Or they can follow the user as he or she moves around.)
Claim 8 depends on claim 7:
Powderly/Lang teaches wherein the imaging workflow panes comprise a study navigation pane, a communication pane, a reference pane, a patient data pane, or any combination thereof. (e.g., virtual windows (i.e., imaging workflow panes) Lang; Par. 101; The HoloLens shell utilizes many components or concepts from the Windows desktop environment. A bloom gesture for opening the main menu is performed by opening one's hand, with the palm facing up and the fingers spread. Windows can be dragged to a particular position, locked and/or resized. Virtual windows or menus can be fixed at locations or physical objects. Virtual windows or menus can move with the user or can be fixed in relationship to the user. Or they can follow the user as he or she moves around.)
Claim 9 depends on claim 1:
Powderly fails to expressly teach wherein activating the object-specific action comprises locking the identified object for primacy within the three-dimensional virtual environment based on a detected voice command.
However, Lang teaches wherein activating the object-specific action comprises locking the identified object for primacy within the three-dimensional virtual environment based on a detected voice command. (e.g., locking virtual windows based on gestures or voice commands Lang; par. 100; Any virtual application or button can be are selected using an air tap method, similar to clicking a virtual computer mouse. The tap can be held for a drag simulation to move an display. Voice commands can also be utilized. Par. 101; The HoloLens shell utilizes many components or concepts from the Windows desktop environment. A bloom gesture for opening the main menu is performed by opening one's hand, with the palm facing up and the fingers spread. Windows can be dragged to a particular position, locked and/or resized. Virtual windows or menus can be fixed at locations or physical objects. Virtual windows or menus can move with the user or can be fixed in relationship to the user. Or they can follow the user as he or she moves around.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D environment employing interface commands as taught by Powderly to include locking virtual windows based on gestures or voice commands as taught by Lang, with a reasonable expectation of success, to provide the benefit of improving surgical procedures with visual guidance using an optical head mounted display. (see Lang; par. 2, par. 3)
Claim 11 depends on claim 1:
Powderly/Lang teaches further comprising, displaying to a user, the three-dimensional virtual medical imaging workspace including the plurality of objects. (e.g., displaying a plurality of virtual windows (i.e., the plurality of objects) Lang; Par. 101; The HoloLens shell utilizes many components or concepts from the Windows desktop environment. A bloom gesture for opening the main menu is performed by opening one's hand, with the palm facing up and the fingers spread. Windows can be dragged to a particular position, locked and/or resized. Virtual windows or menus can be fixed at locations or physical objects. Virtual windows or menus can move with the user or can be fixed in relationship to the user. Or they can follow the user as he or she moves around.)
Claim 12 depends on claim 1:
Powderly/Lang teaches further comprising, identifying a command from the determined command context based on the detected user voice or gesture data. (e.g., activating precise interactions on objects based on the hand gestures from the determined hand control gesture context (i.e., gesture data) Powderly; par. 142; For example, when the density exceeds a certain threshold (which indicates that the objects are located very close to each other), the wearable system can switch the mode of user interaction from head pose to hand gestures on a user input device so as to allow more precise interactions with the objects. Par. 143; FIG. 16 illustrates an example of interacting with interactable objects with hand gestures on a user input device. par. 148; When the mode of user interaction is switched to the user input device, the user can actuate the user input device 1610 to interact with virtual objects. For example, the user can swipe along a path on the user input device 1610 which transports a cursor from position 1620 to the position 1624. Similarly, the user can actuate the user input device 1610 which moves the cursor (which may be in the shape of an arrow) from position 1620 to 1622. Besides these examples, the user may swipe along any type of paths (e.g., horizontal, vertical, or diagonal relative to the input device) or any type of directions (e.g., left or right, up or down, etc.) on the user input device 1610.)
Independent Claim 13
Claim 13 is substantially encompassed in claim 1; therefore, Examiner relies on the same rationale set forth in claim 1 to reject claim 13.
Claim 14 depends on claim 13:
Claim 14 is substantially encompassed in claim 2; therefore, Examiner relies on the same rationale set forth in claim 2 to reject claim 14.
Claim 15 depends on claim 14:
Claim 15 is substantially encompassed in claim 3; therefore, Examiner relies on the same rationale set forth in claim 3 to reject claim 15.
Claim 16 depends on claim 15:
Claim 16 is substantially encompassed in claim 4; therefore, Examiner relies on the same rationale set forth in claim 4 to reject claim 16.
Claim 19 depends on claim 13:
Claim 19 is substantially encompassed in claim 7; therefore, Examiner relies on the same rationale set forth in claim 7 to reject claim 19.
Claim 20 depends on claim 19:
Claim 20 is substantially encompassed in claim 8; therefore, Examiner relies on the same rationale set forth in claim 8 to reject claim 20.
Claim 21 depends on claim 13:
Claim 21 is substantially encompassed in claim 9; therefore, Examiner relies on the same rationale set forth in claim 9 to reject claim 21.
Claim 23 depends on claim 13:
Claim 23 is substantially encompassed in claim 11; therefore, Examiner relies on the same rationale set forth in claim 11 to reject claim 23.
Claim 24 depends on claim 13:
Claim 24 is substantially encompassed in claim 12; therefore, Examiner relies on the same rationale set forth in claim 12 to reject claim 24.
Claims 5, 10, 17 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Powderly/Lang as cited above, in further view of Akmal et al. (hereinafter “Akmal”), U.S. Published Application No. 20230315385 A1, which claims priority to provisional application No. 63/362,455, filed 4/4/2022.
Claim 5 depends on claim 2:
Powderly/Lang fails to expressly teach wherein activating an object-specific action comprises invoking a dictation annotation tool.
However, Akmal teaches wherein activating an object-specific action comprises invoking a dictation annotation tool. (e.g., invoking dictation annotation tool based on gaze and speech input par. 120; For example, as discussed in more detail below, in response to detecting speech input from the user of the computer system 101 while the gaze of the user is directed toward the first representation 704 of the first user in the three-dimensional environment 702, the computer system 101 initiates a process to send a message to the first user. In some embodiments, as described below, no input other than the attention of the user directed toward the respective representation of the respective user in the three-dimensional environment and/or the speech input from the user of the computer system 101 is required to initiate the process to send the message to the respective user.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D environment employing interface commands as taught by Powderly/Lang to include invoking dictation commands as taught by Akmal, with a reasonable expectation of success, to provide the benefit of improving collaborative communication amongst users in an effort to improve the user experience within a 3D environment. (see Akmal; paras. 3-5)
Claim 10 depends on claim 1:
Powderly/Lang fails to expressly teach wherein activating the object-specific action comprises invoking a dictation mode based on a detected voice command and terminating the dictation mode based on a detected gesture command.
However, Akmal teaches wherein activating the object-specific action comprises invoking a dictation mode based on a detected voice command. (e.g., invoking dictation annotation tool based on the speech input from the user being required to begin with a word corresponding to the name of a recipient par. 120; For example, as discussed in more detail below, in response to detecting speech input from the user of the computer system 101 while the gaze of the user is directed toward the first representation 704 of the first user in the three-dimensional environment 702, the computer system 101 initiates a process to send a message to the first user. In some embodiments, as described below, no input other than the attention of the user directed toward the respective representation of the respective user in the three-dimensional environment and/or the speech input from the user of the computer system 101 is required to initiate the process to send the message to the respective user. Par. 125; As shown in FIG. 7A, the first speech input 716a optionally begins with the word “John,” which corresponds to the name associated with the first user. In some embodiments, when initiating a process to send an initial message to a respective user, the speech input from the user is required to begin with a word corresponding to the name of the respective user, while the attention of the user satisfied the one or more first criteria discussed above) and terminating the dictation mode based on a detected gesture command. (e.g., “looking away” gesture to terminate dictation mode par. 135; In some embodiments, in response to detecting the attention of the user directed away from the message dictation platter 709b in three-dimensional environment 702, the computer system 101 ceases elapsing the countdown of the timer to send the message to the second user. For example, in FIG. 7E, the gaze 723 of the user of the computer system 101 is directed away from the second message dictation platter 709b (e.g., is directed to empty space in three-dimensional environment 702), which causes the computer system 101 to pause the countdown associated with the timer indication 714b in three-dimensional environment 702.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the 3D environment employing interface commands as taught by Powderly/Lang to include invoking and terminating dictation commands as taught by Akmal, with a reasonable expectation of success, to provide the benefit of improving collaborative communication amongst users in an effort to improve the user experience within a 3D environment. (see Akmal; paras. 3-5)
Claim 17 depends on claim 15:
Claim 17 is substantially encompassed in claim 5; therefore, Examiner relies on the same rationale set forth in claim 5 to reject claim 17.
Claim 22 depends on claim 13:
Claim 22 is substantially encompassed in claim 10; therefore, Examiner relies on the same rationale set forth in claim 10 to reject claim 22.
Claims 6 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Powderly/Lang as cited above, in further view of Henderson et al. (hereinafter “Henderson”), U.S. Patent No. 5072412 A.
Claim 6 depends on claim 2:
Powderly/Lang fails to expressly teach linking a location within a selected pane.
However, Henderson teaches linking a location within a selected pane. (see abstract; Each workspace's data structure includes, for each window in that workspace, a linking data structure called a placement which links to the display system object which provides that window, which may be a display system object in a preexisting window system. Col. 5 line 62; The user may, for example, transfer the pictogram representing that display object between workspaces in the overview, in which case a new placement is created linking to the new workspace. The user can also move the pictogram into an included workspace, in which case it will appear to be transferred into each of the other workspaces which include that workspace.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual windows within a workspace as taught by Powderly/Lang to include data objects linked between workspaces as taught by Henderson to provide the benefit of aiding a user in navigating between a plurality of workspaces. (see Henderson; col. 4 lines 38-50)
Therefore, Powderly/Lang/Henderson teaches further comprising: linking a location within a selected imaging study pane to data associated with a complementary study pane displayed in the virtual environment; and displaying the linked data in the complementary study pane.
Claim 18 depends on claim 15:
Claim 18 is substantially encompassed in claim 6; therefore, Examiner relies on the same rationale set forth in claim 6 to reject claim 18.
Response to Arguments
Applicant's arguments filed 12/30/2025 have been fully considered but they are not persuasive.
Prior Art Rejections
1) Applicant respectfully submits that detecting a "dense cluster of virtual objects" is distinct from "identifying a user intended contextual selection of a given object of the plurality of objects," as required in Applicant's Claim 1, as a cluster of objects is many objects, which is inherently distinct from "a given object." (see Response; page 7)
Examiner respectfully disagrees.
Examiner submits that the “hand control” mode allows for input to select “a given object.” Therefore, using the user’s gaze to determine the “hand control” mode, which allows the user to select “a given object” from the plurality of objects within the cluttered layout, as taught by Powderly, teaches or suggests “identifying a user intended contextual selection of a given object of the plurality of objects,” as required in Applicant’s Claim 1.
2) Applicant respectfully submits that the functionality of detecting when "a user is looking at multiple virtual objects calculate[ing] the density [and] recommend[ing] the user to switch the mode of user interaction" as provided for in Powderly does not teach or disclose Applicant's Claim 1 functionality of "determining a command context based on the identified selected object." (see Response; page 8)
Examiner respectfully disagrees.
Powderly teaches that contextual information can include the type of the objects (e.g., physical or virtual) (see par. 142; The contextual information can include the type of the objects (e.g., physical or virtual) or the user's current interactions with objects in the environment, in combination or the like). Examiner submits that determining hand control gesture commands based on the type of the object selected from the user's interaction, as taught by Powderly, teaches or suggests "determining a command context based on the identified selected object."
For at least the foregoing reasons, Examiner maintains the prior art rejections.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HENRY ORR whose telephone number is (571)270-1308. The examiner can normally be reached 9AM-5PM EST M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571)272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HENRY ORR/Primary Examiner, Art Unit 2172