DETAILED ACTION
This Office action is in response to the application filed on 09/30/2024. Claims 2-21 are pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c) is acknowledged.
Information Disclosure Statement
The references listed on the Information Disclosure Statement submitted on 09/30/2024 have been considered by the examiner (see attached PTO-1449).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 2-4, 13, and 18 are rejected on the ground of nonstatutory double patenting as being unpatentable over Claims 1, 9-10, and 14 of U.S. Patent No. 11,333,899 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims recite features of the same scope and, moreover, the independent claims of the instant application are broader in scope than corresponding Claims 1, 9-10, and 14 of U.S. Patent No. 11,333,899 B2.
Claims 2-4, 13, and 18 of the instant application were previously known and are included, in different permutations, in corresponding Claims 1, 9-10, and 14 of U.S. Patent No. 11,333,899 B2. As such, the claimed invention is obvious, since its features were previously shown in the patented parent claims, albeit in different embodiments.
Examiner further notes that any minor differences in the wording of the claims are merely a matter of semantics and do not carry significant patentable weight.
Claims 2, 4, 13, and 18 are rejected on the ground of nonstatutory double patenting as being unpatentable over Claims 1-2 and 9-10 of U.S. Patent No. 11,754,853 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims recite features of the same scope and, moreover, the independent claims of the instant application are broader in scope than corresponding Claims 1-2 and 9-10 of U.S. Patent No. 11,754,853 B2.
Claims 2, 4, 13, and 18 of the instant application were previously known and are included, in different permutations, in corresponding Claims 1-2 and 9-10 of U.S. Patent No. 11,754,853 B2. As such, the claimed invention is obvious, since its features were previously shown in the patented parent claims, albeit in different embodiments.
Examiner further notes that any minor differences in the wording of the claims are merely a matter of semantics and do not carry significant patentable weight.
EXAMINER’S NOTE: After careful review and consideration of the parent application(s), the remaining parent application(s) are not subject to an obviousness-type double patenting rejection because the instant application is narrower or differing in scope and/or includes non-obvious elements.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 4-5, 9, 13, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Rosenberg et al., US Patent Application Publication No. 2019/0298481 A1, hereinafter Rosenberg, in view of Meglan, US Patent Application Publication No. 2020/0169724 A1, hereinafter Meglan.
Rosenberg discloses the invention substantially as claimed. Regarding Claims 2, 4, 13, and 18, Rosenberg discloses a system, method, and a processor of a system (Figs. 1, 3-4, and 6-7), comprising:
“an autostereoscopic three-dimensional (3D) display configured to facilitate a user's visualization of 3D images on the 3D display (Figs. 1, 3, and 4, and paragraphs [0054]-[0060], [0065], and [0068], disclosing a robotic surgical system including a robotic arm and a patient image capture device (element 109) that captures stereoscopic images of the surgical site to be displayed on an autostereoscopic (indicating without the use of 3D glasses) three-dimensional display (with layers positioned over the display as seen in Fig. 4); see also Figs. 6-7);
at least one actuator; at least one processor; and memory having instructions which when executed by the at least one processor causes the system to signal the at least one actuator to adjust a spatial relationship between the 3D display and a user to . . . between the user and the 3D display, or to ensure that eyes of the user are substantially . . . with the 3D display to support the user's visualization of the 3D images on the 3D display (Fig. 1, elements 122 and 132; Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077], and [0081], disclosing a processor/controller (Fig. 1, elements 118/114) utilizes the captured images of the user from the image capture device (element 128) to determine the position of the user in relation to the display (based on eyes/gaze and head tracking information), can control the robotic arm (element 106) to move the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user’s eye gaze is directed, and can also reposition the display based on the determined position (including head/eye position) of the user).”
However, Rosenberg does not expressly disclose the claimed prescribed-distance and centering limitations. Meglan expressly discloses the following:
“. . . ; and
memory having instructions which when executed by the at least one processor causes the system to signal the at least one actuator to adjust a spatial relationship between the 3D display and a user to maintain a prescribed distance between the user and the 3D display, or to ensure that eyes of the user are substantially centered with the 3D display to support the user's visualization of the 3D images on the 3D display (Fig. 1, element 132, and Fig. 6, and paragraphs [0067], [0093], and [0099]-[0105]; examiner notes that paragraph [0101] discloses that the centering and distance determinations of Fig. 5, element 506 are included in Fig. 6, element 606).”
Accordingly, before the effective filing date, it would have been obvious to one of ordinary skill in the art, having the teachings of Rosenberg and Meglan (hereinafter Rosenberg-Meglan), to modify the system, method, and processor of a system of Rosenberg to use the claimed prescribed-distance and centering limitations as taught by Meglan. The motivation for doing so would have been to create the advantage of maintaining a proper positional relationship between the observer and the display device for optimal perception of stereoscopic visual content provided by the display device (see Meglan, Figs. 1 and 6, and paragraphs [0067], [0093], and [0099]-[0105]; see also Fig. 5).
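EXAMINER'S NOTE (illustrative only): the combined adjustment logic at issue may be sketched as follows in Python. All names, values, and interfaces below are hypothetical and are drawn from neither Rosenberg nor Meglan; the sketch merely illustrates signaling an actuator to maintain a prescribed distance and to substantially center the user's eyes with the display.

    from dataclasses import dataclass

    @dataclass
    class EyePosition:
        x_mm: float  # lateral offset of the eye midpoint from display center
        y_mm: float  # vertical offset from display center
        z_mm: float  # distance from the display plane

    PRESCRIBED_DISTANCE_MM = 600.0  # hypothetical prescribed viewing distance
    TOLERANCE_MM = 10.0             # hypothetical dead band

    def actuator_commands(eyes: EyePosition):
        """Return (dx, dy, dz) display corrections in mm: dx/dy re-center the
        display on the user's eyes; positive dz moves the display away."""
        dx = eyes.x_mm if abs(eyes.x_mm) > TOLERANCE_MM else 0.0
        dy = eyes.y_mm if abs(eyes.y_mm) > TOLERANCE_MM else 0.0
        dz_err = eyes.z_mm - PRESCRIBED_DISTANCE_MM
        dz = -dz_err if abs(dz_err) > TOLERANCE_MM else 0.0
        return dx, dy, dz

    # Example: the user has drifted 40 mm left of center and 25 mm too close;
    # the display is translated 40 mm in -x and moved 25 mm away.
    print(actuator_commands(EyePosition(x_mm=-40.0, y_mm=0.0, z_mm=575.0)))
    # -> (-40.0, 0.0, 25.0)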
Regarding Claims 5 and 15, Rosenberg-Meglan discloses:
“at least one camera configured to capture images of the user (Rosenberg, Figs. 1 and 4, elements 128-129, and Figs. 6-7, and paragraphs [0051], [0060], [0071]-[0072], and [0076]-[0077], disclosing an image capture device (element 128) and wearable (element 129) that captures information related to head position and eyes/gaze of the user), wherein the memory comprises further instructions to detect a head or the eyes of the user based on the captured images, wherein the spatial relationship is adjusted or the eyes of the user (Rosenberg, Fig. 1, elements 122 and 132; Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077], and [0081], disclosing a processor/controller (Fig. 1, elements 118/114) utilizes the captured images of the user from the image capture device (element 128) to determine the position of the user in relation to the display (based on eyes/gaze and head tracking information), can control the robotic arm (element 106) to move the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user’s eye gaze is directed, and can also reposition the display based on the determined position (including head/eye position) of the user) are substantially centered based on the detected head or eyes of the user (Meglan, Fig. 1, element 132, and Fig. 6, and paragraphs [0067], [0093], and [0099]-[0105]; examiner notes that paragraph [0101] discloses that the centering and distance determinations of Fig. 5, element 506 are included in Fig. 6, element 606).”
The motivation set forth for Claims 2, 4, 13, and 18 applies equally here.
Regarding Claim 9, Rosenberg-Meglan discloses:
“a first camera that is configured to capture images of the user (Rosenberg, Figs. 1 and 4, elements 128-129, and Figs. 6-7, and paragraphs [0051], [0060], [0071]-[0072], and [0076]-[0077], disclosing an image capture device (element 128) and wearable (element 129) that captures information related to head position and eyes/gaze of the user); and
a second camera that is configured to capture the 3D images (Rosenberg, paragraph [0081], disclosing the robotic arm moves the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user's eye gaze is directed; the robotic arm moves the patient image capture device (element 109) to a position such that the image displayed on the autostereoscopic display permits the user's eyes to focus at the center of the image; see also Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077]), wherein the instructions to ensure that the eyes of the user are substantially centered (Meglan, Fig. 1, element 132, and Fig. 6, and paragraphs [0067], [0093], and [0099]-[0105]; examiner notes that paragraph [0101] discloses that the centering and distance determinations of Fig. 5, element 506 are included in Fig. 6, element 606) comprises instructions to:
detect the eyes of the user based on the captured images (Rosenberg, Figs. 1 and 4, elements 128-129, and Figs. 6-7, and paragraphs [0051], [0060], [0071]-[0072], and [0076]-[0077], disclosing an image capture device (element 128) and wearable (element 129) that captures information related to head position and eyes/gaze of the user);
determine a point or area of the 3D image within a display window on the display at which the user is focusing based on the detected eyes (Rosenberg, paragraph [0081], disclosing the robotic arm moves the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user's eye gaze is directed; the robotic arm moves the patient image capture device (element 109) to a position such that the image displayed on the autostereoscopic display permits the user's eyes to focus at the center of the image; see also Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077]); and
adjust the second camera such that the point or area of the 3D image is moved to a center of the display window (Rosenberg, paragraph [0081], disclosing the robotic arm moves the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user's eye gaze is directed; the robotic arm moves the patient image capture device (element 109) to a position such that the image displayed on the autostereoscopic display permits the user's eyes to focus at the center of the image; see also Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077]).”
The motivation set forth for Claims 2, 4, 13, and 18 applies equally here.
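EXAMINER'S NOTE (illustrative only): the recited adjustment of the second camera may be sketched as follows; the pixel coordinates, scale factor, and function name are hypothetical and appear in neither reference.

    def recenter_camera(gaze_px, window_center_px, mm_per_px):
        """Translate the second (patient image) camera so the point of the 3D
        image at which the user is focusing moves to the display-window center."""
        dx_px = gaze_px[0] - window_center_px[0]
        dy_px = gaze_px[1] - window_center_px[1]
        # Convert the on-screen offset to a camera translation at the scene.
        return dx_px * mm_per_px, dy_px * mm_per_px

    # The user fixates 120 px right of a 1280x720 window's center at a scale
    # of 0.05 mm per pixel, so the camera is translated 6 mm to that side.
    print(recenter_camera((760, 360), (640, 360), 0.05))  # -> (6.0, 0.0)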
Claim Rejections - 35 USC § 103
Claims 3, 14, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Rosenberg-Meglan, and further in view of Hannaford et al., US Patent Application Publication No. 2015/0025547 A1, hereinafter Hannaford.
Regarding Claim 3, Rosenberg-Meglan discloses:
“a . . . on which the user is to . . . , wherein the at least one actuator is configured to adjust the spatial relationship or to ensure (Rosenberg, Fig. 1, elements 122 and 132; Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077], and [0081], disclosing a processor/controller (Fig. 1, elements 118/114) utilizes the captured images of the user from the image capture device (element 128) to determine the position of the user in relation to the display (based on eyes/gaze and head tracking information), can control the robotic arm (element 106) to move the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user’s eye gaze is directed, and can also reposition the display based on the determined position (including head/eye position) of the user) that the eyes of the user are substantially centered (Meglan, Fig. 1, elements 122 and 132, and Fig. 6, and paragraphs [0067], [0093], and [0099]-[0105]; examiner notes that paragraph [0101] discloses that the centering and distance determinations of Fig. 5, element 506 are included in Fig. 6, element 606) by changing a position or an orientation of the . . . with respect to the 3D display (Rosenberg, Fig. 1, elements 122 and 132; Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077], and [0081], disclosing a processor/controller (Fig. 1, elements 118/114) utilizes the captured images of the user from the image capture device (element 128) to determine the position of the user in relation to the display (based on eyes/gaze and head tracking information), can control the robotic arm (element 106) to move the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user’s eye gaze is directed, and can also reposition the display based on the determined position (including head/eye position) of the user), responsive to the signal (Meglan, Fig. 1, elements 122 and 132, and Fig. 6, and paragraphs [0067], [0093], and [0099]-[0105]; examiner notes that paragraph [0101] discloses that the centering and distance determinations of Fig. 5, element 506 are included in Fig. 6, element 606).”
The motivation set forth for Claims 2, 4, 13, and 18 applies equally here.
However, Rosenberg-Meglan does not expressly disclose the claimed seat. Hannaford expressly discloses the following:
“a seat on which the user is to sit, wherein the at least one actuator is configured to adjust the spatial relationship or to ensure that the eyes of the user . . . by changing a position or an orientation of the seat with respect to the 3D display, responsive to the signal (Figs. 1 and 7, and paragraphs [0059] and [0076]).”
Accordingly, before the effective filing date, it would have been obvious to one of ordinary skill in the art, having the teachings of Rosenberg-Meglan and Hannaford (hereinafter Rosenberg-Meglan-Hannaford), to modify the system, method, and processor of a system of Rosenberg-Meglan to use the claimed seat as taught by Hannaford. The motivation for doing so would have been to create the advantage of providing consistent and reliable movement and adjustments (including body posture adjustments) of the seat of the user in relation to the display (see Hannaford, Figs. 1 and 7, and paragraphs [0059] and [0076]).
Regarding Claims 14 and 19, Rosenberg-Meglan-Hannaford discloses:
“wherein the actuator is to adjust the spatial relationship (Rosenberg, Fig. 1, elements 122 and 132; Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077], and [0081], disclosing a processor/controller (Fig. 1, elements 118/114) utilizes the captured images of the user from the image capture device (element 128) to determine the position of the user in relation to the display (based on eyes/gaze and head tracking information), can control the robotic arm (element 106) to move the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user’s eye gaze is directed, and can also reposition the display based on the determined position (including head/eye position) of the user) or to ensure that the eyes of the user are substantially centered by changing a position or an orientation (Meglan, Fig. 1, elements 122 and 132, and Fig. 6, and paragraphs [0067], [0093], and [0099]-[0105]; examiner notes that paragraph [0101] discloses that the centering and distance determinations of Fig. 5, element 506 are included in Fig. 6, element 606) of at least one of 1) a seat on which the user is to sit with respect to the 3D display (Hannaford, Figs. 1 and 7, and paragraphs [0059] and [0076]), or 2) the 3D display with respect to the seat (Hannaford, Figs. 1 and 7, and paragraphs [0059] and [0076]), responsive to the signal (Meglan, Fig. 1, elements 122 and 132, and Fig. 6, and paragraphs [0067], [0093], and [0099]-[0105]; examiner notes that paragraph [0101] discloses that the centering and distance determinations of Fig. 5, element 506 are included in Fig. 6, element 606).”
The motivation set forth for Claim 3 applies equally here.
Claim Rejections - 35 USC § 103
Claims 6-7, 10-12, 16-17, and 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Rosenberg-Meglan, and further in view of Bychkov et al., US Patent Application Publication No. 2014/0028548 A1, hereinafter Bychkov.
Regarding Claim 6, Rosenberg-Meglan discloses:
“wherein the at least one camera comprises a first camera configured to capture a first set of images of the head or the eyes of the user and . . . configured to capture a second set of images of the eyes of the user, wherein the instructions to detect the head or the eyes comprises instructions to (Rosenberg, Figs. 1 and 4, elements 128-129, and Figs. 6-7, and paragraphs [0051], [0060], [0071]-[0072], and [0076]-[0077], disclosing an image capture device (element 128) and wearable (element 129) that captures information related to head position and eyes/gaze of the user):
detect and track a position of the head or the eyes of the user based on the first set of images; and detect and track a gaze of the eyes of the user based on the second set of images (Rosenberg, Fig. 1, elements 122 and 132; Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077], and [0081], disclosing a processor/controller (Fig. 1, elements 118/114) utilizes the captured images of the user from the image capture device (element 128) to determine the position of the user in relation to the display (based on eyes/gaze and head tracking information), can control the robotic arm (element 106) to move the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user’s eye gaze is directed, and can also reposition the display based on the determined position (including head/eye position) of the user).”
However, Rosenberg-Meglan does not expressly disclose the claimed additional camera for detecting and tracking. Bychkov expressly discloses the following:
“wherein the at least one camera comprises a first camera configured to capture a first set of images of the head or the eyes of the user and a second camera configured to capture a second set of images of the eyes of the user, wherein the instructions to detect the head or the eyes comprises instructions to: detect and track a position of the head or the eyes of the user based on the first set of images; and detect and track a gaze of the eyes of the user based on the second set of images (Figs. 1-3, and paragraphs [0048]-[0056], disclosing an illumination and infrared imaging subassembly (Fig. 2, elements 50 and 52) that captures a 3D location of the user’s head, and a gaze sensor (Fig. 2, element 60) for detecting a gaze direction of the user; paragraph [0040], disclosing detecting illumination from the iris; for further support, see Claims 2-3, disclosing identifying the direction of the gaze comprises analyzing light reflected off an element of the eye, wherein the element is selected from a list comprising a pupil, an iris and a cornea; paragraph [0139], disclosing depth and gaze information collected by device can be used in enhancing the capabilities and user experience of 3D displays, particularly autostereoscopic displays; see also Figs. 5-6C).”
Accordingly, before the effective filing date, it would have been obvious to one of ordinary skill in the art, having the teachings of Rosenberg-Meglan and Bychkov (hereinafter Rosenberg-Meglan-Bychkov), to modify the system, method, and processor of a system of Rosenberg-Meglan to use the claimed additional camera for detecting and tracking as taught by Bychkov. The motivation for doing so would have been to create the advantage of extracting the true gaze direction and reliably identifying the interactive item at which the user is looking (see Bychkov, Figs. 1-3, and paragraph [0056]; see also Figs. 5-6C).
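EXAMINER'S NOTE (illustrative only): the combined use of a head-position camera and a gaze sensor may be sketched as follows; the display is assumed to lie in the plane z = 0, and all names and values are hypothetical rather than taken from Bychkov.

    def gaze_target(head_pos_mm, gaze_dir):
        """Intersect the gaze ray (origin: tracked head/eye position from the
        first camera; direction: unit vector from the second, gaze-tracking
        camera) with the display plane z = 0."""
        x0, y0, z0 = head_pos_mm
        gx, gy, gz = gaze_dir
        if gz >= 0:  # gaze is not directed toward the display plane
            return None
        t = -z0 / gz
        return (x0 + t * gx, y0 + t * gy)

    # Head 600 mm from the display, gazing slightly left and downward:
    print(gaze_target((0.0, 0.0, 600.0), (-0.1, -0.05, -0.99)))
    # -> approximately (-60.6, -30.3), in mm on the display plane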
Regarding Claim 7, Rosenberg-Meglan-Bychkov discloses:
“wherein the spatial relationship is adjusted to maintain the prescribed distance between the position of the head or the eyes of the user and the 3D display (Rosenberg, Fig. 1, elements 122 and 132; Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077], and [0081], disclosing a processor/controller (Fig. 1, elements 118/114) utilizes the captured images of the user from the image capture device (element 128) to determine the position of the user in relation to the display (based on eyes/gaze and head tracking information), can control the robotic arm (element 106) to move the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user’s eye gaze is directed, and can also reposition the display based on the determined position (including head/eye position) of the user),
wherein the eyes of the user are substantially centered according to the gaze of the eyes of the user (Meglan, Fig. 1, elements 122 and 132, and Fig. 6, and paragraphs [0067], [0093], and [0099]-[0105]; examiner notes that paragraph [0101] discloses that the centering and distance determinations of Fig. 5, element 506 are included in Fig. 6, element 606).”
The motivation set forth for Claims 2, 4, 13, and 18 applies equally here.
Regarding Claims 10, 16, and 20, Rosenberg-Meglan-Bychkov discloses:
“wherein the 3D display comprises a graphical user interface (GUI) that has a display window showing the 3D images and a control panel associated with an application, wherein the system further comprises a sensor configured to detect eyes of the user, wherein the memory comprises further instructions to: determine that the user is focusing on the control panel in the GUI based on the detected eyes; and responsive to determining that the user is focusing on the control panel, activate the application (Bychkov, Figs. 5-6C, and paragraphs [0077]-[0085], visually disclosing the claimed control panel having multiple displayed interactive items (i.e., plurality of displayed icons, elements 36B-36C), in which second interactive item(s)/application(s) associated with the displayed interactive item (i.e., application related to a displayed icon, elements 36B-36F) is/are activated when the processor determines the gaze of the user is directed at the icon; see also paragraph [0139], disclosing depth and gaze information collected by device can be used in enhancing the capabilities and user experience of 3D displays, particularly autostereoscopic displays; see also Figs. 1-3).”
Accordingly, before the effective filing date, it would have been obvious to one of ordinary skill in the art, having the teachings of Rosenberg-Meglan-Bychkov, to modify the system, method, and processor of a system of Rosenberg-Meglan to use the claimed interactive application as taught by Bychkov. The motivation for doing so would have been to create the advantage of an interactive user interface that can detect which on-screen interactive item the user is looking at (see Bychkov, Figs. 5-6C, and paragraphs [0075], [0077]-[0085], and [0139]; see also Figs. 1-3).
Regarding Claims 11 and 17, Rosenberg-Meglan-Bychkov discloses:
“wherein the memory comprises further instructions to: track, using the sensor, a gaze of the eyes of the user (Rosenberg, Fig. 1, elements 122 and 132; Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077], and [0081], disclosing a processor/controller (Fig. 1, elements 118/114) utilizes the captured images of the user from the image capture device (element 128) to determine the position of the user in relation to the display (based on eyes/gaze and head tracking information), can control the robotic arm (element 106) to move the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user’s eye gaze is directed, and can also reposition the display based on the determined position (including head/eye position) of the user); and
control one or more features of the application based on the gaze of the eyes of the user upon the control panel (Bychkov, Figs. 5-6C, and paragraphs [0077]-[0085], visually disclosing the claimed control panel having multiple displayed interactive items (i.e., plurality of displayed icons, elements 36B-36C), in which second interactive item(s)/application(s) associated with the displayed interactive item (i.e., application related to a displayed icon, elements 36B-36F) is/are activated when the processor determines the gaze of the user is directed at the icon; see also paragraph [0139], disclosing depth and gaze information collected by device can be used in enhancing the capabilities and user experience of 3D displays, particularly autostereoscopic displays; see also Figs. 1-3).”
The motivation set forth for Claim 6 applies equally here.
Regarding Claims 12 and 21, Rosenberg-Meglan-Bychkov discloses:
“wherein the memory comprises further instructions to deactivate the application in response to a determination that the user is not focusing on the control panel or is focusing on a close icon of the control panel (Bychkov, paragraph [0074]; Figs. 5-6C, and paragraphs [0077]-[0085], visually disclosing the claimed control panel having multiple displayed interactive items (i.e., plurality of displayed icons, elements 36B-36C), in which second interactive item(s)/application(s) associated with the displayed interactive item (i.e., application related to a displayed icon, elements 36B-36F) is/are activated when the processor determines the gaze of the user is directed at the icon; see also paragraph [0139], disclosing depth and gaze information collected by device can be used in enhancing the capabilities and user experience of 3D displays, particularly autostereoscopic displays; see also Figs. 1-3).”
Accordingly, before the effective filing date, it would have been obvious to one of ordinary skill in the art, having the teachings of Rosenberg-Meglan-Bychkov, to modify the system, method, and processor of a system of Rosenberg-Meglan to use the claimed deactivation of the application as taught by Bychkov. The motivation for doing so would have been to create the advantage of reducing power consumption (see Bychkov, paragraph [0074]; Figs. 5-6C, and paragraphs [0075], [0077]-[0085], and [0139]; see also Figs. 1-3).
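EXAMINER'S NOTE (illustrative only): the gaze-driven activation and deactivation of the control panel's application may be sketched as follows; the dwell threshold, class, and panel bounds are hypothetical and are not taken from Bychkov.

    DWELL_S = 0.8  # hypothetical dwell time before activation

    class ControlPanel:
        def __init__(self, bounds):
            self.bounds = bounds  # (x0, y0, x1, y1) in display pixels
            self.dwell = 0.0
            self.active = False

        def update(self, gaze_px, dt):
            """Activate the associated application once the tracked gaze has
            dwelled on the panel; deactivate when focus leaves the panel."""
            x0, y0, x1, y1 = self.bounds
            on_panel = (gaze_px is not None
                        and x0 <= gaze_px[0] <= x1 and y0 <= gaze_px[1] <= y1)
            self.dwell = self.dwell + dt if on_panel else 0.0
            if on_panel and self.dwell >= DWELL_S:
                self.active = True   # launch/keep the application running
            if not on_panel:
                self.active = False  # user is no longer focusing on the panel

    # Thirty 33 ms frames of gaze on the panel (~1 s) activate it; a single
    # frame of gaze elsewhere deactivates it.
    panel = ControlPanel((0, 0, 200, 100))
    for _ in range(30):
        panel.update((50, 50), 0.033)
    print(panel.active)              # -> True
    panel.update((500, 400), 0.033)
    print(panel.active)              # -> False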
Claim Rejections - 35 USC § 103
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Rosenberg-Meglan, and further in view of Payyavula et al., US Patent Application Publication No. 2020/0015918 A1, hereinafter Payyavula.
Regarding Claim 8, Rosenberg-Meglan discloses:
“wherein the memory comprises further instructions to: determine, based on the detected eyes of the user, that a gaze of the user is . . . directed at the 3D display; and in response, activate . . . towards the 3D display (Rosenberg, Fig. 1, elements 122 and 132; Figs. 1, 4, and 6-7, and paragraphs [0051], [0060]-[0061], [0066], [0071]-[0072], [0076]-[0077], and [0081], disclosing a processor/controller (Fig. 1, elements 118/114) utilizes the captured images of the user from the image capture device (element 128) to determine the position of the user in relation to the display (based on eyes/gaze and head tracking information), can control the robotic arm (element 106) to move the patient image capture device (element 109) to a target location within the surgical site corresponding to the location on the image displayed on the autostereoscopic display at which the user’s eye gaze is directed, and can also reposition the display based on the determined position (including head/eye position) of the user).”
However, Rosenberg-Meglan does not expressly disclose determining that the gaze is not directed at the display and activating an alarm or notification in response. Payyavula expressly discloses the following:
“wherein the memory comprises further instructions to: determine, based on the detected eyes of the user, that a gaze of the user is not directed at the 3D display; and in response, activate one or more alarms or notifications to get an attention of the user back towards the 3D display (Figs. 1 and 3, and paragraphs [0022], [0033], and [0047]-[0049]).”
Accordingly, before the effective filing date, it would have been obvious to one of ordinary skill in the art, having the teachings of Rosenberg-Meglan and Payyavula, to modify the system, method, and processor of a system of Rosenberg-Meglan to use the claimed determination that the gaze is not directed at the display and the activation of an alarm or notification, as taught by Payyavula. The motivation for doing so would have been to create the advantage of not only providing safe and reliable mechanisms to determine when the teleoperational instruments should be responsive to operator movement of the remote instrument, but also alerting the operator when exiting the following mode (see Payyavula, Figs. 1-3, and paragraphs [0005], [0022], [0033], and [0047]-[0049]).
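EXAMINER'S NOTE (illustrative only): the recited alarm behavior may be sketched as follows; the time limit, function name, and message are hypothetical and are not taken from Payyavula.

    AWAY_LIMIT_S = 2.0  # hypothetical off-display time before alerting

    def check_attention(gaze_on_display, away_s, dt):
        """Accumulate the time the detected gaze is not directed at the 3D
        display and raise a notification once the limit is exceeded."""
        away_s = 0.0 if gaze_on_display else away_s + dt
        if away_s >= AWAY_LIMIT_S:
            print("ALERT: gaze away from display; attention required")
        return away_s

    # Simulate 2.5 s of the user looking away in 0.5 s steps; the alarm
    # fires once the accumulated away time reaches the 2.0 s limit.
    t = 0.0
    for _ in range(5):
        t = check_attention(False, t, 0.5)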
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN M WALSH whose telephone number is (571)270-0423. The examiner can normally be reached M-F 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHLEEN M WALSH/Primary Examiner, Art Unit 2482