Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on 9/5/25 has been entered and made of record. Claims 1, 17 and 20 are amended. Claim 10 is cancelled. Claims 1-9 and 11-20 are pending.
Response to Arguments
Applicant’s arguments with respect to claims 1, 17 and 20 have been considered but they are not persuasive.
Applicant asserts that the independent claims have been amended to recite that "the sub-volume is oriented in a desired way relative to a surgical site and an expected range of movement of the tracked object," and argues that, although the Office action contends that Otto discloses that the field of view 20 and/or zones 32, 38, 110 can be represented by virtual models mixed with a real-time image of the surgical field, this does not disclose all the elements of the independent claims, as amended, including the above-noted recitation (Remarks at p. 9).
Examiner notes that the amended limitation “the sub-volume is oriented in a desired way” includes a relative or subjective term without any further limitation establishing what constitutes “a desired way”. It is unclear which orientations qualify as “a desired way”. Because the claims provide no criteria for determining whether a sub-volume is “oriented in a desired way”, the limitation renders the scope of the independent claims indefinite.
Examiner also notes that Otto teaches a tracking volume and sub-volumes within it. For example, Otto discloses: “FIG. 2 is a perspective view of one example of a three-dimensional field of view of the camera, a first zone within the field of view at a first location with the first zone defining a range of acceptable positions for the first object, a second zone within the field of view at a second location with the second zone defining a range of acceptable positions for the second object, and a third zone within the field of view at a third location with the third zone defining a range of acceptable positions for a third object” in [0013]; “In some cases, the purposes may include aiding in placement of objects relative to a preferred zone or boundary within the field of view 20.” in [0023]; “As shown in FIG. 2, the three-dimensional field of view 20 comprises a trapezoidal configuration. The field of view 20 may be adjustable to any suitable shape or volume such that the field of view 20 encompasses any desired objects. Objects may include, but are not limited to, tables, surgical instruments, an anatomy, robotic manipulators, drapes, or any other component or subject of a surgical system” in [0043]; “the field of view 20 and/or zones 32, 38, 110 can be represented by virtual models (e.g., geometrical outlines) mixed with a real time image of the surgical field and objects or trackers 12, 14, 100 as captured by the video camera 130” in [0074]; and the optical sensors or other tracking sensors 28 of the camera 16 in Fig 2, as shown below:
[media_image1.png: greyscale reproduction of Otto, Fig. 2]
“At step 210, the controller 18 may be configured to define a first zone 32 within the field of view 20 at a first location 34 with the first zone 32 defining a (first) range 36 of acceptable positions for the first object or tracker 12 relative to the position of the camera 16. Furthermore, the controller 18 may also be configured to define a second zone 38 with the field of view 20 at a second location 40 with the second zone 38 defining a (second) range 42 of acceptable positions for the second object or tracker 14 relative to the position of the camera 16” in [0061]. Here, the first/second/third zones are defined within the field of view and may be adjusted to any volume to encompass any desired objects. Thus, Otto's sub-volumes are oriented in a desired way relative to any targets.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-9 and 11-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 17 and 20 recite the limitation “wherein the sub-volume is oriented in a desired way relative to a surgical site and an expected range of movement of the tracked object”. It is unclear how to construe “a desired way” because the claims recite no criteria for determining what constitutes a “desired way”; the limitation therefore renders the scope of the claims indefinite. Claims 2-9, 11-16 and 18-19 are dependent claims and are rejected under a similar rationale.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7, 13, 16-17 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Otto et al. (US 2019/0321126) in view of Lohr et al. (US 2020/0342673) and PATERIYA (US 2021/0286427 A1).
As to Claim 1, Otto teaches a camera tracking system configured to:
obtain a model defining a tracking volume and a sub-volume of the tracking volume of a set of tracking cameras on an auxiliary tracking bar relative to pose of the set of tracking cameras, wherein the tracking volume defines a maximum volume trackable by the set of tracking cameras and the sub-volume defines a volume within the tracking volume and represents a volume with a predetermined higher tracking accuracy of a tracked object, wherein the sub-volume is oriented in a desired way relative to a surgical site and an expected range of movement of the tracked object (Otto discloses “FIG. 2 is a perspective view of one example of a three-dimensional field of view of the camera, a first zone within the field of view at a first location with the first zone defining a range of acceptable positions for the first object, a second zone within the field of view at a second location with the second zone defining a range of acceptable positions for the second object, and a third zone within the field of view at a third location with the third zone defining a range of acceptable positions for a third object” in [0013]; “In some cases, the purposes may include aiding in placement of objects relative to a preferred zone or boundary within the field of view 20.” in [0023]; “the field of view 20 and/or zones 32, 38, 110 can be represented by virtual models (e.g., geometrical outlines) mixed with a real time image of the surgical field and objects or trackers 12, 14, 100 as captured by the video camera 130” in [0074]; optical sensors or other tracking sensors 28 of the camera 16 in Fig 2; “At step 210, the controller 18 may be configured to define a first zone 32 within the field of view 20 at a first location 34 with the first zone 32 defining a (first) range 36 of acceptable positions for the first object or tracker 12 relative to the position of the camera 16. Furthermore, the controller 18 may also be configured to define a second zone 38 with the field of view 20 at a second location 40 with the second zone 38 defining a (second) range 42 of acceptable positions for the second object or tracker 14 relative to the position of the camera 16” in [0061]. Please note that it would be obvious to arrange these tracking cameras on a tracking bar. See also above Response to Arguments on “desired way”.)
Otto teaches a 3D tracking volume from a camera. Otto does not explicitly teach optical markers disposed on the XR headset or the pose of an HMD. The combination of Lohr further teaches the following limitation:
receive tracking information from the set of tracking cameras indicating pose of an extended reality (XR) headset relative to the set of tracking cameras based on tracking by the set of tracking cameras optical markers disposed on the XR headset, wherein the XR headset includes a display screen which is at least semi-transparent and a display unit that projects virtual content toward the display screen for reflection towards a user's eyes such that the user sees the virtual content superimposed on the user's view of a real-world scene that is being passed through the display screen (Otto discloses “the camera 16 is configured to determine a pose (i.e. a position and orientation) of one or more objects and/or trackers before and during a surgical procedure to detect movement of the object(s)… The camera 16 may include a detection device that obtains a pose of an object with respect to a coordinate system of the detection device. As the object moves in the coordinate system, the detection device tracks the pose of the object” in [0032]; “The tracking markers 26 may be passive, active or combinations thereof” in [0027]; “Combinations of real and virtual images can be utilized… For example, the field of view 20 and/or zones 32, 38, 110 can be represented by virtual models (e.g., geometrical outlines) mixed with a real time image of the surgical field and objects or trackers 12, 14, 100 as captured by the video camera 130… The GUI 44 and corresponding image representations can be shown on the display(s) 118 or on a digital transparent lens 134 of a head-mounted device 132” in [0074]. Here, it is obvious that the transparent lens of an HMD may refer to at least a semi-transparent display. Lohr further discloses “In some instances, to detect the location and/or pose of the user the HMD may include markers” in [0025]; “In some instances, the marker(s) 134 may be used to determine a point-of-view of the user 100. For example, a distance between the marker(s) 134 and the eyes of the user 100 may be known. In capturing image data of the marker(s) 134, the tracking system 122 (and/or other communicatively coupled computing device) may determine the relative point-of-view of the user 100. Accordingly, the tracking system 122 may utilize the marker(s) 134 of the HMD 104 to determine a relative location and/or pose of the user 100 within the environment 102” in [0041]);
wherein the tracking information from the set of tracking cameras further indicates pose of the tracked object relative to the set of tracking cameras, and the camera tracking system is further configured to: determine the pose of the tracked object based on the tracking information (Otto discloses “to display image representations of the field of view, the first zone relative to the field of view, the second zone relative to the field of view, the position of the first object relative to the first zone, and the position of the second object relative to the second zone” in [0009]; “Additionally, the camera 16 is configured to determine a pose (i.e. a position and orientation) of one or more objects and/or trackers before and during a surgical procedure to detect movement of the object(s)” in [0032]; “For example, the field of view 20 and/or zones 32, 38, 110 can be represented by virtual models (e.g., geometrical outlines) mixed with a real time image of the surgical field and objects or trackers 12, 14, 100 as captured by the video camera 130….The GUI 44 and corresponding image representations can be shown on the display(s) 118 or on a digital transparent lens 134 of a head-mounted device 132 worn by the user setting up the camera 16” in [0074]; “the GUI 44 can produce image representations of the field of view 20 from a first view 20a and a second view 20b… the two views 20a, 20b may be any combination of the top, bottom, front, back, right, left and the like view of the three-dimensional field of view 20 relative to the camera 16” in [0077]. Lohr further discloses “the tracking system may determine the location and/or pose of the user” in [0024]; “images or data obtained from the tracking system may be used to generate a 3D model (or mesh) of the real-world environment” in [0027]; “the remote computing resources 142 may utilize the first image data received from the tracking system 122 to associate the depth map, or depth values of the environment 102, with certain locations within the area and/or the environment… the 3D model may be transmitted to the HMD 104 and/or the tracking system 122, and/or may be stored in memory of the HMD 104, the tracking system 122, and/or the remote computing resources 142” in [0070]; “knowing the pose of the user 100, the remote computing resources 142 may project or overlay the first image data onto the portion of the 3D model of the environment 102 to generate the third image data” in [0078]; “the remote computing resources 142 may transmit the third image data to the HMD 104, where the third image data represents the first image data as projected onto the portion of the 3D model” in [0079].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Otto with the teaching of Lohr so as to display the content with a viewpoint associated with the pose of the HMD.
The combination of PATERIYA further teaches the following limitation:
display, through the XR headset, a graphical representation of the tracking volume and the sub-volume of the set of tracking cameras from a perspective of the XR headset such that the graphical representation of the tracking volume and the sub-volume projected from the display unit is reflected by the display screen and is superimposed on the real-world scene passing through the display screen (Otto teaches generating a tracking volume as shown in Fig 2. Lohr also discloses “In some instances, the location and/or pose of the user within the real-world environment may be utilized to present warnings, indications, or content to the user. For example, if the user is approaching a wall of the real-world environment, knowing the location of the user and the wall (via the tracking system), the HMD may display images representing the wall within the real-world environment” in [0026]; “Moreover, images or data obtained from the tracking system may be used to generate a 3D model (or mesh) of the real-world environment” in [0027]. PATERIYA further discloses “A volume of interest could then for example be a volume comprising a real world object or a virtual object fixed to world space” in [0015]; “The present disclosure is at least partly based on the realization that a virtual object can be added in an extended reality view of a user based on a gaze point in world space of a user. In more detail, the virtual object is added if the user is gazing at a gaze point in world space consistent with a volume of interest of defined one or more one volumes of interest in world space” in [0018]; “In further embodiments, the virtual object is added to the extended reality view in a position fixed in world space in relation to the volume of interest of the defined one or more one volumes of interest. For example, the virtual object may be fixed in world space within or close to the volume of interest, or such that an association to the volume of interest is indicated” in [0029]; see also [0048].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Otto and Lohr with the teaching of PATERIYA so as to add a virtual object in relation to a volume of interest in world space using different methods (PATERIYA, [0048]).
As to Claim 7, Otto in view of Lohr and PATERIYA teaches the camera tracking system of Claim 1, further configured to:
obtain tracking accuracy information characterizing accuracies at which the set of tracking cameras will track an object in at least a first location and a second location, the first and second locations being spaced apart from each other within the tracking volume, wherein a graphical representation of the tracking volume is further generated to visually indicate the characterized accuracies at which the set of tracking cameras will track the object (Otto, Fig 2).
As to Claim 13, Otto in view of Lohr and PATERIYA teaches the camera tracking system of Claim 1, wherein the set of tracking cameras are separate and spaced apart from the XR headset while the camera tracking system is receiving the tracking information from the set of tracking cameras (Otto, Fig 2), and wherein the camera tracking system is further configured to:
obtain an XR headset tracking model defining an XR headset tracking volume of another set of tracking cameras which are attached to the XR headset; generate a graphical representation of the XR headset tracking volume from the perspective of the XR headset based on the XR headset tracking model, and provide the graphical representation of the XR headset tracking volume to the XR headset for display to the user (Otto teaches a headset 132 and tracking volume from cameras 16 in Fig 2. Lohr further discloses “At 604, the process 600 may generate a depth map and/or a 3D mesh based at least in part on the first image data. For example, the remote computing resources 142 may generate a depth map and/or the 3D mesh based at least in part on the first image data being received from the HMD 104 (i.e., using stereo camera imaging). As the first image data represents a portion of the environment 102, the depth map and/or the 3D mesh may also correspond to a depth map and/or a 3D mesh of the portion of the environment 102” in [0073]; “generating the third image data depicting the point-of-view of the user 100” in [0078]; “the remote computing resources 142 may transmit the third image data to the HMD 104, where the third image data represents the first image data as projected onto the portion of the 3D model” in [0079].)
As to Claim 16, Otto in view of Lohr and PATERIYA teaches the camera tracking system of Claim 1, further configured to:
obtain other tracking accuracy information characterizing accuracies at which the another set of tracking cameras attached to the XR headset will track an object in at least a first location and a second location, the first and second locations being spaced apart from each other within the XR headset tracking volume, wherein a graphical representation of the XR headset tracking volume is further generated to visually indicate for at least some of the spaced apart locations within the XR headset tracking volume the corresponding characterized accuracies at which the another set of tracking cameras attached to the XR headset will track the object (Otto, Fig 2 & 5-6).
Claim 17 recites limitations similar to those of claim 1, and further recites wherein the set of tracking cameras are separate and spaced apart from the XR headset while the camera tracking system is receiving the tracking information from the set of tracking cameras, and wherein the camera tracking system is further configured to: obtain an XR headset tracking model defining an XR headset tracking volume of another set of tracking cameras which are attached to the XR headset; generate a graphical representation of the XR headset tracking volume from the perspective of the XR headset based on the XR headset tracking model, and provide the graphical representation of the XR headset tracking volume to the XR headset for display to the user. Otto discloses “Additionally, the camera 16 is configured to determine a pose (i.e. a position and orientation) of one or more objects and/or trackers before and during a surgical procedure to detect movement of the object(s)” in [0032]; “The GUI 44 may produce 2D, 3D, or a mixture of 2D/3D image representations of any of the aforementioned field of view 20, zones 32, 38, 110, objects or trackers 12, 14, 100…The GUI 44 and corresponding image representations can be shown on the display(s) 118 or on a digital transparent lens 134 of a head-mounted device 132 worn by the user setting up the camera 16.” in [0074]; “In one example, the GUI 44 can produce image representations of the field of view 20 from a first view 20a and a second view 20b. As shown in FIG. 5, the first view 20a is a front or point-of view and the second view 20b is a top view of the three-dimensional field of view 20” in [0077].
Claim 19 is rejected based upon similar rationale as Claim 7.
Claim 20 recites limitations similar to those of claim 17, but in computer program product form. Therefore, the same rationale used for claim 17 applies.
Claims 2, 14-15 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Otto in view of Lohr and PATERIYA, further in view of Bognar et al. (US 2018/0182164).
As to Claim 2, Otto in view of Lohr and PATERIYA teaches the camera tracking system of Claim 1, wherein the system is configured to render a graphical representation of the tracking volume as lines forming a wireframe surface extending between the locations of the edge boundaries of the tracking volume from the perspective of the XR headset (Otto discloses “the field of view 20 and/or zones 32, 38, 110 can be represented by virtual models (e.g., geometrical outlines)” in [0074]. Lohr also discloses rendering a graphical representation based on the perspective of the HMD in [0070, 0078]. Bognar further teaches “Sensor System 100 optionally includes Registration Logic 140, configured to generate a wireframe representation surfaces within an environment” in [0033].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Otto, Lohr and PATERIYA with the teaching of Bognar so as to generate a 3D wireframe defining edges and surfaces of a structure within a 3D volume without texture data to reduce the amount of computation on 3D rendering (Bognar, [0004, 0018]).
As to Claim 14, Otto in view of Lohr and PATERIYA teaches the camera tracking system of Claim 13, further configured to:
generate a graphical representation of the tracking volume of the set of tracking cameras based on determining locations of edge boundaries of the tracking volume from the perspective of the XR headset, and render the graphical representation of the tracking volume as lines forming a wireframe surface extending between the locations of the edge boundaries of the tracking volume from the perspective of the XR headset; and generate the graphical representation of the XR headset tracking volume based on determining locations of edge boundaries of the XR headset tracking volume from the perspective of the XR headset, and render the graphical representation of the XR headset tracking volume as lines forming another wireframe surface extending between the locations of the edge boundaries of the XR headset tracking volume from the perspective of the XR headset (Otto discloses “For example, the field of view 20 and/or zones 32, 38, 110 can be represented by virtual models (e.g., geometrical outlines) mixed with a real time image of the surgical field and objects or trackers 12, 14, 100 as captured by the video camera 130….The GUI 44 and corresponding image representations can be shown on the display(s) 118 or on a digital transparent lens 134 of a head-mounted device 132 worn by the user setting up the camera 16” in [0074]; “the GUI 44 can produce image representations of the field of view 20 from a first view 20a and a second view 20b… the two views 20a, 20b may be any combination of the top, bottom, front, back, right, left and the like view of the three-dimensional field of view 20 relative to the camera 16” in [0077]; see also Fig 2. Lohr further discloses generating a tracking volume of the set of tracking cameras in [0070] and a tracking volume of the set of HMD cameras in [0073]; and rendering a graphical representation based on the perspective of the HMD in [0070, 0078]. Bognar further teaches “Sensor System 100 optionally includes Registration Logic 140, configured to generate a wireframe representation surfaces within an environment” in [0033].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Otto, Lohr and PATERIYA with the teaching of Bognar so as to generate a 3D wireframe defining edges and surfaces of a structure within a 3D volume without texture data to reduce the amount of computation on 3D rendering (Bognar, [0004, 0018]).
As to Claim 15, Otto in view of Lohr, PATERIYA and Bognar teaches the camera tracking system of Claim 14, wherein the graphical representation of the tracking volume of the set of tracking cameras is generated to be visually distinguishable based on difference in color, darkness, shading, flashing, animated movement from the graphical representation of the XR headset tracking volume (Otto discloses “the image representations of the zones 32, 38 and positions 22, 24 of objects or trackers 12, 14 may be any suitable shape, size, and/or color. Additionally, corresponding zones and objects or trackers may be different shape, size, and/or color from each other” in [0082]. Official notice has been taken of the fact that the attributes of an image representation can be used to make representations visually distinguishable from each other, such as by changes in darkness, shading, flashing or animated movement, which is well known in the art (see MPEP 2144.03); see, e.g., Lang at [1221-1232].)
Claim 18 is rejected based upon similar rationale as Claim 2.
Claims 3-4 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Otto in view of Lohr and PATERIYA, further in view of Bognar and Tawada (US 2020/0314581).
As to Claim 3, Otto in view of Lohr, PATERIYA and Bognar teaches the camera tracking system of Claim 2. The combination of Tawada further teaches wherein the system controls a width of the lines forming the wireframe surface based on proximity of the pose of the tracked object to at least one of the edge boundaries of the tracking volume (Otto discloses “Additionally, the camera 16 is configured to determine a pose (i.e. a position and orientation) of one or more objects and/or trackers before and during a surgical procedure to detect movement of the object(s)” in [0032]; “In the configuration mentioned above, as shown in FIG. 2, the first range 36 and the second range 42 are each defined as a three-dimensional range of acceptable positions (e.g., a sphere). As shown throughout the Figures, the first zone 32 and second zone 38 are spherical in shape. The zones 32, 38 may be any suitable three-dimensional shape/volume. Other examples of 3D ranges include cones, cubes, hemi-spheres, or the like” in [0062]. Tawada further discloses “In this case, if the changed sound source position 383 is close to the boundary of the setting range 333, at least one of the sound source position 383, the setting range 333, and the cross arrow 393 may be highlighted by changing a display color or a line width of at least one of the sound source position 383, the setting range 333, and the cross arrow 393, or by blinking the display thereof” in [0055]; “In this case, if the changed sound source radius is close to the boundary of the control range 351, at least one of the sound source radius, the control range 351, and the bidirectional arrow 391 may be highlighted by changing a display color or a line width of at least one of the sound source radius, the control range 351, and the bidirectional arrow 391, or by blinking the display thereof” in [0056].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Otto, Lohr, PATERIYA and Bognar with the teaching of Tawada so as to update the display appearance associated with the line width to indicate the position of the tracked object close to an acceptable boundary (Tawada, [0055]).
As to Claim 4, Otto in view of Lohr, PATERIYA and Bognar teaches the camera tracking system of Claim 2. The combination of Tawada further teaches the camera tracking system configured to control width of the lines forming the wireframe surface by increasing width of segments of the lines forming a region of the wireframe surface responsive to a determination that a distance between the pose of the tracked object to the region of the wireframe surface satisfies a proximity notification rule (Otto discloses “In the configuration mentioned above, as shown in FIG. 2, the first range 36 and the second range 42 are each defined as a three-dimensional range of acceptable positions (e.g., a sphere). As shown throughout the Figures, the first zone 32 and second zone 38 are spherical in shape. The zones 32, 38 may be any suitable three-dimensional shape/volume. Other examples of 3D ranges include cones, cubes, hemi-spheres, or the like” in [0062]. Bognar further teaches “Sensor System 100 optionally includes Registration Logic 140, configured to generate a wireframe representation surfaces within an environment” in [0033]. Tawada further discloses “In this case, if the changed sound source position 383 is close to the boundary of the setting range 333, at least one of the sound source position 383, the setting range 333, and the cross arrow 393 may be highlighted by changing a display color or a line width of at least one of the sound source position 383, the setting range 333, and the cross arrow 393, or by blinking the display thereof” in [0055]; “In this case, if the changed sound source radius is close to the boundary of the control range 351, at least one of the sound source radius, the control range 351, and the bidirectional arrow 391 may be highlighted by changing a display color or a line width of at least one of the sound source radius, the control range 351, and the bidirectional arrow 391, or by blinking the display thereof” in [0056].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Otto, Lohr, PATERIYA and Bognar with the teaching of Tawada so as to change the display appearance associated with the line width to indicate the position of the tracked object close to an acceptable boundary (Tawada, [0055]).
As to Claim 8, Otto in view of Lohr and PATERIYA teaches the camera tracking system of Claim 7. The combination of Bognar and Tawada further teaches the camera tracking system configured to render the graphical representation of the tracking volume as lines forming a wireframe surface extending between the locations of the edge boundaries of the tracking volume from the perspective of the XR headset, wherein width of the lines forming the wireframe surface are varied to visually indicate for at least some of the spaced apart locations within the tracking volume corresponding variation in the characterized accuracies at which the set of tracking cameras will track the object (Otto discloses “In the configuration mentioned above, as shown in FIG. 2, the first range 36 and the second range 42 are each defined as a three-dimensional range of acceptable positions (e.g., a sphere). As shown throughout the Figures, the first zone 32 and second zone 38 are spherical in shape. The zones 32, 38 may be any suitable three-dimensional shape/volume. Other examples of 3D ranges include cones, cubes, hemi-spheres, or the like” in [0062]; acceptable distance range in [0128-0129]. Bognar further teaches “Sensor System 100 optionally includes Registration Logic 140, configured to generate a wireframe representation surfaces within an environment” in [0033]. Tawada further discloses “In this case, if the changed sound source position 383 is close to the boundary of the setting range 333, at least one of the sound source position 383, the setting range 333, and the cross arrow 393 may be highlighted by changing a display color or a line width of at least one of the sound source position 383, the setting range 333, and the cross arrow 393, or by blinking the display thereof” in [0055]; “In this case, if the changed sound source radius is close to the boundary of the control range 351, at least one of the sound source radius, the control range 351, and the bidirectional arrow 391 may be highlighted by changing a display color or a line width of at least one of the sound source radius, the control range 351, and the bidirectional arrow 391, or by blinking the display thereof” in [0056].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Otto, Lohr and PATERIYA with the teaching of Bognar and Tawada so as to change the display appearance associated with the line width to indicate the position of the tracked object close to an acceptable boundary (Tawada, [0055]).
Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Otto in view of Lohr, PATERIYA and Bognar, further in view of Lang (US 2019/0110842 A1).
As to Claim 5, Otto in view of Lohr, PATERIYA and Bognar teaches the camera tracking system of Claim 2, wherein the tracking information from the set of tracking cameras further indicates pose of a tracked object relative to the set of tracking cameras, and the camera tracking system is further configured to: determine the pose of the tracked object based on the tracking information; and control opacity of the lines forming the wireframe surface when displayed on a see-through display screen of the XR headset, based on proximity of the pose of the tracked object to at least one of the edge boundaries of the tracking volume (Otto discloses “In the configuration mentioned above, as shown in FIG. 2, the first range 36 and the second range 42 are each defined as a three-dimensional range of acceptable positions (e.g., a sphere). As shown throughout the Figures, the first zone 32 and second zone 38 are spherical in shape. The zones 32, 38 may be any suitable three-dimensional shape/volume. Other examples of 3D ranges include cones, cubes, hemi-spheres, or the like” in [0062]. Bognar further teaches “Sensor System 100 optionally includes Registration Logic 140, configured to generate a wireframe representation surfaces within an environment” in [0033]. Lang further discloses “Any combination of transparency or opacity of virtual data and live data is possible” in [0206]; “…can be displayed by the OHMD and/or the display unit of the arthroscopy system using different patterns and colors, e.g. solid lines, broken lines, dotted lines, different colors, e.g. green, red, blue, orange, different thickness, different opacity or transparency” in [1680]; using various highlighting techniques known in the art, including but not limited to: different color, grey scale, pattern display, line type, etc. in [1221-1232]. Here, Lang’s highlighting can include the control of opacity of the lines forming the wireframe surface.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Otto, Lohr, PATERIYA and Bognar with the teaching of Lang so as to provide a visual warning by adjusting opacity of virtual data to enhance the highlighting.
As to Claim 6, Otto in view of Lohr, PATERIYA, Bognar and Lang teaches the camera tracking system of Claim 5, further configured to control opacity of the lines forming the wireframe surface by increasing opacity of segments of the lines forming a region of the wireframe surface responsive to a determination that a distance between the pose of the tracked object to the region of the wireframe surface satisfies a proximity notification rule (Otto discloses “In the configuration mentioned above, as shown in FIG. 2, the first range 36 and the second range 42 are each defined as a three-dimensional range of acceptable positions (e.g., a sphere). As shown throughout the Figures, the first zone 32 and second zone 38 are spherical in shape. The zones 32, 38 may be any suitable three-dimensional shape/volume. Other examples of 3D ranges include cones, cubes, hemi-spheres, or the like” in [0062]; acceptable distance range in [0128-0129]. Bognar also teaches “Sensor System 100 optionally includes Registration Logic 140, configured to generate a wireframe representation surfaces within an environment” in [0033]. Lang further discloses “If the surgeon places the pedicle screw or any related surgical instruments for the placement of the pedicle screw too close to the safe zone or within the safe zone, the area can be highlighted or another visual or acoustic alert can be triggered by the software” in [1550]; “Any combination of transparency or opacity of virtual data and live data is possible” in [0206]; “…can be displayed by the OHMD and/or the display unit of the arthroscopy system using different patterns and colors, e.g. solid lines, broken lines, dotted lines, different colors, e.g. green, red, blue, orange, different thickness, different opacity or transparency” in [1680]; using various highlighting techniques known in the art, including but not limited to: different color, grey scale, pattern display, line type, etc. in [1221-1232]. Here, Lang’s highlighting can include the control of opacity of the lines forming the wireframe surface.)
Claims 9 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Otto in view of Lohr, PATERIYA and Lang (US 2019/0110842 A1).
As to Claim 9, Otto in view of Lohr and PATERIYA teaches the camera tracking system of Claim 7, wherein the generation of the graphical representation of the tracking volume to visually indicate for the spaced apart locations the characterized accuracies at which the set of tracking cameras will track the object, comprises varying contrast of the graphical representation of the tracking volume to visually indicate for at least some of the spaced apart locations within the tracking volume corresponding variation in the characterized accuracies at which the set of tracking cameras will track the object (Otto discloses “In the configuration mentioned above, as shown in FIG. 2, the first range 36 and the second range 42 are each defined as a three-dimensional range of acceptable positions (e.g., a sphere). As shown throughout the Figures, the first zone 32 and second zone 38 are spherical in shape. The zones 32, 38 may be any suitable three-dimensional shape/volume. Other examples of 3D ranges include cones, cubes, hemi-spheres, or the like” in [0062]; acceptable distance range in [0128-0129]. Lang further discloses “If the surgeon places the pedicle screw or any related surgical instruments for the placement of the pedicle screw too close to the safe zone or within the safe zone, the area can be highlighted or another visual or acoustic alert can be triggered by the software” in [1550]; “Any combination of transparency or opacity of virtual data and live data is possible” in [0206]; “Different changes in color, brightness, intensity, and/or contrast can be applied to different virtual data” in [1466]; “…can be displayed by the OHMD and/or the display unit of the arthroscopy system using different patterns and colors, e.g. solid lines, broken lines, dotted lines, different colors, e.g. green, red, blue, orange, different thickness, different opacity or transparency” in [1680]; using various highlighting techniques known in the art, including but not limited to: different color, grey scale, pattern display, line type, etc. in [1221-1232]. Here, Lang’s highlighting can include the control of contrast of the graphical representation.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Otto, Lohr and PATERIYA with the teaching of Lang so as to provide a visual warning by adjusting contrast of virtual data to enhance the highlighting.
As to Claim 11, Otto in view of Lohr, PATERIYA and Lang teaches the camera tracking system of Claim 1, further configured to generate the graphical representation of the sub-volume of the tracking volume to be visually distinguishable from a graphical representation of the tracking volume while the graphical representations of the sub-volume of the tracking volume and the tracking volume are concurrently provided to the XR headset for display to the user (Otto, Fig 2 & 5-6).
As to Claim 12, Otto in view of Lohr, PATERIYA and Lang teaches the camera tracking system of Claim 11, wherein the graphical representation of the sub-volume of the tracking volume is generated to be visually distinguishable based on difference in color, darkness, shading, flashing, animated movement from the graphical representation of the tracking volume (Otto discloses “The machine vision system may utilize any suitable image processing algorithm for determining such distance information, including, but not limited to utilizing a depth map generation, segmentation, edge detection, color analysis, blob detection, pixel analysis, pattern recognition, or the like” in [0037]; “Hence, the image representations of the zones 32, 38 and positions 22, 24 of objects or trackers 12, 14 may be any suitable shape, size, and/or color. Additionally, corresponding zones and objects or trackers may be different shape, size, and/or color from each other. For example, the first zone 32 may be represented as a square outline and the position 22 of the first object or tracker 12 may be represented as a solid circle. As used herein, the term "outline" is generally defined as a perimeter or boundary line of the zone as displayed on the GUI 44” in [0082]. Official notice has been taken of the fact that the cited “any suitable shape, size, and/or color” may include well-known attributes such as brightness, opacity, animation, etc., which is well known in the art (see MPEP 2144.03); see, e.g., Lang at [1221-1232].)
Conclusion
THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEIMING HE whose telephone number is (571)270-1221. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached on 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Weiming He/
Primary Examiner, Art Unit 2611