DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 8-9, and 11-12 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Tsukagoshi et al. (US 2012/0313933 A1) (hereinafter "Tsukagoshi").
In regards to claim 1, Tsukagoshi teaches an image processing apparatus comprising:
a processor (e.g. Fig.10: control unit 145), wherein the processor is configured to:
acquire a plurality of three-dimensional coordinates for specifying positions of a plurality of pixels included in a three-dimensional image showing a target object in a real space, generated based on a plurality of two-dimensional images obtained by imaging the target object from a plurality of imaging positions in the real space at a plurality of viewpoints corresponding to the respective imaging positions, and a plurality of two-dimensional coordinates for specifying positions corresponding to the plurality of pixels in a screen on which the three-dimensional image is rendered (e.g. [0033]: workstation 130 is an image processing apparatus that performs image processing on a medical image; specifically, the workstation 130 according to the first embodiment performs various types of rendering processing on volume data acquired from the image storage device 120 to generate a parallax image group; the parallax image group is a plurality of parallax images captured from a plurality of points of view; for example, a parallax image group displayed on a monitor enabling an observer to view nine-parallax images stereoscopically with the naked eyes is nine parallax images whose viewpoints are different from one another; see also [0103]: acquisition unit 145a acquires rendering conditions used for generating a parallax image group that is parallax images of a predetermined parallax number from volume data that is three-dimensional medical image data; determination unit 145b sets corresponding information for causing a space coordinate of a stereoscopic image viewed stereoscopically by referring to the stereoscopic display monitor that displays the parallax image group (coordinates of the stereoscopic image space) to correspond to a space coordinate of a captured site in the volume data (coordinates in the real space) based on at least the parallactic angle between the parallax images constituting the parallax image group included in the rendering condition and the display size of the parallax image group displayed on the stereoscopic display monitor);
acquire unit length information indicating a relationship between a first unit length of a three-dimensional coordinate system defining the three-dimensional coordinates, and a second unit length of the real space (e.g. [0101]: configured … so as to display a gauge (a scale) for causing the image viewed stereoscopically by the observer on the monitor enabling stereoscopic vision to correspond to a real space; as above, [0103]: sets corresponding information for causing a space coordinate of a stereoscopic image viewed stereoscopically … (coordinates of the stereoscopic image space) to correspond to a space coordinate of a captured site in the volume data (coordinates in the real space); see also [0104]: based on the corresponding information, the determination unit 145b determines a scale for converting the length in the direction perpendicular to the display surface of the stereoscopic display monitor in the stereoscopic image space into the length in the real space; output unit 145c performs output control such that the scale is displayed on the stereoscopic display monitor in a manner superimposed on the stereoscopic image based on the parallax image group);
generate an object of which the second unit length is specifiable, based on the plurality of three-dimensional coordinates, the plurality of two-dimensional coordinates, and the unit length information (e.g. as above, [0101]: display a gauge (a scale) for causing the image viewed stereoscopically by the observer on the monitor enabling stereoscopic vision to correspond to a real space; [0104]: output unit 145c performs output control such that the scale is displayed on the stereoscopic display monitor in a manner superimposed on the stereoscopic image based on the parallax image group); and
output a first image in which the object and the three-dimensional image are shown in a comparable manner (e.g. as above, [0101]: display a gauge (a scale) for causing the image viewed stereoscopically by the observer on the monitor enabling stereoscopic vision to correspond to a real space; [0104]: output unit 145c performs output control such that the scale is displayed on the stereoscopic display monitor in a manner superimposed on the stereoscopic image based on the parallax image group).
In regards to method claim 11 and medium claim 12, claims 11 and 12 recite limitations that are similar in scope to the limitations recited in claim 1. Therefore, claims 11 and 12 are subject to rejection under the same rationale as applied hereinabove for claim 1. It is noted that Tsukagoshi discloses the use of a medium in paragraph [0184].
In regards to claim 2, Tsukagoshi teaches an apparatus, wherein the three-dimensional image is an image generated based on a plurality of two-dimensional images obtained by imaging the target object from a plurality of imaging positions in the real space (e.g. as above, [0033]: workstation 130 according to the first embodiment performs various types of rendering processing on volume data acquired from the image storage device 120 to generate a parallax image group; the parallax image group is a plurality of parallax images captured from a plurality of points of view; [0103]: acquisition unit 145a acquires rendering conditions used for generating a parallax image group that is parallax images of a predetermined parallax number from volume data that is three-dimensional medical image data).
In regards to claim 3, Tsukagoshi teaches an apparatus, wherein the unit length information is information generated based on a distance between imaging positions adjacent to each other among the plurality of imaging positions (e.g. as above, [0103]: sets corresponding information for causing a space coordinate of a stereoscopic image viewed stereoscopically … (coordinates of the stereoscopic image space) to correspond to a space coordinate of a captured site in the volume data (coordinates in the real space) based on at least the parallactic angle (angular distance) between the parallax images constituting the parallax image group included in the rendering condition and the display size of the parallax image group displayed on the stereoscopic display monitor; [0104]: based on the corresponding information, the determination unit 145b determines a scale for converting the length in the direction perpendicular to the display surface of the stereoscopic display monitor in the stereoscopic image space into the length in the real space).
In regards to claim 4, Tsukagoshi teaches an apparatus, wherein the distance is a distance obtained by a positioning unit (e.g. as above, [0103]: sets corresponding information for causing a space coordinate of a stereoscopic image viewed stereoscopically … (coordinates of the stereoscopic image space) to correspond to a space coordinate of a captured site in the volume data (coordinates in the real space) based on at least the parallactic angle (angular distance) between the parallax images constituting the parallax image group included in the rendering condition and the display size of the parallax image group displayed on the stereoscopic display monitor; see also [0024]: a "parallactic angle" represents an angle defined by viewpoint positions adjacent to each other among viewpoint positions set for generating the "parallax image group" and a predetermined position in a space indicated by the volume data (e.g. the center of the space); Examiner's note: this shows that the positioning of the captured parallax images would have been determined by a positioning unit).
In regards to claim 8, Tsukagoshi teaches an apparatus, wherein the processor is configured to:
change a first viewpoint for observing the three-dimensional image through the screen in response to a given first instruction (e.g. [0072]: volume rendering processing is performed by the three-dimensional virtual space rendering unit 1362k in accordance with rendering conditions; examples of the rendering conditions also include "parallel movement of the viewpoint position", "rotational movement of the viewpoint position", "enlargement of the parallax image group", and "reduction of the parallax image group"; such rendering conditions may be received from the operator via the input unit 131, or may be set by default; in both cases, the three-dimensional virtual space rendering unit 1362k receives the rendering conditions from the control unit 135, and performs the volume rendering processing on the volume data in accordance with the rendering conditions); and
change a second viewpoint for observing the object through the screen according to the first viewpoint (e.g. as above, [0072]: rendering conditions also include "parallel movement of the viewpoint position", "rotational movement of the viewpoint position"; such rendering conditions may be received from the operator via the input unit 131, or may be set by default; receives the rendering conditions from the control unit 135, and performs the volume rendering processing on the volume data).
In regards to claim 9, Tsukagoshi teaches an apparatus, wherein the processor is configured to change a third viewpoint for observing the object through the screen in response to a given second instruction (e.g. as above, [0072]: rendering conditions also include "parallel movement of the viewpoint position", "rotational movement of the viewpoint position"; such rendering conditions may be received from the operator via the input unit 131, or may be set by default; receives the rendering conditions from the control unit 135, and performs the volume rendering processing on the volume data).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5, 7, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Tsukagoshi as applied to claim 1 above, and further in view of Rolleston et al. (US 2012/0159321 A1) (hereinafter "Rolleston").
In regards to claim 5, Tsukagoshi teaches the apparatus of claim 1, but does not explicitly teach the apparatus, wherein the second unit length is a length related to a subject image included in at least one two-dimensional image among the plurality of two-dimensional images.
However, Rolleston teaches an apparatus, wherein the second unit length is a length related to a subject image included in at least one two-dimensional image (e.g. [0034]: dimensions of the object 206 and the rendering 202 (e.g. document, or packaging) may also be properties to determine a comparison or contrasting of sizes; other objects may be envisioned and selected among by the user or automatically selected; the objects indicating size may be a ruler or measuring stick having measuring marks, a pencil, a pair of hands, a person, an animal, a finger and/or any virtual three dimensional common object to contrast and convey the dimensions of the work product based on either a contrasting size or comparable size in relation to the object).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Tsukagoshi to display a size comparison, in the conventional manner taught by Rolleston, as both references deal with displaying objects and scale visualization. The motivation to combine would be to allow the user to visualize and understand the scale of a displayed object, such as by using a ruler or a person for comparison.
In regards to claim 7, the combination of Tsukagoshi and Rolleston also teaches an apparatus, wherein the object is an image including a figure and a numerical value indicating a length related to the figure (e.g. Rolleston as above, [0034]: dimensions of the object 206 and the rendering 202 (e.g. document, or packaging) may also be properties to determine a comparison or contrasting of sizes; the objects indicating size may be a ruler or measuring stick having measuring marks; Examiner's note: a ruler or measuring stick would have associated numerical values, as is known in the art).
In addition, the same rationale/motivation of claim 5 is used for claim 7.
In regards to claim 10, the combination of Tsukagoshi and Rolleston also teaches an apparatus, wherein the object includes an image showing a body existing in the real space (e.g. Rolleston as above, [0034]: dimensions of the object 206 and the rendering 202 (e.g. document, or packaging) may also be properties to determine a comparison or contrasting of sizes; the objects indicating size may be … a person, an animal, a finger).
In addition, the same rationale/motivation of claim 5 is used for claim 10.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Tsukagoshi as applied to claim 1 above, and further in view of Bevis et al. (US 2016/0147408 A1) (hereinafter "Bevis").
In regards to claim 6, Tsukagoshi teaches the apparatus of claim 1, but does not explicitly teach the apparatus, wherein the object is an image generated based on designated two-dimensional coordinates among the plurality of two-dimensional coordinates.
However, Bevis teaches an apparatus, wherein the object is an image generated based on designated two-dimensional coordinates among the plurality of two-dimensional coordinates (e.g. [0041],Fig.5: headset then at step 504 receives user input (e.g. one or more gestures, spoken commands and/or gaze-based commands) for specifying two or more points in space in the user's environment; at step 505 the headset determines the user-specified points by determining the most likely 3D coordinates of each user-specified point, based (at least in part) on a 3D mesh model; at step 506, the headset displays measurement tool to the user using the determined points as endpoints or vertices of the tool; see also [0031],Fig.3B: user provides input to the headset to specify two points 37, which in this example are the user's initial desired endpoints of the virtual measurement tool).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Tsukagoshi to display measurements, in the conventional manner taught by Bevis, as both references deal with displaying objects and size visualization. The motivation to combine would be to allow the user not only to measure, but also to visualize the measurement of the displayed object.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JED-JUSTIN IMPERIAL whose telephone number is (571)270-5807. The examiner can normally be reached Monday to Friday, 9am - 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JED-JUSTIN IMPERIAL/ Examiner, Art Unit 2616
/DANIEL F HAJNIK/ Supervisory Patent Examiner, Art Unit 2616