DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 21 is rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception in the form of an abstract idea, specifically a mental process, without significantly more. The claim recites “distinguishing an area…setting the boundaries…arranging the contours…” These limitations, under broadest reasonable interpretation, cover performance of the limitations in the mind, but for the recitation of generic computer components, and/or read on analyzing an image by visual inspection by a user. In this case, “distinguishing an area…setting the boundaries…arranging the contours” can be practically performed in the mind by a user/physician viewing an image, through visual inspection. If a claim limitation under its broadest reasonable interpretation covers performance of the limitation in the mind but for the recitation of generic computer components (i.e., a processor), then it falls within the “mental processes” grouping of abstract ideas.
Following step 2A of the two-prong analysis, these judicial exceptions are not integrated into a practical application because the claim merely provides instructions to implement an abstract idea and makes no mention of whether a generic computer (i.e., “using a computer processor”) is used to do so (See MPEP 2106.05(f)). Furthermore, the claims as written do not include elements to 1) improve the functioning of a computer (See MPEP 2106.05(a)); 2) effect a particular treatment or prophylaxis (See MPEP 2106.04(d)(2)); 3) use a particular machine (See MPEP 2106.05(b)); or 4) use the judicial exceptions in a meaningful way beyond generally linking the use to a particular technological environment (See MPEP 2106.05(h)).
Following step 2B of the two-prong analysis, the additional elements (i.e., generating a 3D lung model…based on…vertices) do not amount to significantly more than the judicial exception; the computer is simply the tool used to perform the abstract idea of “distinguishing an area…setting the boundaries…arranging the contours” (See MPEP 2106.05(f)).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-13, 18, and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 includes the language “method in which at least a portion of each step is performed by a processor”. It is not clear what is meant by only “a portion of each step” being performed by the processor. Under broadest reasonable interpretation, the method claim would require a processor to execute all of the claimed steps. The language “a portion” is considered indefinite, and it is suggested that the claim be amended to clarify this language. The dependent claims do not provide further clarity and therefore stand rejected under 112(b).
Claim 5 (dependent on claim 1) includes the term “the same preset ratio”. There is insufficient antecedent basis for this limitation in the claim.
Claims 18 and 19 (dependent on claim 17) include the terms “the same preset ratio” and “different ratios”, respectively. There is insufficient antecedent basis for these limitations in the claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6, 8-12, 14-16, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Sartor et al. (2020/0030033).
With respect to claims 1 and 14, Sartor et al. teach of a thoracoscopy simulation apparatus and method comprising generating a three-dimensional (3D) lung model where the 3D lung model [0051, 0052] is based on a chest CT image of a patient with a CT device [0040, 0047]. Sartor et al. teach of changing the lung model to generate a 3D atelectasis model or the collapsed lung model [0043]. Sartor et al. teach of generating a 3D thorax model using the 3D atelectasis model or volumetric data of the patient’s collapsed lung, which is utilized to closely match the collapsed lung model to the real-time view of the collapsed lung [0043]. Sartor et al. therefore teach of positioning the 3D thorax model in a virtual space and generating a simulation image based on the 3D thorax model and tracked locations of a thoracoscope and a surgical instrument or tool 80 [0043, fig. 11, 12]. Sartor et al. therefore teach of guiding a surgical tool 80 within the thoracic cavity and lungs with a tracking system 110 [0046].
With respect to claim 21, Sartor et al. teach of calculating a contrast difference within each window of a pixel size constituting a chest CT image where the 3D model is segmented to define the boundaries of various types of tissue and group together similar types of tissue such as low contrast density details of the lung and high contrast density details of the lung [0040], distinguishing an area with relatively low intensity as a lung by connecting boundaries of locations where a calculated value of contrast difference or the contrast density is equal to or greater than a preset reference or thresholds [0041, 0042, 0049], setting the boundaries as contours and extracting the contours from the chest CT image taken in plurality in three directions or applying segmentation to the 3-D reconstruction to define the boundaries of various types of tissues, including determination of an optimum threshold that separates tissue and background [0049, 0050], and arranging the boundaries of resolutions and distances between time points of the CT images [0054-0059]. Sartor et al. teach of generating a 3D lung model based on spatial vertices defining the contours of the mesh model where the resulting meshed 3D model forms a computational lung model [0061, 0063, 0069].
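For illustration only, the boundary-detection step mapped above (flagging locations where a local contrast difference within a pixel window meets a preset threshold) can be sketched as follows. The window size, threshold value, and NumPy representation are illustrative assumptions, not details taken from the Sartor reference.

```python
import numpy as np

def segment_boundaries(ct_slice, window=3, threshold=100.0):
    """Flag pixels whose local contrast (max minus min intensity within
    a sliding window) is equal to or greater than a preset threshold.
    Window size and threshold are hypothetical illustrative values."""
    h, w = ct_slice.shape
    boundary = np.zeros((h, w), dtype=bool)
    r = window // 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = ct_slice[i - r:i + r + 1, j - r:j + r + 1]
            if patch.max() - patch.min() >= threshold:
                boundary[i, j] = True  # contrast difference meets the preset reference
    return boundary
```

Connecting the flagged locations into closed contours (the claimed “setting the boundaries as contours”) would be a separate step not shown here.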
With respect to claims 2, 3, 15, and 16, Sartor et al. teach of the 3D collapsed lung model to include a direction of gravity based on the patient’s posture and generating the model by moving locations of some vertices included in the lung model based on the direction of gravity or the directional effect of gravity on the model based upon the angle of the operating table [0056, 0057, 0066]. Sartor et al. therefore teach of loading a plurality of vertices included in the lung model or the mesh [0041, 0042, 0057-0059, fig. 6, 8, 11], generating a ground for movement limit of the vertices based on the direction of gravity [0056], calculating distances from the ground to the vertices, moving the locations of the vertices in the direction of gravity based on the distances, and generating the collapsed lung model including the vertices or the mesh size requirements whose locations have been moved and performing surface rendering [0058, 0059, 0062, 0063, 0068-0070].
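The vertex-displacement steps mapped above (ground plane as a movement limit, distance of each vertex to the ground, movement along the gravity direction proportional to that distance) can be sketched as below. The proportionality rate, the plane representation, and the function signature are illustrative assumptions.

```python
import numpy as np

def collapse_vertices(vertices, gravity_dir, ground_point, rate=0.5):
    """Move each vertex toward a ground plane along the gravity
    direction by the same preset ratio of its distance to the plane;
    vertices at or below the ground do not move (movement limit).
    The rate value and plane parameterization are hypothetical."""
    g = np.asarray(gravity_dir, dtype=float)
    g = g / np.linalg.norm(g)
    up = -g                                   # plane normal opposing gravity
    v = np.asarray(vertices, dtype=float)
    dist = (v - ground_point) @ up            # distance from ground to each vertex
    dist = np.clip(dist, 0.0, None)           # ground acts as the movement limit
    return v + np.outer(rate * dist, g)       # farther vertices move farther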
With respect to claim 6, Sartor et al. teach of generating the 3D thorax model based on the locations of the ribs included in the chest CT image [0055].
With respect to claims 8-10 and 20, Sartor et al. teach of displaying a location of the pulmonary nodule on the 3D collapsed lung model [0049] and displaying a safe margin which indicates a removal range around the nodule or area of interest to be distinguished from surrounding tissues [0046, 0048, 0064]. Sartor et al. teach of displaying an interface or the monitoring equipment 30 for changing a size or display of the safe margin and changing the size or display of the safe margin based on an input through the interface or providing a real-time view of the patient’s collapsed lung [0072, 0078, 0080]. Sartor et al. also teach of receiving an input of a degree of change of the 3D collapsed lung model and changing the model based on that degree of change, such as where the degree of tilt of the operating table is taken into account when applying the directional effect of gravity [0041, 0052, 0056]. Sartor et al. teach of a memory storing code to cause the processor to display the location of the nodule in the collapsed lung model [0046-0049].
With respect to claim 11, Sartor et al. teach of simultaneously displaying a thoracoscopic image and the simulation image on different screens or user interface 20 and monitoring equipment 30 [0046], or where real-time video images of the patient’s collapsed lung within the thoracic cavity are displayed, wherein the collapsed lung model is superimposed over the displayed real-time video images of the patient’s collapsed lung [0024, 0043, 0065, 0066, 0069, 0071, 0072].
With respect to claim 12, Sartor et al. teach of comparing ratios of lung parts in the thoracoscopic image and the simulation image and changing the collapsed lung model based on the ratios of the lung parts [0054].
Sartor et al. do not teach of all the claimed elements in a single embodiment. It would have therefore been obvious to one of ordinary skill in the art to combine the elements from the different embodiments for more accurate modeling of the collapsed lung using CT data and reducing the probability of errors while advancing a surgical instrument to obtain positional data of the collapsed lung and accurately depict the patient’s lung volume within the thoracic cavity [0040].
Claims 4, 5, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Sartor et al. in view of Shekhar et al. (2010/0027861). Sartor et al. teach of the 3D collapsed lung model to include a direction of gravity based on the patient’s posture and generating the model by moving locations of some vertices included in the lung model based on the direction of gravity or the directional effect of gravity on the model based upon the angle of the operating table [0056, 0057, 0066]. Sartor et al. therefore teach of loading a plurality of vertices included in the lung model or the mesh [0041, 0042, 0057-0059, fig. 6, 8, 11], generating a ground for movement limit of the vertices based on the direction of gravity [0056], calculating distances from the ground to the vertices, moving the locations of the vertices in the direction of gravity based on the distances, and generating the collapsed lung model including the vertices or the mesh size requirements whose locations have been moved and performing surface rendering [0058, 0059, 0062, 0063, 0068-0070].
Sartor et al. do not teach of moving the vertex farther from the ground by a longer distance toward the ground by moving the distances for the respective vertices at the same preset ratio. In a related field of endeavor, Shekhar et al. teach of moving the vertices by only a user-defined threshold distance or until a predefined maximum number of iterations is reached [0066] to ensure that the adjusted mesh satisfies the end conditions [0126]. Shekhar et al. teach that at each iteration of refinement, the vertices move incrementally toward the edge and the process is repeated until an end condition is achieved, which is satisfied when none of the vertices move by more than a user-defined threshold distance or when a predefined maximum number of iterations is reached [0066, 0098, 0108]. Under broadest reasonable interpretation, Shekhar et al. teach of moving the vertex farther from the ground by the longer distance toward the ground by moving the distances for the respective vertices at the same preset ratio. It would have therefore been obvious to one of ordinary skill in the art to use the teaching by Shekhar et al. to modify Sartor et al. to more effectively and accurately segment out the tumor to treat effectively without damage to healthy surrounding tissue [Shekhar, 0010].
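The iterative end condition attributed to Shekhar et al. above (repeat incremental vertex moves until no vertex moves more than a user-defined threshold distance, or a maximum iteration count is reached) can be sketched as follows. The callback form of the per-iteration update and the default values are illustrative assumptions.

```python
import numpy as np

def refine_until_converged(vertices, step_fn, tol=1e-3, max_iters=100):
    """Repeat an incremental vertex update until no vertex moves by
    more than the user-defined threshold distance (tol), or until the
    predefined maximum number of iterations is reached. step_fn is a
    hypothetical callback returning the next vertex positions."""
    v = np.asarray(vertices, dtype=float)
    for _ in range(max_iters):
        nxt = step_fn(v)
        # end condition: largest per-vertex displacement within threshold
        if np.max(np.linalg.norm(nxt - v, axis=1)) <= tol:
            return nxt
        v = nxt
    return v  # maximum iteration count reached
```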
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sartor et al. in view of Shekhar et al. and further in view of Cao et al. (2023/0073340). The combination of the Sartor and Shekhar references teaches of forming the plurality of vertices in the 3D lung model and thorax but does not explicitly teach of forming organ vertices that include the stomach. In a related field of endeavor, Cao et al. teach of a fully-connected vertex reconstruction network including vertices of the three-dimensional body mesh defining key points such as key points on the surfaces of the back and stomach [0060, 0061]. Shekhar et al. teach that at each iteration of refinement, the vertices move incrementally toward the edge and the process is repeated until an end condition is achieved, which is satisfied when none of the vertices move by more than a user-defined threshold distance or when a predefined maximum number of iterations is reached [0066, 0098, 0108]. Sartor et al. teach of the 3D collapsed lung model to include a direction of gravity based on the patient’s posture and generating the model by moving locations of some vertices included in the lung model based on the direction of gravity or the directional effect of gravity on the model based upon the angle of the operating table [0056, 0057, 0066]. Sartor et al. therefore teach of loading a plurality of vertices included in the lung model or the mesh [0041, 0042, 0057-0059, fig. 6, 8, 11], generating a ground for movement limit of the vertices based on the direction of gravity [0056], calculating distances from the ground to the vertices, moving the locations of the vertices in the direction of gravity based on the distances, and generating the collapsed lung model including the vertices or the mesh size requirements whose locations have been moved and performing surface rendering [0058, 0059, 0062, 0063, 0068-0070].
Therefore, under broadest reasonable interpretation, the combination of references teaches of forming organ vertices including the stomach, thorax (as taught by Cao), and 3D lung model, calculating in real time distances between moving vertices of the lung model and the organ vertices based on the direction of gravity, and, when the distances are within a preset distance, switching a moving direction of the vertices of the lung model to the organ vertices around a lung model located at a relatively low position (as taught by Sartor and Shekhar). It would have therefore been obvious to one of ordinary skill in the art to use the teaching by Cao et al. to modify the previous teachings to construct a three-dimensional human body model corresponding to the body region based on target connection relationship between mesh vertices to ensure that the human body model is constructed efficiently and accurately and further help in diagnostics [Cao, 0030].
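The combined-teaching summary above (real-time distance checks between moving lung-model vertices and organ vertices, with a direction switch when within a preset distance) can be sketched as below. The preset distance and the specific redirection rule are illustrative assumptions, not details from the cited references.

```python
import numpy as np

def adjust_direction(lung_vertex, move_dir, organ_vertices, min_dist=2.0):
    """If a moving lung-model vertex comes within a preset distance of
    any organ vertex, switch its moving direction away from the nearest
    organ vertex; otherwise keep the current (gravity) direction.
    min_dist and the redirection rule are hypothetical."""
    d = np.linalg.norm(organ_vertices - lung_vertex, axis=1)
    nearest = int(np.argmin(d))
    if d[nearest] < min_dist:
        away = lung_vertex - organ_vertices[nearest]
        return away / np.linalg.norm(away)   # switch direction away from organ
    return move_dir                           # no organ nearby: keep direction
```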
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Sartor et al. in view of Friedlander et al. (2019/0357751). Sartor et al. do not explicitly teach of a wearable device worn by an operator that includes the simulation image for the surgical instrument. In a related field of endeavor, Friedlander et al. teach of transmitting an image to a display that can be worn by the operator for control of an endoscope to perform surgical operations [0100]. Friedlander et al. also teach of using the system and method to improve lung function [0196-0198] where the surgical instrument may be visualized using thoracoscopy [0207, 0240, 0241]. It would have therefore been obvious to one of ordinary skill in the art to use the teaching by Friedlander et al. to modify Sartor et al. to facilitate better visualization within the cavity of the subject [Friedlander, 0100].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BAISAKHI ROY whose telephone number is (571)272-7139. The examiner can normally be reached Monday-Friday 7-3 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Koharski can be reached at 571-272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
BR
/BAISAKHI ROY/Primary Examiner, Art Unit 3797