DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-10, 12-13, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Ichihashi (PGPUB: 20210290977 A1) in view of Yao (EP 3869460 A1), and further in view of SEAH (WO 2022006621 A1).
Regarding claims 1 and 22. Ichihashi teaches a computer-implemented method for monitoring anatomic position of a human subject for a radiotherapy treatment session, the method comprising:
receiving position information corresponding to observed positions of a tracked anatomical area of a patient, the position information observed during the radiotherapy treatment session (see Fig. 2, paragraph 30, the position of the tumor region will be simply referred to as a “tumor position”. The tumor position may be defined by a set of coordinates of a plurality of pixels constituting the tumor region included in the MR image, or may be defined by coordinates of a specific pixel. The tumor position may be defined by a distance and direction from an anatomical reference point included in the MR image to a reference point of the tumor region);
providing the position information as an input to a trained model, wherein the trained model is trained with temporal sequences of observed anatomical positions from training data (see paragraph 69, the present embodiment is not limited to this. For example, the machine learning apparatus 4 may input an MR image and an evaluation result of a tumor position serving as truth data to a neural network as input data. The evaluation result of the tumor position is a result of accuracy evaluation on identification of the tumor position);
determining an estimated position of the tracked anatomical area of the patient at a future time, based on output of the trained model (see Fig. 4, paragraph 55, the trained model is a neural network trained to, in response to an input of an MR image of a present time phase, output a tumor position of a future time phase, which is a predetermined time phase after the present time phase; the tumor position of the future time phase may be output as a sequence of image coordinates of the pixels of the tumor region, as image data in which the tumor region is rendered, or as a sequence of real coordinates of the pixels of the tumor region); and
controlling the radiotherapy treatment session based on the estimated position of the tracked anatomical area of the patient (see Fig. 8, paragraph 75, After step SA2, through implementation of the radiotherapy control function 513, the processing circuitry 51 determines whether or not the tumor position of the future time phase (N+10) % is outside the irradiation range (step SA3). When it is determined that the tumor position of the future time phase (N+10) % is not outside the irradiation range (NO in step SA3), the processing circuitry 51 proceeds to step SA1, obtains an MR image of the next present time phase N %, and repeats steps SA1 to SA3; when it is determined that the tumor position of the future time phase (N+10) % is outside the irradiation range in step SA3 (YES in step SA3), the processing circuitry 51 immediately stops the irradiation through implementation of the radiotherapy control function 513 (step SA4)).
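The SA1-SA4 control flow quoted above (obtain an MR image, predict the tumor position of the future time phase, check the irradiation range, and immediately stop irradiation if the predicted position falls outside it) can be sketched as follows. This is an illustrative approximation only, not code from the cited reference; `acquire_mr_image`, `predict_future_position`, and `stop_irradiation` are hypothetical stand-ins.

```python
# Illustrative sketch of the SA1-SA4 gating loop described in Ichihashi
# (Fig. 8, paragraph 75). All function names are hypothetical stand-ins.

def within_irradiation_range(position, irr_range):
    """Check whether a predicted (x, y, z) tumor position lies inside
    the axis-aligned irradiation range ((lo, hi) per axis)."""
    return all(lo <= p <= hi for p, (lo, hi) in zip(position, irr_range))

def control_loop(acquire_mr_image, predict_future_position,
                 stop_irradiation, irr_range, max_steps=1000):
    for _ in range(max_steps):
        image = acquire_mr_image()                    # SA1: present time phase N%
        future_pos = predict_future_position(image)   # SA2: predict phase (N+10)%
        if not within_irradiation_range(future_pos, irr_range):
            stop_irradiation()                        # SA4: stop beam immediately
            return "stopped"
    return "completed"                                # SA3 never triggered (NO branch)
```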
However, Ichihashi does not expressly teach receiving position information.
Yao teaches that an aspect of the present disclosure relates to systems and methods for ROI positioning. The systems may obtain an image (e.g., a tracker image obtained during each tracker scan) of an object (e.g., a patient) captured by an imaging device and extract image information (e.g., pixel values of pixels in the image) of the image. Further, the systems may obtain feature information (e.g., a size of the ROI, a shape of the ROI, an anatomical feature of the ROI, imaging parameter information associated with the ROI, pixel information associated with the ROI) of an ROI (e.g., a lesion) in the object. According to the image information of the image and the feature information of the ROI, the systems may determine position information of the ROI in the image using a positioning model (e.g., a machine learning model, a regression model) (see paragraph 28).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ichihashi with Yao's teaching that the systems may obtain an image (e.g., a tracker image obtained during each tracker scan) of an object (e.g., a patient) captured by an imaging device, extract image information (e.g., pixel values of pixels in the image) of the image, and, according to the image information of the image and the feature information of the ROI, determine position information of the ROI in the image using a positioning model, as receiving position information. Therefore, combining the elements of the prior art according to known methods and techniques, such as determining position information of the ROI in the image, would yield predictable results.
The combination does not expressly teach wherein the trained generative model comprises a transformer deep learning neural network that uses self-attention to process data.
SEAH teaches that the method comprises the use of fast transformers with linear attention. Fast transformers are defined as adopting a linear transformer model, which enables reduction of memory requirements and linear scaling with respect to the context length. In other words, the quality of text generated by a fast transformer is comparable to that of a conventional transformer, and the fast transformer is significantly more efficient in terms of inference time and memory. Another benefit of the method comprising the use of a fast transformer is the reduction of the O(n²) computational complexity of the standard key-value attention models used in a Generative Pre-trained Transformer (GPT)/standard self-attention mechanism to O(n) in both time and space with respect to sequence length, n denoting the sequence length. Fast transformers change the attention from conventional softmax attention to a feature-map-based dot product attention (see paragraph 107).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination with SEAH's method comprising the use of a fast transformer, which reduces the O(n²) computational complexity of the standard key-value attention used in a Generative Pre-trained Transformer (GPT)/standard self-attention mechanism to O(n) in both time and space with respect to the sequence length n, in order to provide a trained generative model comprising a transformer deep learning neural network that uses self-attention to process data. Therefore, combining the elements of the prior art according to known methods and techniques would yield predictable results.
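The complexity reduction SEAH describes (replacing softmax attention with a feature-map dot-product so that attention scales linearly in sequence length) can be illustrated with a generic sketch. This is not SEAH's disclosed implementation; the feature map φ(x) = elu(x) + 1 is an assumption borrowed from the common linear-transformer formulation.

```python
import numpy as np

def feature_map(x):
    # phi(x) = elu(x) + 1, a common positive feature map for linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """O(n) attention in sequence length: associativity lets us compute
    phi(K)^T V, a (d x d_v) matrix independent of n, instead of the
    full n x n attention-score matrix."""
    Qf, Kf = feature_map(Q), feature_map(K)
    kv = Kf.T @ V                    # (d, d_v), cost independent of n
    z = Qf @ Kf.sum(axis=0)          # (n,) per-row normalizer
    return (Qf @ kv) / z[:, None]

def softmax_attention(Q, K, V):
    """Standard O(n^2) softmax attention, for comparison."""
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ V
```

Because the implicit attention weights are nonnegative and sum to one in each row, both variants map a constant value matrix to itself, while only the linear variant avoids materializing the n × n score matrix.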
Regarding claim 2. The combination teaches the method of claim 1, wherein the position information is based on image data captured during the radiotherapy treatment session (see Ichihashi, paragraph 53, through implementation of the tumor position estimation function 512, the processing circuitry 51 of the irradiation control apparatus 5 inputs an MR image of a first time phase to the trained model and thereby estimates a position of a tumor region of the patient of a second time phase which is a predetermined time phase after the first time phase. The first time phase represents a time of acquiring or generating an MR image to be processed).
Regarding claim 3. The combination teaches the method of claim 2, the method further comprising:
generating the position information by extracting features from multiple images of the image data (see Yao, paragraph 32, the processing device 120 may obtain an image of an object captured by the imaging device 110 and extract image information of the image. Further, the processing device 120 may obtain feature information of an ROI in the object. According to the image information of the image and the feature information of the ROI, the processing device 120 may determine position information of the ROI in the image using a positioning model).
Regarding claim 6. The combination teaches the method of claim 1, wherein the estimated position is further determined based on monitoring signals captured during the radiotherapy treatment session from one or more sensors (see Ichihashi, paragraph 105, the respiratory waveform is an example of information for ascertaining body movement of a patient. Instead of the respiratory waveform, a numerical value or symbol indicating the present time phase may be input, or a numerical value or symbol indicating a respiratory level corresponding to the present time phase may be input).
Regarding claim 7. The combination teaches the method of claim 6, wherein the monitoring signals include a measurement of respiratory motion observed at a prior time of the radiotherapy treatment session (see Ichihashi, Fig. 9, paragraph 80, the respiratory waveform I12 of the patient may be displayed on the display screen I1. The respiratory waveform I12 may be measured in real time by the respiratory sensor or the like, and displayed).
Regarding claim 8. The combination teaches the method of claim 1, wherein the output of the trained generative model represents a prediction of respiratory motion to occur at the future time during the radiotherapy treatment session, and wherein the estimated position of the tracked anatomical area of the patient corresponds to the prediction of respiratory motion (see Ichihashi, Fig. 9, paragraph 80, the latest respiratory time phase is displayed in real time so that the respiratory waveform flows from left to right with time. A mark M11 representing the present time phase N % and a mark M12 representing the future time phase (N+10) % may be displayed on the respiratory waveform I12) (see SEAH, paragraph 107).
Regarding claim 9. The combination teaches the method of claim 8, wherein the observed positions of the tracked anatomical area of the patient are captured during multiple observed breathing cycles, and wherein the estimated position of the tracked anatomical area of the patient corresponds to multiple predicted breathing cycles (see Ichihashi, Fig. 7, paragraph 66, marks indicating time phases of MR image collection of respective types of imaging timing are shown on a respiratory waveform. As shown in FIG. 7, if one respiratory cycle is 3.6 seconds for example, only about ten training samples can be acquired by one type of imaging timing. To densely collect training samples in one respiratory cycle of a patient, the MRI integrated radiotherapy apparatus 6 performs dynamic imaging at different types of imaging timing with respect to one respiratory cycle).
Regarding claim 10. The combination teaches the method of claim 1, wherein the trained generative model is re-trained at a plurality of update intervals during the radiotherapy treatment session, based on the observed positions of the tracked anatomical area of a patient (see Ichihashi, Fig. 7, paragraph 69, the machine learning apparatus 4 inputs a difference (error) between the estimated tumor position and the truth tumor position to the untrained model and performs back propagation, thereby calculating a gradient vector. Next, the machine learning apparatus 4 updates parameters, such as a weight and a bias, of the untrained model based on the gradient vector. A trained model is completed by repeating the forward propagation and back propagation for a number of training samples and updating the parameters. The trained model is supplied to the irradiation control apparatus 5) (see SEAH, paragraph 107).
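The re-training cycle described above (forward propagation, computation of the error against the truth position, back propagation of a gradient, and a parameter update repeated over training samples) can be sketched with a deliberately minimal model. This is an illustrative stand-in only; the cited references use neural networks, whereas the sketch below uses a single linear layer so the update rule is visible.

```python
import numpy as np

class OnlinePositionPredictor:
    """Minimal sketch of re-training a position predictor at update
    intervals from newly observed positions. Illustrative only: a
    linear predictor over the last `window` observed positions,
    updated by one gradient step on squared error per observation
    (analogous to a forward-pass / back-propagation / parameter-update
    cycle)."""

    def __init__(self, window=3, lr=0.01):
        self.w = np.zeros(window)   # weights over the last `window` positions
        self.window = window
        self.lr = lr

    def predict(self, history):
        x = np.asarray(history[-self.window:])
        return float(self.w @ x)

    def update(self, history, observed_next):
        # error = prediction - truth; gradient of 0.5*error^2 w.r.t. w is error*x
        x = np.asarray(history[-self.window:])
        error = self.w @ x - observed_next
        self.w -= self.lr * error * x
```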
Regarding claim 12. The combination teaches the method of claim 1, wherein the observed positions of anatomy used to train the trained generative model are observed from the patient (see Ichihashi, Fig. 6, paragraph 62, the machine learning apparatus 4 generates a trained model by training an untrained model based on a plurality of training samples. The untrained model is a neural network in which parameters are set to initial values. Each training sample is constituted by a combination of input data and truth data. As the input data, an MR image of a first time phase (also referred to as an “input MR image”) is used. As the truth data, a tumor position of a second time phase (also referred to as a “truth tumor position”) is used) and multiple other human subjects (see Ichihashi, Fig. 6, paragraph 67, the training samples need not be all the same patient's, and may be various patients'. The time phase lag between input data and truth data is desirably the same for all training samples, but may vary there among as long as it is within a tolerable range) (see SEAH, paragraph 107).
Regarding claim 13. The combination teaches the method of claim 1, wherein the tracked anatomical area corresponds to at least one region of interest or at least one organ at risk defined for the radiotherapy treatment session (see Ichihashi, Fig. 8 , paragraph 71, the irradiation control shown in FIG. 8 is performed in parallel with radiotherapy by the MRI integrated radiotherapy apparatus 6. In radiotherapy, the patient P is placed on the top plate 84, and the patient P and top plate 84 are positioned so that the tumor region is positioned within an irradiation range set in the radiotherapy plan. The irradiation range of the patient P positioned at the aforementioned position is irradiated by the radiotherapy mechanism 8).
Regarding claim 23. The combination teaches the non-transitory computer-readable storage medium of claim 22, wherein the position information is based on image data captured during the radiotherapy treatment session (see Ichihashi, paragraph 53, through implementation of the tumor position estimation function 512, the processing circuitry 51 of the irradiation control apparatus 5 inputs an MR image of a first time phase to the trained model and thereby estimates a position of a tumor region of the patient of a second time phase which is a predetermined time phase after the first time phase. The first time phase represents a time of acquiring or generating an MR image to be processed), and
wherein the operations further comprise:
generating the position information by extracting features from multiple images of the image data (see Yao, paragraph 32, the processing device 120 may obtain an image of an object captured by the imaging device 110 and extract image information of the image. Further, the processing device 120 may obtain feature information of an ROI in the object. According to the image information of the image and the feature information of the ROI, the processing device 120 may determine position information of the ROI in the image using a positioning model).
Claims 4-5, 14, and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Ichihashi (PGPUB: 20210290977 A1) in view of Yao (EP 3869460 A1), in view of SEAH (WO 2022006621 A1), and further in view of Lachaine (CN 107072628 A).
Regarding claims 4 and 24. The combination teaches the method of claim 2, wherein the position information indicates a position of the tracked anatomical area of the patient in a 3D reference volume (see Yao, paragraph, the processing device 120 (e.g., the extraction module 420) (e.g., the processing circuit(s) of the processor 210) may extract image information of the image. The image information of the image may include element values of elements in the image. An element may be a pixel if the image is two-dimensional (2D) or a voxel if the image is three-dimensional (3D)).
However, the combination does not expressly teach wherein the estimated position is based on relative motion of the tracked anatomical area from translation or rotation in a coordinate space of the 3D reference volume.
Lachaine teaches that 2D/3D registration may be performed to find the relative movement between the current imaging slice and the reference volume. In this registration technique, the 3D image may represent a "moving image" and the 2D slice may represent a "fixed image." Generally, the position of the target can be identified in the 3D image, such as in a three-dimensional coordinate system, and the relative movement may be applied to an earlier target position estimate to provide an updated position of the target. This estimation can be executed without needing to determine deformation, and even without using a rotation parameter, such as to increase the execution efficiency of the optimization technique; for example, a simplifying assumption can be used when registering the 3D reference image to the one or more 2D imaging slices (see page 28, lines 16-25).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination with Lachaine's teaching that the 3D image may represent a "moving image" and the 2D slice a "fixed image," that the position of the target can be identified in the 3D image, such as in a three-dimensional coordinate system, and that the relative movement may be applied to an earlier target position estimate to provide an updated position of the target, without determining deformation and even without using a rotation parameter, as the estimated position being based on relative motion of the tracked anatomical area from translation or rotation in a coordinate space of the 3D reference volume.
Therefore, combining the elements of the prior art according to known methods and techniques, such as applying the relative movement in a three-dimensional coordinate system to an earlier target position estimate to update the position of the target, would yield predictable results.
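The translation-only simplification Lachaine describes (registering a 2D slice to a 3D reference volume without rotation or deformation parameters) can be sketched as a sum-of-squared-differences search over in-plane shifts. This is an illustrative approximation, not Lachaine's disclosed algorithm; the brute-force grid search stands in for whatever optimization technique the reference actually uses.

```python
import numpy as np

def register_slice_translation(volume, slice2d, z_index, search=3):
    """Translation-only 2D/3D registration sketch: find the in-plane
    shift (dy, dx) minimizing the sum of squared differences between a
    2D imaging slice and a slice of the 3D reference volume. Rotation
    and deformation are deliberately omitted, per the simplifying
    assumption described in Lachaine. Illustrative only."""
    ref = volume[z_index]
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
            ssd = np.sum((shifted - slice2d) ** 2)
            if ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift  # relative movement to apply to the target estimate
```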
Regarding claims 5 and 25. The combination teaches the method of claim 4, wherein the output of the trained generative model provides transformation parameters that indicate the relative motion of the tracked anatomical area of the patient relative to the 3D reference volume (see Ichihashi, Fig. 6, paragraph 62, the machine learning apparatus 4 generates a trained model by training an untrained model based on a plurality of training samples. The untrained model is a neural network in which parameters are set to initial values. Each training sample is constituted by a combination of input data and truth data. As the input data, an MR image of a first time phase (also referred to as an “input MR image”) is used. As the truth data, a tumor position of a second time phase (also referred to as a “truth tumor position”) is used. For example, the first time phase is N % corresponding to a present time phase, and the second time phase is (N+10) % corresponding to a future time phase. The tumor position of the future time phase is identified from actual MR images of future time phases generated by the MRI integrated radiotherapy apparatus 6 performing MR imaging on a patient) (see SEAH, paragraph 107).
Regarding claim 14. The combination teaches the method of claim 1, wherein controlling the radiotherapy treatment session modifies operation of a radiotherapy machine based on motion caused by the estimated position of the tracked anatomical area, including one or more of:
changing a position of a radiotherapy beam from the radiotherapy machine; changing a shape of a radiotherapy beam from the radiotherapy machine; or
controlling a radiotherapy from the radiotherapy machine (see Ichihashi, Fig. 5, paragraph 59, the MRI integrated radiotherapy apparatus 6 performs MR imaging on a patient, identifies a position of a tumor by image processing, and controls irradiation in accordance with the position of the tumor, during irradiation).
However, the combination does not expressly teach gating a radiotherapy beam.
Lachaine teaches that, at 1310, the updated position information of the target determined at 1308 can be used to update the treatment site (e.g., an area or volume targeted by a radiation treatment beam). Additionally or alternatively, at 1312, the treatment beam steering can optionally be gated, such that treatment delivery is allowed when the treatment is properly aligned with the updated estimated target position. In this way, the imaging slice and the earlier-collected volumetric reference image can be used to adaptively control the radiation therapy (see Fig. 13, page 33, lines 21-26).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination with Lachaine's teaching of using the updated position information of the determined target to update the treatment site (e.g., an area or volume targeted by a radiation treatment beam) and gating the treatment beam steering such that treatment delivery is allowed when properly aligned with the updated estimated target position, as gating a radiotherapy beam. Therefore, combining the elements of the prior art according to known methods and techniques, such as gating the treatment beam, would yield predictable results.
Claims 32-33 are rejected under 35 U.S.C. 103 as being unpatentable over Ichihashi (PGPUB: 20210290977 A1) in view of Yao (EP 3869460 A1), in view of SEAH (WO 2022006621 A1), and further in view of Yan (WO 2019019188 A1).
Regarding claims 32 and 33. The combination teaches the method of claim 2,
wherein the position information comprises data representing locations of the tracked anatomical area, and wherein the position information is distinct from the image data captured during the radiotherapy treatment session (see Ichihashi, Fig. 4, paragraph 55, the trained model is a neural network trained to, in response to an input of an MR image of a present time phase, output a tumor position of a future time phase, which is a predetermined time phase after the present time phase; the tumor position of the future time phase may be output as a sequence of image coordinates of the pixels of the tumor region, as image data in which the tumor region is rendered, or as a sequence of real coordinates of the pixels of the tumor region. The output form is not limited to the above, and the tumor position of the future time phase may be output in any form as long as it can be recognized).
However, the combination does not expressly teach spatial coordinate data representing locations of the tracked anatomical area.
Yan teaches determining a first reference image sequence and a second reference image sequence according to the 4D image of the tumor region; the 4D image of the tumor region is an image sequence including spatial position coordinates and time information of the tumor region. For example, determining the first collection point and the second collection point, that is, determining the position of each collection point relative to the tumor (see page 20, lines 14-19).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination with Yan's teaching that the 4D image of the tumor region is an image sequence including spatial position coordinates and time information of the tumor region, for example, determining the first collection point and the second collection point, that is, the position of each collection point relative to the tumor, in order to provide spatial coordinate data representing locations of the tracked anatomical area. Therefore, combining the elements of the prior art according to known methods and techniques would yield predictable results.
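Yan's notion of a 4D image sequence carrying both spatial position coordinates and time information can be illustrated with a simple data structure. This is an illustrative stand-in only, not the format disclosed in the reference; `Frame4D` and `trajectory` are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Frame4D:
    """One time phase of a 4D tumor image sequence: a timestamp plus
    spatial (x, y, z) coordinates of the tracked region. Illustrative
    stand-in for the sequence described in Yan."""
    t: float
    coords: tuple  # (x, y, z) of the tumor region reference point

def trajectory(frames):
    """Extract the spatial-coordinate track of the tumor, ordered by time."""
    return [f.coords for f in sorted(frames, key=lambda f: f.t)]
```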
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 and 22 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIN JIA whose telephone number is (571)270-5536. The examiner can normally be reached 9:00 am-7:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571)272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIN JIA/Primary Examiner, Art Unit 2663