DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 05/07/2024, 05/07/2024, and 07/11/2024 have been considered by the examiner.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because it contains the phrases “The present disclosure relates to”, “the disclosure relates to”, and “all the embodiments refer to”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a 3D scanning reconstruction system configured to” in claim 2, “said quality module is configured to” in claim 8, and “training module configured to” in claims 10-11.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12020449. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-20 of instant application 18657087 are anticipated by claims 1-20 of U.S. Patent No. 12020449, respectively.
Instant Application 18657087
U.S. Patent No. 12020449
1. A patient motion tracking system for automatic generation of a region of interest on a 3D surface of a patient, the system comprising: a memory comprising stored region of interest (ROI) descriptive data; a 3D surface generation processor configured to obtain a 3D surface comprising at least a target area; a ROI generation processor configured to utilize said stored ROI descriptive data and said 3D surface to generate a ROI labelled 3D surface, wherein the system is configured to use said ROI labelled 3D surface to track motion of a patient during positioning and/or treatment of said patient in the treatment room.
1. A patient motion tracking system for automatic generation of a region of interest on a 3D surface of a patient, the system comprising: a memory comprising stored region of interest (ROI) descriptive data; a 3D surface generation processor configured to obtain a 3D surface comprising at least a target area; a ROI generation processor configured to utilize said stored ROI descriptive data and said 3D surface to output a ROI labelled 3D surface to a motion tracking module, wherein said ROI labelled 3D surface is utilized by the motion tracking module to track motion of a patient during positioning and/or treatment of said patient in the treatment room.
2. The system according to claim 1, wherein the system comprises a 3D scanning reconstruction system configured to be arranged in a radiotherapy treatment room and configured to generate an input surface, the 3D surface being generated from said input surface.
2. System according to claim 1, wherein the system comprises a 3D scanning reconstruction system configured to be arranged in a radiotherapy treatment room and configured to generate an input surface, the 3D surface being generated from said input surface.
3. The system according to claim 2, wherein the input surface is configured as a series of 2D image frames of at least said target area of said patient and said 3D surface generation processor is configured to generate from said 2D image frames said 3D surface.
3. System according to claim 2, wherein the input surface is configured as a series of 2D image frames of at least said target area of said patient and said 3D surface generation processor is configured to generate from said 2D image frames said 3D surface.
4. The system according to claim 3, wherein the system furthermore comprises one or more cameras configured to be arranged in the radiotherapy treatment room and to obtain said series of 2D image frames of at least the target area of the patient.
4. System according to claim 3, wherein the system furthermore comprises one or more cameras configured to be arranged in the radiotherapy treatment room and to obtain said series of 2D image frames of at least the target area of the patient.
5. The system according to claim 1, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
9. System according to claim 1, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
6. The system according to claim 5, wherein the annotated reference ROI is based on the identification of one or more landmarks applied to each of reference surfaces, wherein the landmarks represent uniquely identifiable portions of the reference target area surface.
10. System according to claim 9, wherein the annotated reference ROI is based on the identification of one or more landmarks applied to each of reference surfaces, wherein the landmarks represent uniquely identifiable portions of the reference target area surface.
7. The system according to claim 1, wherein said ROI labelled 3D surface is configured to be input to a display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface.
16. System according to claim 1, wherein said ROI labelled 3D surface is configured to be input to a display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface.
8. The system according to claim 1, wherein said ROI labelled 3D surface is read into a quality module of said system, wherein said quality module is configured to estimate one or more geometric measurements of the 3D data in said ROI labelled 3D surface and to compare said estimated geometric measurements with one or more set thresholds.
17. System according to claim 1, wherein said ROI labelled 3D surface is read into a quality module of said system, wherein said quality module is configured to estimate one or more geometric measurements of the 3D data in said ROI labelled 3D surface and to compare said estimated geometric measurements with one or more set thresholds.
9. The system according to claim 1, wherein the ROI descriptive data comprises a template surface and a template ROI, wherein said template surface and template ROI are input to said ROI generation processor, which is configured to align and warp said template ROI and template surface with the 3D surface to create at least a warped ROI, and subsequently to transfer said warped ROI to said 3D surface.
18. System according to claim 1, wherein the ROI descriptive data comprises a template surface and a template ROI, wherein said template surface and template ROI are input to said ROI generation processor, which is configured to align and warp said template ROI and template surface with the 3D surface to create at least a warped ROI, and subsequently to transfer said warped ROI to said 3D surface.
10. The system according to claim 9, wherein the system furthermore comprises a training module configured to generate and output to said memory, the template surface and the template ROI.
19. System according to claim 18, wherein the system furthermore comprises a training module configured to generate and output to said memory, the template surface and the template ROI.
11. The system according to claim 10 wherein said training module comprises two or more reference target surfaces each having an annotated reference ROI applied thereto, wherein the training module is configured to align the two or more reference surfaces and subsequently to calculate an average of said aligned reference surfaces to produce said template surface, and calculate an average of said annotated ROIs to produce said template ROI.
20. System according to claim 19 wherein said training module comprises two or more reference target surfaces each having an annotated reference ROI applied thereto, wherein the training module is configured to align the two or more reference surfaces and subsequently to calculate an average of said aligned reference surfaces to produce said template surface, and calculate an average of said annotated ROIs to produce said template ROI.
12. The system according to claim 5, wherein the ROI descriptive data is configured as a ROI model, which ROI model is trained in a machine learning processor prior to being stored in said memory, wherein the ROI model is trained on the basis of said one or more reference target surfaces each having an annotated reference ROI applied thereto.
11. System according to claim 9, wherein the ROI descriptive data is configured as a ROI model, which ROI model is trained in a machine learning processor prior to being stored in said memory, wherein the ROI model is trained on the basis of said one or more reference target surfaces each having an annotated reference ROI applied thereto.
13. The system according to claim 12, wherein the ROI generation processor is configured to utilize the 3D surface as input to said ROI model, and to output said ROI labelled 3D surface to a motion tracking module.
12. System according to claim 11, wherein the ROI generation processor is configured to utilize the 3D surface as input to said ROI model, and to output said ROI labelled 3D surface to said motion tracking module.
14. The system according to claim 12, wherein said reference surfaces are configured as depth map and normal map representations of the reference surfaces, and wherein the ROI model is configured to utilize the 3D surface and to classify in said depth map and normal map representations, vertices as being inside or outside of the region of interest as defined by the trained model.
13. System according to claim 11, wherein said reference surfaces are configured as depth map and normal map representations of the reference surfaces, and wherein the ROI model is configured to utilize the 3D surface and to classify in said depth map and normal map representations, vertices as being inside or outside of the region of interest as defined by the trained model.
15. The system according to claim 5, wherein said ROI generation processor furthermore comprises a landmark generation model, wherein said landmark generation model is trained in a machine learning processor prior to being stored in said ROI generation processor, wherein the landmark generation model is trained on the basis of said one or more reference target surfaces each having annotated landmarks applied thereto.
14. System according to claim 9, wherein said ROI generation processor furthermore comprises a landmark generation model, wherein said landmark generation model is trained in a machine learning processor prior to being stored in said ROI generation processor, wherein the landmark generation model is trained on the basis of said one or more reference target surfaces each having annotated landmarks applied thereto.
16. The system according to claim 15, wherein the landmark generation model outputs a representation of landmarks onto the input surface, thereby creating a landmark labelled 3D surface, wherein the landmark labelled 3D surface is utilized in said ROI generation processor together with a template surface and a template ROI to align and warp said template ROI and template surface with the landmark labelled 3D surface to create an aligned and warped ROI, and subsequently to transfer said warped ROI to said 3D surface to output the ROI labelled 3D surface.
15. System according to claim 14, wherein the landmark generation model outputs a representation of landmarks onto the input surface, thereby creating a landmark labelled 3D surface, wherein the landmark labelled 3D surface is utilized in said ROI generation processor together with a template surface and a template ROI to align and warp said template ROI and template surface with the landmark labelled 3D surface to create an aligned and warped ROI, and subsequently to transfer said warped ROI to said 3D surface to output the ROI labelled 3D surface.
17. The system according to claim 2, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
7. System according to claim 2, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
18. The system according to claim 3, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
6. System according to claim 3, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
19. The system according to claim 4, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
5. System according to claim 4, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
20. The system according to claim 2, wherein said ROI labelled 3D surface is configured to be input to a display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface.
8. System according to claim 2, wherein said ROI labelled 3D surface is configured to be input to a display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface.
Claims 1 and 11-12 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-3 of U.S. Patent No. 11688083. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1 and 11-12 of instant application 18657087 are anticipated by claims 1-3 of U.S. Patent No. 11688083, respectively.
Instant Application 18657087
U.S. Patent No. 11688083
1. A patient motion tracking system for automatic generation of a region of interest on a 3D surface of a patient, the system comprising: a memory comprising stored region of interest (ROI) descriptive data; a 3D surface generation processor configured to obtain a 3D surface comprising at least a target area; a ROI generation processor configured to utilize said stored ROI descriptive data and said 3D surface to generate a ROI labelled 3D surface, wherein the system is configured to use said ROI labelled 3D surface to track motion of a patient during positioning and/or treatment of said patient in the treatment room.
1. A patient motion tracking system for automatic generation of a region of interest on a 3D surface of a patient positioned in a radiotherapy treatment room, the system comprising: a memory comprising stored region of interest (ROI) descriptive data; a training module, including at least one processor, configured to generate a template surface and template ROI from the stored ROI descriptive data; a 3D surface generation processor configured to utilize an input surface, and generate a 3D surface from said input surface, wherein the 3D surface comprises at least a target area of the input surface; a ROI generation processor configured to utilize said template surface and template ROI generated by said training module and said 3D surface to output a ROI labelled 3D surface to a display and a motion tracking module, wherein said ROI labelled 3D surface is utilized by the motion tracking module to track motion of a patient during positioning and/or treatment of said patient in the treatment room.
9. The system according to claim 1, wherein the ROI descriptive data comprises a template surface and a template ROI, wherein said template surface and template ROI are input to said ROI generation processor, which is configured to align and warp said template ROI and template surface with the 3D surface to create at least a warped ROI, and subsequently to transfer said warped ROI to said 3D surface.
10. The system according to claim 9, wherein the system furthermore comprises a training module configured to generate and output to said memory, the template surface and the template ROI.
11. The system according to claim 10 wherein said training module comprises two or more reference target surfaces each having an annotated reference ROI applied thereto, wherein the training module is configured to align the two or more reference surfaces and subsequently to calculate an average of said aligned reference surfaces to produce said template surface, and calculate an average of said annotated ROIs to produce said template ROI.
3. System according to claim 1, wherein the ROI descriptive data comprises two or more reference target surfaces each having an annotated reference ROI applied thereto, and the training module is configured to align the two or more reference surfaces and subsequently to calculate an average of said aligned reference surfaces to produce said template surface, and calculate an average of said annotated ROIs to produce said template ROI.
5. The system according to claim 1, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
12. The system according to claim 5, wherein the ROI descriptive data is configured as a ROI model, which ROI model is trained in a machine learning processor prior to being stored in said memory, wherein the ROI model is trained on the basis of said one or more reference target surfaces each having an annotated reference ROI applied thereto.
2. System according to claim 1, wherein the ROI descriptive data is configured as a ROI model, which ROI model is trained in a machine learning processor of said training module on the basis of one or more reference target surfaces each having an annotated reference ROI applied thereto.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11250579. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-20 of instant application 18657087 are anticipated by claims 1-20 of U.S. Patent No. 11250579, respectively.
Instant Application 18657087
U.S. Patent No. 11250579
1. A patient motion tracking system for automatic generation of a region of interest on a 3D surface of a patient, the system comprising: a memory comprising stored region of interest (ROI) descriptive data; a 3D surface generation processor configured to obtain a 3D surface comprising at least a target area; a ROI generation processor configured to utilize said stored ROI descriptive data and said 3D surface to generate a ROI labelled 3D surface, wherein the system is configured to use said ROI labelled 3D surface to track motion of a patient during positioning and/or treatment of said patient in the treatment room.
1. A patient motion tracking system for automatic generation of a region of interest on a 3D surface of a patient positioned in a radiotherapy treatment room, the system comprising: a memory comprising stored region of interest (ROI) descriptive data; a 3D surface generation processor configured to utilize an input surface, and generate a 3D surface from said input surface, wherein the 3D surface comprises at least a target area of the input surface; a ROI generation processor configured to utilize said stored ROI descriptive data and said 3D surface to output a ROI labelled 3D surface to a display and a motion tracking module, wherein said ROI labelled 3D surface is utilized by the motion tracking module to track motion of a patient during positioning and/or treatment of said patient in the treatment room.
2. The system according to claim 1, wherein the system comprises a 3D scanning reconstruction system configured to be arranged in a radiotherapy treatment room and configured to generate an input surface, the 3D surface being generated from said input surface.
2. System according to claim 1, wherein the system comprises a 3D scanning reconstruction system configured to be arranged in the radiotherapy treatment room and configured to generate said input surface.
3. The system according to claim 2, wherein the input surface is configured as a series of 2D image frames of at least said target area of said patient and said 3D surface generation processor is configured to generate from said 2D image frames said 3D surface.
5. System according to claim 1, wherein the input surface is configured as a series of 2D image frames of at least said target area of said patient and said 3D surface generation processor is configured to generate from said 2D image frames said 3D surface.
4. The system according to claim 3, wherein the system furthermore comprises one or more cameras configured to be arranged in the radiotherapy treatment room and to obtain said series of 2D image frames of at least the target area of the patient.
6. System according to claim 5, wherein the system furthermore comprises one or more cameras configured to be arranged in the radiotherapy treatment room and to obtain said series of 2D image frames of at least the target area of the patient.
5. The system according to claim 1, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
9. System according to claim 1, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
6. The system according to claim 5, wherein the annotated reference ROI is based on the identification of one or more landmarks applied to each of reference surfaces, wherein the landmarks represent uniquely identifiable portions of the reference target area surface.
10. System according to claim 9, wherein the annotated reference ROI is based on the identification of one or more landmarks applied to each of reference surfaces, wherein the landmarks represent uniquely identifiable portions of the reference target area surface.
7. The system according to claim 1, wherein said ROI labelled 3D surface is configured to be input to a display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface.
16. System according to claim 1, wherein said ROI labelled 3D surface is configured to be input to said display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface.
8. The system according to claim 1, wherein said ROI labelled 3D surface is read into a quality module of said system, wherein said quality module is configured to estimate one or more geometric measurements of the 3D data in said ROI labelled 3D surface and to compare said estimated geometric measurements with one or more set thresholds.
17. System according to claim 1, wherein said ROI labelled 3D surface is read into a quality module of said system, wherein said quality module is configured to estimate one or more geometric measurements of the 3D data in said ROI labelled 3D surface and to compare said estimated geometric measurements with one or more set thresholds.
9. The system according to claim 1, wherein the ROI descriptive data comprises a template surface and a template ROI, wherein said template surface and template ROI are input to said ROI generation processor, which is configured to align and warp said template ROI and template surface with the 3D surface to create at least a warped ROI, and subsequently to transfer said warped ROI to said 3D surface.
18. System according to claim 1, wherein the ROI descriptive data comprises a template surface and a template ROI, wherein said template surface and template ROI are input to said ROI generation processor, which is configured to align and warp said template ROI and template surface with the 3D surface to create at least a warped ROI, and subsequently to transfer said warped ROI to said 3D surface.
10. The system according to claim 9, wherein the system furthermore comprises a training module configured to generate and output to said memory, the template surface and the template ROI.
19. System according to claim 18, wherein the system furthermore comprises a training module configured to generate and output to said memory, the template surface and the template ROI.
11. The system according to claim 10 wherein said training module comprises two or more reference target surfaces each having an annotated reference ROI applied thereto, wherein the training module is configured to align the two or more reference surfaces and subsequently to calculate an average of said aligned reference surfaces to produce said template surface, and calculate an average of said annotated ROIs to produce said template ROI.
20. System according to claim 19 wherein said training module comprises two or more reference target surfaces each having an annotated reference ROI applied thereto, wherein the training module is configured to align the two or more reference surfaces and subsequently to calculate an average of said aligned reference surfaces to produce said template surface, and calculate an average of said annotated ROIs to produce said template ROI.
12. The system according to claim 5, wherein the ROI descriptive data is configured as a ROI model, which ROI model is trained in a machine learning processor prior to being stored in said memory, wherein the ROI model is trained on the basis of said one or more reference target surfaces each having an annotated reference ROI applied thereto.
11. System according to claim 9, wherein the ROI descriptive data is configured as a ROI model, which ROI model is trained in a machine learning processor prior to being stored in said memory, wherein the ROI model is trained on the basis of said one or more reference target surfaces each having an annotated reference ROI applied thereto.
13. The system according to claim 12, wherein the ROI generation processor is configured to utilize the 3D surface as input to said ROI model, and to output said ROI labelled 3D surface to a motion tracking module.
12. System according to claim 11, wherein the ROI generation processor is configured to utilize the 3D surface as input to said ROI model, and to output said ROI labelled 3D surface to said display and/or said motion tracking module.
14. The system according to claim 12, wherein said reference surfaces are configured as depth map and normal map representations of the reference surfaces, and wherein the ROI model is configured to utilize the 3D surface and to classify in said depth map and normal map representations, vertices as being inside or outside of the region of interest as defined by the trained model.
13. System according to claim 11, wherein said reference surfaces are configured as depth map and normal map representations of the reference surfaces, and wherein the ROI model is configured to utilize the 3D surface and to classify in said depth map and normal map representations, vertices as being inside or outside of the region of interest as defined by the trained model.
15. The system according to claim 5, wherein said ROI generation processor furthermore comprises a landmark generation model, wherein said landmark generation model is trained in a machine learning processor prior to being stored in said ROI generation processor, wherein the landmark generation model is trained on the basis of said one or more reference target surfaces each having annotated landmarks applied thereto.
14. System according to claim 9, wherein said ROI generation processor furthermore comprises a landmark generation model, wherein said landmark generation model is trained in a machine learning processor prior to being stored in said ROI generation processor, wherein the landmark generation model is trained on the basis of said one or more reference target surfaces each having annotated landmarks applied thereto.
16. The system according to claim 15, wherein the landmark generation model outputs a representation of landmarks onto the input surface, thereby creating a landmark labelled 3D surface, wherein the landmark labelled 3D surface is utilized in said ROI generation processor together with a template surface and a template ROI to align and warp said template ROI and template surface with the landmark labelled 3D surface to create an aligned and warped ROI, and subsequently to transfer said warped ROI to said 3D surface to output the ROI labelled 3D surface.
15. System according to claim 14, wherein the landmark generation model outputs a representation of landmarks onto the input surface, thereby creating a landmark labelled 3D surface, wherein the landmark labelled 3D surface is utilized in said ROI generation processor together with a template surface and a template ROI to align and warp said template ROI and template surface with the landmark labelled 3D surface to create an aligned and warped ROI, and subsequently to transfer said warped ROI to said 3D surface to output the ROI labelled 3D surface.
17. The system according to claim 2, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
3. System according to claim 2, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
18. The system according to claim 3, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
8. System according to claim 5, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
19. The system according to claim 4, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
7. System according to claim 6, wherein the stored ROI descriptive data comprises one or more reference surfaces each having an annotated reference ROI applied thereto.
20. The system according to claim 2, wherein said ROI labelled 3D surface is configured to be input to a display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface.
4. System according to claim 2, wherein said ROI labelled 3D surface is configured to be input to said display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the treatment room" in line 10. There is insufficient antecedent basis for this limitation in the claim.
Claim 6 recites the limitations "the identification" in line 2 and “the reference target area surface” in line 3. There is insufficient antecedent basis for these limitations in the claim.
Claims 2-5 and 7-20 are also rejected under 35 U.S.C. 112(b) as being dependent upon a rejected base claim.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim 1 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lachaine (US 20190080459 A1).
-Regarding claim 1, Lachaine discloses a patient motion tracking system ([0022], “Control console 110 … perform functions or operations such as treatment planning, treatment execution, image acquisition, image processing, motion tracking”) for automatic generation of a region of interest on a 3D surface of a patient ([0040], “control console 110 may determine the motion of the anatomical region of interest by matching the set(s) of contour elements … to the 3D surface image”), the system comprising (Abstract; FIGS. 1-6): a memory comprising stored region of interest (ROI) descriptive data (FIG. 4; [0040], “contour(s) to the 3D surface image of target 320 acquired prior to the treatment session”; Note: the contours must be stored in a memory in order to later determine the motion of the anatomical region of interest as shown in FIGS. 1, 6); a 3D surface generation processor configured to obtain a 3D surface comprising at least a target area (FIGS. 1, 3, 5; [0019], “A target may include an organ, a tumor, an anomaly, or an anatomical structure that is subject to or related to radiotherapy”; [0020], “matched to a 3D surface image of the target”; [0034]); a ROI generation processor configured (FIG. 1) to utilize said stored ROI descriptive data (FIG. 5; FIG. 6, steps 620-640) and said 3D surface to generate a ROI labelled 3D surface (FIG. 6, steps 610-640), wherein the system is configured to use said ROI labelled 3D surface to track motion of a patient during positioning and/or treatment of said patient in the treatment room (FIGS. 1-2; FIG. 6, steps 640-650; [0039]-[0040]; [0048]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Lachaine (US 20190080459 A1) in view of Lampotang et al. (US 20220133284 A1), hereinafter Lampotang.
-Regarding claim 2, Lachaine discloses the system of claim 1.
Lachaine does not disclose wherein the system comprises a 3D scanning reconstruction system configured to be arranged in a radiotherapy treatment room and configured to generate an input surface, the 3D surface being generated from said input surface.
In the same field of endeavor, Lampotang teaches a guidance and tracking system that facilitate templated and targeted biopsy and/or treatment (Lampotang: Abstract; FIGS. 1-27). Lampotang further teaches wherein the system comprises a 3D scanning reconstruction system configured to be arranged in a radiotherapy treatment room and configured to generate an input surface, the 3D surface being generated from said input surface (Lampotang: FIGS. 1-2, 6, 8; [0014]; [0050]; [0077], “The computing device 115 can be configured to … generate a three-dimensional reconstruction … of an organ”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lachaine with the teaching of Lampotang by using a 3D scanning reconstruction system in order to provide a more accurate model for motion tracking of the patient (Lampotang: [0061]).
-Regarding claim 3, Lachaine in view of Lampotang teaches the system of claim 2.
Lachaine does not disclose that the 3D surface is reconstructed from 2D image frames.
In the same field of endeavor, Lampotang teaches a guidance and tracking system that facilitate templated and targeted biopsy and/or treatment (Lampotang: Abstract; FIGS. 1-27). Lampotang further teaches that 3D surface is reconstructed from 2D image frames (Lampotang: FIGS. 1-2; [0052], “the three-dimensional reconstruction may be generated by … multiple two-dimensional (2D) images”; [0086]; [0092]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lachaine with the teaching of Lampotang by reconstructing the 3D surface from 2D image frames in order to provide a more accurate model for motion tracking of the patient (Lampotang: [0061]).
-Regarding claim 4, Lachaine in view of Lampotang teaches the system of claim 3. The combination further teaches wherein the system furthermore comprises one or more cameras configured to be arranged in the radiotherapy treatment room and to obtain said series of 2D image frames of at least the target area of the patient (Lampotang: FIGS. 1-2; [0051]).
Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Lachaine (US 20190080459 A1) in view of Bharat et al. (US 20160016007 A1), hereinafter Bharat.
-Regarding claim 5, Lachaine discloses the system of claim 1 and discloses wherein the stored ROI descriptive data comprises one or more reference surfaces (FIG. 4; [0040], “contour(s) to the 3D surface image of target 320 acquired prior to the treatment session”).
Lachaine does not disclose the one or more reference surfaces each having an annotated reference ROI applied thereto.
In the same field of endeavor, Bharat teaches a method for surface tracking-based motion management (Bharat: Abstract; FIGS. 1-3). Bharat further teaches the one or more reference surfaces each having an annotated reference ROI applied thereto (Bharat: FIG. 1; [0025], “delineates one or more regions of interest (ROIs) in the image, such as a target and/or OARs. The ROIs are typically delineated with contours tracing the boundaries of the ROIs in the image. Delineation can be performed automatically …”; [0026]; [0031]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lachaine with the teaching of Bharat by having an annotated reference ROI applied to each of the one or more reference surfaces in order to more accurately determine the motion of the anatomical region of interest.
-Regarding claim 7, Lachaine discloses the system of claim 1.
Lachaine does not disclose wherein said ROI labelled 3D surface is configured to be input to a display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface.
In the same field of endeavor, Bharat teaches a method for surface tracking-based motion management (Bharat: Abstract; FIGS. 1-3). Bharat further teaches wherein said ROI labelled 3D surface is configured to be input to a display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface (Bharat: FIG. 1; [0025]-[0026]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Lachaine with the teaching of Bharat by allowing adjustment of said region of interest in order to more accurately determine the motion of the anatomical region of interest.
Claims 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lachaine (US 20190080459 A1) in view of Lampotang et al. (US 20220133284 A1), hereinafter Lampotang, and further in view of Bharat et al. (US 20160016007 A1), hereinafter Bharat.
-Regarding claim 17, Lachaine in view of Lampotang teaches the system of claim 2.
Lachaine in view of Lampotang does not teach the one or more reference surfaces each having an annotated reference ROI applied thereto.
However, Bharat is analogous art pertinent to the problem to be solved in this application and teaches a method for surface tracking-based motion management (Bharat: Abstract; FIGS. 1-3). Bharat further teaches the one or more reference surfaces each having an annotated reference ROI applied thereto (Bharat: FIG. 1; [0025], “delineates one or more regions of interest (ROIs) in the image, such as a target and/or OARs. The ROIs are typically delineated with contours tracing the boundaries of the ROIs in the image. Delineation can be performed automatically …”; [0026]; [0031]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Lachaine in view of Lampotang with the teaching of Bharat by having an annotated reference ROI applied to each of the one or more reference surfaces in order to more accurately determine the motion of the anatomical region of interest.
-Regarding claim 18, Lachaine in view of Lampotang teaches the system of claim 3.
Lachaine in view of Lampotang does not teach the one or more reference surfaces each having an annotated reference ROI applied thereto.
However, Bharat is analogous art pertinent to the problem to be solved in this application and teaches a method for surface tracking-based motion management (Bharat: Abstract; FIGS. 1-3). Bharat further teaches the one or more reference surfaces each having an annotated reference ROI applied thereto (Bharat: FIG. 1; [0025], “delineates one or more regions of interest (ROIs) in the image, such as a target and/or OARs. The ROIs are typically delineated with contours tracing the boundaries of the ROIs in the image. Delineation can be performed automatically …”; [0026]; [0031]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Lachaine in view of Lampotang with the teaching of Bharat by having an annotated reference ROI applied to each of the one or more reference surfaces in order to more accurately determine the motion of the anatomical region of interest.
-Regarding claim 19, Lachaine in view of Lampotang teaches the system of claim 4.
Lachaine in view of Lampotang does not teach the one or more reference surfaces each having an annotated reference ROI applied thereto.
However, Bharat is analogous art pertinent to the problem to be solved in this application and teaches a method for surface tracking-based motion management (Bharat: Abstract; FIGS. 1-3). Bharat further teaches the one or more reference surfaces each having an annotated reference ROI applied thereto (Bharat: FIG. 1; [0025], “delineates one or more regions of interest (ROIs) in the image, such as a target and/or OARs. The ROIs are typically delineated with contours tracing the boundaries of the ROIs in the image. Delineation can be performed automatically …”; [0026]; [0031]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Lachaine in view of Lampotang with the teaching of Bharat by having an annotated reference ROI applied to each of the one or more reference surfaces in order to more accurately determine the motion of the anatomical region of interest.
-Regarding claim 20, Lachaine in view of Lampotang teaches the system of claim 2.
Lachaine in view of Lampotang does not teach wherein said ROI labelled 3D surface is configured to be input to a display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface.
However, Bharat is analogous art pertinent to the problem to be solved in this application and teaches a method for surface tracking-based motion management (Bharat: Abstract; FIGS. 1-3). Bharat further teaches wherein said ROI labelled 3D surface is configured to be input to a display, wherein said display is configured to allow a user to adjust said region of interest via control inputs to the ROI generation processor, wherein the control inputs utilizes an adjustment of at least the borders of the ROI label of the ROI labelled 3D surface (Bharat: FIG. 1; [0025]-[0026]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Lachaine in view of Lampotang with the teaching of Bharat by allowing adjustment of said region of interest in order to more accurately determine the motion of the anatomical region of interest.
Allowable Subject Matter
Claims 6 and 8-16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the claim rejections set forth in the above sections of “Double Patenting” and “Claim Rejections - 35 USC § 112” are overcome.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAO LIU whose telephone number is (571)272-4539. The examiner can normally be reached Monday-Thursday and alternate Fridays, 8:30-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAO LIU/Primary Examiner, Art Unit 2664