DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 20-23, 25, 27, 30-32, 34, and 37-39 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Taki (US 2023/0017704 A1, Jan. 19, 2023) (hereinafter “Taki”).
Regarding claim 20: Taki discloses a non-transitory computer-readable storage medium storing a program causing a computer to execute processes of: acquiring training data including an X-ray image of a target site and information related to an amount of body tissue obtained from a CT (Computed Tomography) image of the target site ([0045]-[0046] - CT image V0 and x-ray images G1 and G2); and generating a learning model configured to output information related to an amount of body tissue of a target site in an X-ray image when the X-ray image is input using acquired training data ([0047], [0058], [0060]-[0063]).
Regarding claim 21: Taki discloses the non-transitory computer-readable storage medium according to claim 20, wherein the program causes the computer to execute a process of generating the learning model configured to output an image representing an amount of the body tissue of the target site when the X-ray image is input ([0060], [0099] - display image region 71, fig. 17).
Regarding claim 22: Taki discloses the non-transitory computer-readable storage medium according to claim 20, wherein the program causes the computer to execute processes of: aligning a position of the target site based on the CT image and a position of the target site based on the X-ray image ([0087]-[0089] where image CG is generated based on the CT image V0); and acquiring the training data including information related to an amount of body tissue of the target site obtained from the CT image after alignment ([0089]).
Regarding claim 23: Taki discloses the non-transitory computer-readable storage medium according to claim 20, wherein the program causes the computer to execute processes of: classifying the target site in the CT image into a plurality of regions including a bone region and a muscle region based on the CT image ([0083], [0092]); acquiring the training data including information related to bone density of the classified bone region in the CT image ([0092]-[0093]); and generating the learning model configured to output information related to bone density of the bone region in an X-ray image when the X-ray image is input using the training data ([0047], [0058], [0060]-[0063]).
Regarding claim 25: Taki discloses the non-transitory computer-readable storage medium according to claim 22, wherein the program causes the computer to execute processes of: specifying a bone region in the CT image and a bone region in the X-ray image ([0090], [0075]); generating, based on the CT image, a CT image of the bone region viewed in a direction matching a capturing direction of the bone region in the X-ray image ([0087]-[0089] where image CG is generated based on the CT image V0); and by aligning a position of a bone region in the generated CT image and a position of the bone region in the X-ray image, aligning the position of the target site based on the CT image and the position of the target site based on the X-ray image ([0089]).
Regarding claim 27: Taki discloses the non-transitory computer-readable storage medium according to claim 20, wherein the program causes the computer to execute processes of: specifying a projection condition that maximizes a correlation value between an image obtained by projecting a bone region included in the target site in the CT image and a bone region included in the target site in the X-ray image ([0093]-[0094], [0103]); and acquiring the training data including information related to an amount of body tissue of the target site obtained from a projection image obtained by projecting the target site in the CT image under a specified projection condition ([0093]-[0094], [0103]).
Regarding claim 30: Taki discloses a non-transitory computer-readable storage medium storing a program causing a computer to execute processes of: acquiring an X-ray image of a target site ([0045]-[0046] - CT image V0 and x-ray images G1 and G2); and inputting the acquired X-ray image to a learning model trained using training data including an X-ray image of a target site and information related to an amount of body tissue obtained from a CT image of the target site and configured to output information related to an amount of body tissue of a target site in an X-ray image when the X-ray image is input, thereby outputting information related to an amount of body tissue of the target site ([0047], [0058], [0060]-[0063]).
Regarding claim 31: Taki discloses the non-transitory computer-readable storage medium according to claim 30, wherein the output information related to the amount of body tissue is an image representing an amount of body tissue of the target site ([0060], [0099] - display image region 71, fig. 17).
Regarding claim 32: Taki discloses the non-transitory computer-readable storage medium according to claim 30, wherein the output information related to the amount of body tissue is bone density of the target site or muscle mass of the target site ([0060], [0099] - bone density display region 72, fig. 17).
Regarding claim 34: Taki discloses the non-transitory computer-readable storage medium according to claim 30, wherein the learning model is trained using the training data including information related to an amount of body tissue of the target site obtained from a CT image generated for the bone region viewed in a direction matching a capturing direction of a bone region specified in the X-ray image before alignment in which a position of the target site based on the generated CT image is aligned with a position of the target site based on the X-ray image by aligning a bone region in the generated CT image with the bone region in the X-ray image ([0087]-[0089]).
Regarding claim 37: Taki discloses an information processing apparatus comprising a control unit, wherein the control unit is configured to: acquire an X-ray image of a target site ([0045]-[0046] - CT image V0 and x-ray images G1 and G2); and input the acquired X-ray image to a learning model trained using training data including an X-ray image of a target site and information related to an amount of body tissue obtained from a CT image of the target site and configured to output information related to an amount of body tissue of a target site in an X-ray image when the X-ray image is input, thereby outputting information related to an amount of body tissue of the target site ([0047], [0058], [0060]-[0063]).
Regarding claim 38: Taki discloses an information processing apparatus comprising a control unit, wherein the control unit is configured to: acquire an X-ray image of a target site ([0045]-[0046] - CT image V0 and x-ray images G1 and G2); and input the acquired X-ray image to a learning model trained using training data including an X-ray image of a target site and information related to an amount of body tissue obtained from a CT image of the target site and configured to output information related to an amount of body tissue of a target site in an X-ray image when the X-ray image is input, thereby outputting information related to an amount of body tissue of the target site ([0047], [0058], [0060]-[0063]).
Regarding claim 39: Taki discloses the information processing apparatus according to claim 37, wherein the learning model is trained using the training data including information related to an amount of body tissue of the target site obtained from a CT image generated for the bone region viewed in a direction matching a capturing direction of a bone region specified in the X-ray image before alignment in which a position of the target site based on the generated CT image is aligned with a position of the target site based on the X-ray image by aligning a bone region in the generated CT image with the bone region in the X-ray image ([0087]-[0089]).
Allowable Subject Matter
Claims 24, 26, 28-29, 33, and 35-36 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the prior art of record, alone or in combination, fails to teach or suggest every limitation of the identified claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hiasa (US 2024/0122556 A1) discloses segmenting soft tissue and bone from pseudo X-ray images using a neural network trained on CT images.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAROLYN A PEHLKE whose telephone number is (571)270-3484. The examiner can normally be reached 9:00am - 5:00pm (Central Time), Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Koharski, can be reached at (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAROLYN A PEHLKE/Primary Examiner, Art Unit 3799