DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of the claim for priority to foreign application JP 2023-064794, filed 12 March 2023. Certified copies of the priority papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.
Information Disclosure Statement
The IDSs dated 8 April 2024, 23 May 2024 and 17 October 2024 have been considered and placed in the application file.
Specification - Title
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: Neural Network Using Reliability Data to Improve Outcomes.
1st Claim Interpretation
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification.
The following terms in the claims have been given the following interpretations in light of the specification:
goodness-of-fit, claims 1, 5-7, 9-10, 12 and 14-17: paragraph [0022], “Specifically, the goodness of fit relating to the ground truth data is an indicator (level of reliability) representing how reliable the correct output (endocardial contour point) indicated by the ground truth data in each training data pair can be as the data indicating the actual region of the object.”
Thus, a goodness-of-fit is a reliability indicator that a given pixel is part of an object (such as a heart). While the definition suggests an application as a heart or other medical object identifier, there is no language in the claims restricting the application to medical imaging. An attempt has therefore been made to use medical references; however, other references within image analysis may apply. This definition is used for purposes of searching for prior art, but is not incorporated into the claims.
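As a purely illustrative aid to this interpretation (hypothetical names; not part of the claims), the interpreted goodness-of-fit can be modeled as a bounded reliability level attached to each training data pair:

```python
# Illustrative sketch only: models the interpretation of "goodness-of-fit"
# as a reliability indicator (level of reliability) attached to each
# training data pair. All names here are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrainingPair:
    image: List[float]                   # learning image data
    ground_truth: List[Tuple[int, int]]  # e.g., contour points of the object
    goodness_of_fit: float               # reliability of the ground truth, in [0, 1]

    def __post_init__(self) -> None:
        # A goodness-of-fit is a level of reliability, so it is bounded.
        if not 0.0 <= self.goodness_of_fit <= 1.0:
            raise ValueError("goodness-of-fit must lie in [0, 1]")

pair = TrainingPair(image=[0.2, 0.7], ground_truth=[(3, 4)], goodness_of_fit=0.8)
```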
Should Applicant intend a different definition, Applicant should point to the portions of the specification that clearly set forth that definition.
2nd Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f), is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f), is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
a training data acquisition unit configured to acquire in claim 1;
a goodness-of-fit acquisition unit configured to acquire in claim 1; and
a learning unit configured to perform training in claim 1.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 9, reciting “based on information other than the training data relating to the object,” is rejected under 35 U.S.C. 112(d) as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Applicant may cancel the claim, amend the claim to place it in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the statutory requirements.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The USPTO notice “Subject Matter Eligibility of Computer Readable Media” (1351 Off. Gaz. Pat. Office 212, February 23, 2010) reads as follows:
The USPTO recognizes that applicants may have claims directed to computer readable media that cover signals per se, which the USPTO must reject under 35 U.S.C. § 101 as covering both non-statutory subject matter and statutory subject matter. In an effort to assist the patent community in overcoming a rejection or potential rejection under 35 U.S.C. § 101 in this situation, the USPTO suggests the following approach.
A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. § 101 by adding the limitation "non-transitory" to the claim. Cf. Animals - Patentability, 1077 Off. Gaz. Pat. Office 24 (April 21, 1987) (suggesting that applicants add the limitation "non-human" to a claim covering a multi-cellular organism to avoid a rejection under 35 U.S.C. § 101). Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. The limited situations in which such an amendment could raise issues of new matter occur, for example, when the specification does not support a non-transitory embodiment because a signal per se is the only viable embodiment such that the amended claim is impermissibly broadened beyond the supporting disclosure. See, e.g., Gentry Gallery, Inc. v. Berkline Corp., 134 F.3d 1473 (Fed. Cir. 1998).
Claims 1 and 12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, as follows. Claim 1 defines an “information processing apparatus” embodying functional descriptive material. However, the claim does not define a non-transitory computer-readable medium or memory and is thus non-statutory for that reason (i.e., during examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01.
Dependent claims 2-11 and 13-15 are also rejected as depending from claims 1 and 12, likewise reciting an “information processing apparatus” embodying functional descriptive material.
When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See Subject Matter Eligibility of Computer Readable Media, 1351 Off. Gaz. Pat. Office 212 (February 23, 2010). That is, the scope of the presently claimed “information processing apparatus” typically covers forms of non-transitory tangible media and transitory propagating signals per se. The examiner suggests amending the claims to embody the program on a “computer readable medium” and adding the limitation “non-transitory” to the claims, or equivalent, in order to make the claims statutory. Any amendment to the claims should be commensurate with the corresponding disclosure.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-17 (all claims) are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2025/0331976 A1 (Uemura et al.). The reference is listed on the accompanying PTO-892.
Claim 1
[Uemura et al., Fig. 3a: showing using a neural network to segment an image using training images.]
Regarding Claim 1, Uemura et al. teach an information processing apparatus for generating a learning model that performs, by using input image data obtained by imaging an object, estimation relating to the object rendered in the image data ("X-ray image as input and outputs information," paragraph [0046]), the information processing apparatus comprising at least one processor ("the information processing apparatus 10 performs machine learning in advance to learn predetermined training data, and prepares a learning model 12M," paragraph [0046]) capable of causing the information processing apparatus to function as:
a training data acquisition unit configured to acquire, as training data used for generating the learning model, learning image data obtained by imaging the object ("the information processing apparatus 10 inputs the frontal hip joint X-ray image to the learning model 12M, thereby acquiring information on the bone density of the proximal femur from the learning model 12M," paragraph [0046]) and ground truth data indicating information about the object ("The learning model 12Mb of this embodiment is generated by being trained using training data in which a training X-ray image (frontal hip joint X-ray image), a DRR image of a gluteus maximus muscle which is a ground truth," paragraph [0149]) in the learning image data;
a goodness-of-fit acquisition unit configured to acquire goodness of fit relating to the ground truth data ("the control unit 11 reads a pair of an X-ray image (frontal hip joint X-ray image) and a CT image from the medical image DB 12a, performs a luminance value calibration process on the read CT image, and then classifies each pixel in the CT image as a bone region, a muscle region, and another region (musculoskeletal region)," paragraph [0101] where goodness of fit is interpreted as classifies each pixel as a region and "defining achievement of alignment between a contour in a DRR image (here, pseudo DRR image) of a target site (bone region, here, pelvis) generated from a 3D region of the target site in a CT image and a contour of a target site in an actual X-ray image," paragraph [0128]); and
a learning unit configured to perform training on the learning model based on the training data and the goodness of fit ("The bone density estimation learning model 12Ml may be trained by another training device," paragraph [0087]).
It is recognized that the citations and evidence provided above are derived from potentially different embodiments of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary embodiments, because Uemura et al. explicitly motivates doing so at least in paragraphs [0044], [0157] and [0158] including “It is to be noted that the disclosed embodiment is illustrative and not restrictive in all aspects. The scope of the present invention is defined by the appended claims rather than by the description preceding them, and all changes that fall within metes and bounds of the claims, or equivalence of such metes and bounds thereof” and otherwise motivating experimentation and optimization.
The rejection of apparatus claim 1 above applies mutatis mutandis to the corresponding limitations of apparatus claim 12, method claim 16 and method claim 17 while noting that the rejection above cites to both device and method disclosures. Claims 12, 16 and 17 are mapped below for clarity of the record and to specify any new limitations not included in claim 1.
Claim 2
Regarding claim 2, Uemura et al. teach the information processing apparatus according to claim 1, wherein
the learning model is a learning model for estimating spatial information about the object in the image data ("the learning model 12M of this embodiment is trained using training data that allows for spatial correspondence between abundant 3D data obtained by CT and an X-ray image with high accuracy," paragraph [0071]), and
the training data acquisition unit acquires information about a region of the object as ground truth data in the training data ("12Ml is generated by preparing training data that associates a training DRR image with training (ground truth) bone density, and using this training data to machine-train the untrained learning model 12Ml," paragraph [0086]).
Claim 3
Regarding claim 3, Uemura et al. teach the information processing apparatus according to claim 1, wherein
the learning model is a learning model for estimating a position of a feature point of the object in the image data ("the learning model 12M of this embodiment is trained using training data that allows for spatial correspondence between abundant 3D data obtained by CT and an X-ray image with high accuracy," paragraph [0071]), and
the training data acquisition unit acquires information about the position of the feature point of the object as ground truth data in the training data ("a difference ( error, loss related to muscle mass) between the muscle mass which is the ground truth and the muscle mass based on the DRR image of the muscle region generated by the learning model 12Mb can be fed back. Therefore, according to the training process of this embodiment, the learning model 12Mb is generated to predict a DRR image and muscle mass of a muscle region included in an X-ray image with high accuracy when the X-ray image is input," paragraph [0150] where a region with high accuracy teaches position of the feature point).
Claim 4
Regarding claim 4, Uemura et al. teach the information processing apparatus according to claim 1, wherein
the learning model is a learning model for estimating a contour of the object in the image data ("the control unit 11 calculates a correlation value between the contour of the region of interest in the X-ray image specified in step S111 and the contour of the region of interest in the pseudo DRR image," paragraph [0127]), and
the training data acquisition unit acquires information about the contour of the object as ground truth data in the training data ("the control unit 11 specifies a projection condition in the pseudo DRR image maximizing the correlation value between the contour of the bone regions in the X-ray image and the contours of the pseudo DRR image by using the CMA-ES in the processing of steps S122 to S125," paragraph [0135] and "Then, the control unit 11 extracts a half-section image including the left proximal femur from the frontal hip joint X-ray image acquired in step S11 (S18), and stores, as training data, the extracted X-ray image (half-section image of the frontal hip joint X-ray image) and the DRR image of the region of interest generated in step S17 in association with each other in the training DB 12b (S19)," paragraph [0136]).
Claim 5
Regarding claim 5, Uemura et al. teach the information processing apparatus according to claim 1, wherein
the goodness-of-fit acquisition unit calculates the goodness of fit, based on pixel values of a periphery of a position of the object in the learning image data, the position of the object being indicated by the ground truth data in the training data ("the control unit 11 reads a pair of an X-ray image (frontal hip joint X-ray image) and a CT image from the medical image DB 12a, performs a luminance value calibration process on the read CT image, and then classifies each pixel in the CT image as a bone region, a muscle region, and another region (musculoskeletal region)," paragraph [0101] where goodness of fit is interpreted as classifies each pixel as a region and "defining achievement of alignment between a contour in a DRR image (here, pseudo DRR image) of a target site (bone region, here, pelvis) generated from a 3D region of the target site in a CT image and a contour of a target site in an actual X-ray image," paragraph [0128]).
Claim 6
Regarding claim 6, Uemura et al. teach the information processing apparatus according to claim 5, wherein
the goodness-of-fit acquisition unit calculates the goodness of fit, based on a luminance gradient indicated by the pixel values ("the control unit 11 detects a luminance gradient (edge) of the image based on the pixel value of each pixel, and specifies a capturing target in the X-ray image based on the detected luminance gradient," paragraph [0064]).
Claim 7
Regarding claim 7, Uemura et al. teach the information processing apparatus according to claim 3, wherein the goodness-of-fit acquisition unit calculates the goodness of fit of an individual feature point of the ground truth data in the training data, based on a positional relationship between the individual feature point and a feature point other than the individual feature point ("the control unit 11 reads a pair of an X-ray image (frontal hip joint X-ray image) and a CT image from the medical image DB 12a, performs a luminance value calibration process on the read CT image, and then classifies each pixel in the CT image as a bone region, a muscle region, and another region (musculoskeletal region)," paragraph [0101] where goodness of fit is interpreted as classifies each pixel as a region and "defining achievement of alignment between a contour in a DRR image (here, pseudo DRR image) of a target site (bone region, here, pelvis) generated from a 3D region of the target site in a CT image and a contour of a target site in an actual X-ray image," paragraph [0128]).
Claim 8
Regarding claim 8, Uemura et al. teach the information processing apparatus according to claim 7, wherein the positional relationship is a curvature of a contour line of the object based on the feature points ( "defining achievement of alignment between a contour in a DRR image (here, pseudo DRR image) of a target site (bone region, here, pelvis) generated from a 3D region of the target site in a CT image and a contour of a target site in an actual X-ray image," paragraph [0128]).
Claim 9
Regarding claim 9, Uemura et al. teach the information processing apparatus according to claim 1, wherein the goodness-of-fit acquisition unit calculates the goodness of fit, based on information other than the training data relating to the object ("the control unit 11 reads a pair of an X-ray image (frontal hip joint X-ray image) and a CT image from the medical image DB 12a, performs a luminance value calibration process on the read CT image, and then classifies each pixel in the CT image as a bone region, a muscle region, and another region (musculoskeletal region)," paragraph [0101] where goodness of fit is interpreted as classifies each pixel as a region).
Claim 10
Regarding claim 10, Uemura et al. teach the information processing apparatus according to claim 1, wherein the learning unit performs training on the learning model by applying the goodness of fit to a difference between an estimated value relating to the object, which is estimated by the learning model, and a correct value relating to the object, which is indicated by the ground truth data ("the screen illustrated in FIG. 8 displays, as the test results for the bone density based on the frontal hip joint X-ray image, a predicted DRR image, a name of a target site in the predicted DRR image (left proximal femur in FIG. 8), bone density estimated from the predicted DRR image," paragraph [0076]).
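As mapped above, claim 10 amounts to scaling the difference between the model's estimated value and the correct (ground truth) value by the goodness of fit. A minimal sketch of such a weighted loss, with hypothetical names and purely for illustration of the claimed concept:

```python
# Illustrative sketch only: the goodness of fit is applied to the difference
# between the estimated value and the correct value, so ground truth with a
# low reliability level contributes less to the training signal.
def goodness_weighted_loss(estimates, correct_values, goodness_of_fit):
    """Mean of squared errors, each scaled by its pair's goodness of fit."""
    terms = [g * (e - c) ** 2
             for e, c, g in zip(estimates, correct_values, goodness_of_fit)]
    return sum(terms) / len(terms)

# The second pair has a larger raw error but a less reliable label
# (g = 0.1), so it is down-weighted in the loss.
loss = goodness_weighted_loss([1.0, 2.0], [0.0, 0.0], [1.0, 0.1])  # 0.7
```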
Claim 11
Regarding claim 11, Uemura et al. teach the information processing apparatus according to claim 10, wherein the estimated value and the correct value are a pixel value of a pixel corresponding to the object ("Note that the control unit 11 may calculate the muscle mass for each pixel based on each pixel value in the predicted DRR image, and may calculate the muscle mass in the muscle region by integrating the muscle masses corresponding to each pixel. Furthermore, the control unit 11 may predict the muscle mass of the entire body of the subject based on the muscle mass in each muscle region. For example, by registering the muscle mass of each muscle of the subject, such as the gluteus maximus muscle, gluteus medius muscle, and hamstrings, in association with the muscle mass of the entire body of the subject, the muscle mass of the entire body of the subject can be predicted from the muscle mass of each muscle estimated from the predicted DRR image," paragraph [0105] where estimating is predicting).
Claim 12
Regarding claim 12, Uemura et al. teach an information processing apparatus, comprising at least one processor capable of causing the information processing apparatus ("the information processing apparatus 10 performs machine learning in advance to learn predetermined training data, and prepares a learning model 12M," paragraph [0046]) to function as:
a data acquisition unit configured to acquire input image data obtained by imaging an object ("X-ray image as input and outputs information," paragraph [0046]);
a learning model acquisition unit configured to acquire a learning model generated by learning based on learning image data obtained by imaging the object ("the information processing apparatus 10 inputs the frontal hip joint X-ray image to the learning model 12M, thereby acquiring information on the bone density of the proximal femur from the learning model 12M," paragraph [0046]), ground truth data indicating information about the object in the learning image data, and goodness of fit relating to the ground truth data ("the control unit 11 reads a pair of an X-ray image (frontal hip joint X-ray image) and a CT image from the medical image DB 12a, performs a luminance value calibration process on the read CT image, and then classifies each pixel in the CT image as a bone region, a muscle region, and another region (musculoskeletal region)," paragraph [0101] where goodness of fit is interpreted as classifies each pixel as a region and "defining achievement of alignment between a contour in a DRR image (here, pseudo DRR image) of a target site (bone region, here, pelvis) generated from a 3D region of the target site in a CT image and a contour of a target site in an actual X-ray image," paragraph [0128]); and
an estimation unit configured to perform an estimation process relating to the object rendered in the input image data by using the input image data and the learning model ("Note that the control unit 11 may calculate the muscle mass for each pixel based on each pixel value in the predicted DRR image, and may calculate the muscle mass in the muscle region by integrating the muscle masses corresponding to each pixel. Furthermore, the control unit 11 may predict the muscle mass of the entire body of the subject based on the muscle mass in each muscle region. For example, by registering the muscle mass of each muscle of the subject, such as the gluteus maximus muscle, gluteus medius muscle, and hamstrings, in association with the muscle mass of the entire body of the subject, the muscle mass of the entire body of the subject can be predicted from the muscle mass of each muscle estimated from the predicted DRR image," paragraph [0105] where estimating is predicting).
Claim 13
Regarding claim 13, Uemura et al. teach the information processing apparatus according to claim 12, wherein the at least one processor causes the information processing apparatus to further function as a display processing unit configured to display an estimation result of the estimation unit (" The control unit 11 stores test results including the calculated muscle density and muscle mass of each muscle in, for example, the electronic medical record data (S45), generates a test result screen illustrated in FIG. 17, and outputs the test result screen to the display unit 15 (S46)," paragraph [0106]).
Claim 14
Regarding claim 14, Uemura et al. teach the information processing apparatus according to claim 13, wherein the at least one processor causes the information processing apparatus to further function as a goodness-of-fit acquisition unit configured to acquire goodness of fit relating to the input image data ("the control unit 11 reads a pair of an X-ray image (frontal hip joint X-ray image) and a CT image from the medical image DB 12a, performs a luminance value calibration process on the read CT image, and then classifies each pixel in the CT image as a bone region, a muscle region, and another region (musculoskeletal region)," paragraph [0101] where goodness of fit is interpreted as classifies each pixel as a region and "defining achievement of alignment between a contour in a DRR image (here, pseudo DRR image) of a target site (bone region, here, pelvis) generated from a 3D region of the target site in a CT image and a contour of a target site in an actual X-ray image," paragraph [0128]), and
the display processing unit displays the goodness of fit relating to the input image data (" The control unit 11 stores test results including the calculated muscle density and muscle mass of each muscle in, for example, the electronic medical record data (S45), generates a test result screen illustrated in FIG. 17, and outputs the test result screen to the display unit 15 (S46)," paragraph [0106]).
Claim 15
Regarding claim 15, Uemura et al. teach the information processing apparatus according to claim 12, wherein the learning model is generated by learning based on a result obtained by applying the goodness of fit to a difference between an estimation result of the estimation unit and the ground truth data ("the control unit 11 reads a pair of an X-ray image (frontal hip joint X-ray image) and a CT image from the medical image DB 12a, performs a luminance value calibration process on the read CT image, and then classifies each pixel in the CT image as a bone region, a muscle region, and another region (musculoskeletal region)," paragraph [0101] where goodness of fit is interpreted as classifies each pixel as a region and "defining achievement of alignment between a contour in a DRR image (here, pseudo DRR image) of a target site (bone region, here, pelvis) generated from a 3D region of the target site in a CT image and a contour of a target site in an actual X-ray image," paragraph [0128]).
Claim 16
Regarding claim 16, Uemura et al. teach an information processing method for generating a learning model that performs, by using input image data obtained by imaging an object, estimation relating to the object rendered in the image data, ("the information processing apparatus 10 performs machine learning in advance to learn predetermined training data, and prepares a learning model 12M," paragraph [0046]) the information processing method comprising:
a training data acquisition step of acquiring, as training data used for generating the learning model, learning image data obtained by imaging the object ("the information processing apparatus 10 inputs the frontal hip joint X-ray image to the learning model 12M, thereby acquiring information on the bone density of the proximal femur from the learning model 12M," paragraph [0046]) and ground truth data indicating information about the object in the learning image data ("The learning model 12Mb of this embodiment is generated by being trained using training data in which a training X-ray image (frontal hip joint X-ray image), a DRR image of a gluteus maximus muscle which is a ground truth," paragraph [0149]);
a goodness-of-fit acquisition step of acquiring goodness of fit relating to the ground truth data ("the control unit 11 reads a pair of an X-ray image (frontal hip joint X-ray image) and a CT image from the medical image DB 12a, performs a luminance value calibration process on the read CT image, and then classifies each pixel in the CT image as a bone region, a muscle region, and another region (musculoskeletal region)," paragraph [0101], where the goodness of fit is interpreted as the classification of each pixel into a region, and "defining achievement of alignment between a contour in a DRR image (here, pseudo DRR image) of a target site (bone region, here, pelvis) generated from a 3D region of the target site in a CT image and a contour of a target site in an actual X-ray image," paragraph [0128]); and
a learning step of performing training on the learning model, based on the training data and the goodness of fit ("The bone density estimation learning model 12Ml may be trained by another training device," paragraph [0087]).
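For clarity of the record, the claimed training scheme, in which a per-sample loss is scaled by a goodness-of-fit (reliability) value attached to each ground-truth label, can be illustrated by the following minimal sketch. The function name, loss form, and numeric values are illustrative assumptions of the examiner, not taken from the Uemura et al. reference or from applicant's disclosure:

```python
def weighted_mse_loss(predictions, ground_truth, goodness_of_fit):
    """Mean squared error in which each training sample's contribution
    is scaled by the reliability (goodness of fit) of its label."""
    weighted = [
        w * (p - t) ** 2
        for p, t, w in zip(predictions, ground_truth, goodness_of_fit)
    ]
    return sum(weighted) / len(weighted)

# A label flagged as unreliable (weight 0.2) contributes less to the loss
# than fully trusted labels (weight 1.0).
loss = weighted_mse_loss(
    predictions=[0.9, 0.4, 0.7],
    ground_truth=[1.0, 0.0, 1.0],
    goodness_of_fit=[1.0, 0.2, 0.8],
)
```

Under this reading, training "based on the training data and the goodness of fit" amounts to down-weighting unreliable ground-truth labels during optimization.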
Claim 17
Regarding claim 17, Uemura et al. teach an information processing method ("the information processing apparatus 10 performs machine learning in advance to learn predetermined training data, and prepares a learning model 12M," paragraph [0046]), comprising:
a data acquisition step of acquiring input image data obtained by imaging an object ("X-ray image as input and outputs information," paragraph [0046]);
a learning model acquisition step that acquires a learning model generated by learning, based on learning image data obtained by imaging the object, ground truth data indicating information about the object in the learning image data, and goodness of fit relating to the ground truth data ("The learning model 12Mb of this embodiment is generated by being trained using training data in which a training X-ray image (frontal hip joint X-ray image), a DRR image of a gluteus maximus muscle which is a ground truth," paragraph [0149]); and
an estimation step of performing an estimation process relating to the object rendered in the input image data by using the input image data and the learning model ("Note that the control unit 11 may calculate the muscle mass for each pixel based on each pixel value in the predicted DRR image, and may calculate the muscle mass in the muscle region by integrating the muscle masses corresponding to each pixel. Furthermore, the control unit 11 may predict the muscle mass of the entire body of the subject based on the muscle mass in each muscle region. For example, by registering the muscle mass of each muscle of the subject, such as the gluteus maximus muscle, gluteus medius muscle, and hamstrings, in association with the muscle mass of the entire body of the subject, the muscle mass of the entire body of the subject can be predicted from the muscle mass of each muscle estimated from the predicted DRR image," paragraph [0105], where "estimating" is interpreted as "predicting").
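The integration described in the quoted passage, summing per-pixel muscle-mass values in a predicted DRR image to obtain a region total and then extrapolating to a whole-body value via a previously registered ratio, can be sketched as follows. All names, values, and the scaling ratio are hypothetical illustrations by the examiner, not figures from Uemura et al.:

```python
def region_muscle_mass(pixel_masses):
    """Integrate per-pixel muscle mass over a predicted DRR region."""
    return sum(pixel_masses)

def whole_body_mass(region_mass, body_to_region_ratio):
    """Extrapolate whole-body muscle mass from one muscle region using a
    registered region-to-body ratio (hypothetical value)."""
    return region_mass * body_to_region_ratio

pixels = [0.1, 0.2, 0.15, 0.05]        # per-pixel masses (arbitrary units)
region = region_muscle_mass(pixels)     # integrated mass for the region
total = whole_body_mass(region, 40.0)   # whole-body estimate, assumed ratio
```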
Claim 18
Regarding claim 18, Uemura et al. teach a non-transitory computer-readable storage medium with an executable program stored thereon, that when executed, instructs a processor to perform the method of claim 16 ("information processing apparatus 10 is an apparatus capable of processing various types of information and transmitting and receiving information, and is, for example, a personal computer, a server computer, a workstation, etc. The information processing apparatus 10 is installed and used in medical institutions, testing institutions, research institutions, etc," paragraph [0045]).
Claim 19
Regarding claim 19, Uemura et al. teach a non-transitory computer-readable storage medium with an executable program stored thereon, that when executed, instructs a processor to perform the method of claim 17 ("information processing apparatus 10 is an apparatus capable of processing various types of information and transmitting and receiving information, and is, for example, a personal computer, a server computer, a workstation, etc. The information processing apparatus 10 is installed and used in medical institutions, testing institutions, research institutions, etc," paragraph [0045]).
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US Patent Publication 2022/0237801 A1 to Kaufman et al. discloses that generating a multiclass image segmentation model(s) can include receiving multiple single-class image datasets, receiving a target mask for each of the single-class image datasets, receiving a condition of an object associated with each of the single-class image datasets, and generating the multiclass image segmentation model(s) based on the single-class image datasets, the target masks, and the identification of the target objects.
US Patent 11,538,163 B1 to Li et al. discloses detecting aortic aneurysms using ensemble based deep learning techniques that utilize numerous computed tomography (CT) scans collected from numerous de-identified patients in a database. The system includes software that automates the analysis of a series of CT scans as input (in DICOM file format) and provides output in two dimensions: (1) ranking CT scans by risks of adverse events from aortic aneurysm, (2) providing aortic aneurysm size estimates.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Heath E. Wells/Examiner, Art Unit 2664
Date: 10 March 2026