DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copies have been filed in parent Application Nos. JP2023-026390 and JP2023-129516, filed on 02/22/2023 and 08/08/2023, respectively.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 02/13/2024, 03/07/2024, and 06/18/2024 have been considered by the examiner.
Status of Claims
Claims 1-28 are currently pending.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“obtaining unit” in claims 1, 7, 18, 21, and 24.
“classification unit” in claims 1, 2, 3, 7, 18, 19, 21, and 22.
“registration unit” in claims 1, 3, 7, 11, 12, 15, 16, 18, 19, 22, 23, and 25.
“association unit” in claims 9 and 10.
“generation unit” in claims 13, 14, and 17.
“display control unit” in claim 17.
“determination unit” in claim 20.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
Claims 1, 7, 18, 21, and 24: “obtaining unit” corresponds to Figure 1, element 102. “An identification information obtaining unit 102 obtains identification information for identifying each of a plurality of bones from the input medical images.” (Application Pub paragraph [0031]).
Claims 1, 2, 3, 7, 18, 19, 21, and 22: “classification unit” corresponds to Figure 1, element 103. “Based on the identification information of the plurality of bones obtained from the identification information obtaining unit 102, a classification unit 103 classifies the plurality of bones into a plurality of groups of bones associated with the motions of a plurality of different parts (body parts) of a subject.” (Application Pub paragraph [0031]).
Claims 1, 3, 7, 11, 12, 15, 16, 18, 19, 22, 23, and 25: “registration unit” corresponds to Figure 1, element 104. “A registration unit 104 performs registration processing between the first medical image and the second medical image for each group of bones, and calculates a displacement field between the images for each group of bones.” (Application Pub paragraph [0031]).
Claims 9 and 10: “association unit” corresponds to Figure 5, element 201. “The association unit 201 performs processing of associating information of a plurality of groups of bones set in one of two input images, that is, a first medical image and a second medical image with the other image.” (Application Pub paragraph [0104]).
Claims 13, 14, and 17: “generation unit” corresponds to Figure 1, element 105. “An image generation unit 105 generates, as a result image, a difference image between the first medical image and a deformed image of the second medical image based on a plurality of obtained displacement fields.” (Application Pub paragraph [0031]).
Claim 17: “display control unit” corresponds to Figure 1, element 106. “A display control unit 106 performs display control for causing the display unit 150 to display the generated difference image or deformed image.” (Application Pub paragraph [0031]).
Claim 20: “determination unit” corresponds to Figure 4, element 401. “The determination unit 401 obtains determination information that determines whether there is a high possibility that the groups of bones of the limbs classified by the classification unit 103 are normally classified.” (Application Pub paragraph [0138]).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., abstract idea - mental process) without significantly more.
Step (1) Are the claims directed to a process, machine, manufacture, or composition of matter;
Step (2A) Prong One: Are the claims directed to a judicially recognized exception, i.e., a law of nature, a natural phenomenon, or an abstract idea;
Prong Two: If the claims are directed to a judicial exception under Prong One, then is the judicial exception integrated into a practical application;
Step (2B) If the claims are directed to a judicial exception and do not integrate the judicial exception, do the claims provide an inventive concept.
Step 1:
Claim 1 recites an apparatus. Therefore, the claim is directed to the statutory category of a machine.
Step (2A):
Prong One:
Claim 1 recites:
“obtain at least one of first identification information that identifies a plurality of first bones depicted in a first image obtained by capturing an image of a subject and second identification information that identifies a plurality of second bones depicted in a second image obtained by capturing an image of the subject;”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of identifying a plurality of bones in a first image and a second image which is practically capable of being performed in the human mind with the assistance of pen and paper.
“classify, using the at least one piece of identification information, the plurality of first bones and the plurality of second bones into a first bone group including a bone that moves in association with a motion of a first part of the subject and a second bone group including a bone that moves in association with a motion of a second part different from the first part;”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of classifying a plurality of bones in a first group and a second group in association with a motion of the subject which is practically capable of being performed in the human mind with the assistance of pen and paper.
Prong Two:
This judicial exception is not integrated into a practical application. The additional elements of “An image processing apparatus” and “an obtaining unit” and “a classification unit” and “a registration unit configured to perform first registration between the plurality of first bones and the plurality of second bones classified into the first bone group and second registration between the plurality of first bones and the plurality of second bones classified into the second bone group.” amount to no more than mere necessary data gathering and applying because, under its broadest reasonable interpretation, it is simply using generic hardware to perform the abstract idea.
Thus, they are insignificant extra-solution activity. Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application and the claims are thus directed to the abstract idea.
Step (2B):
Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitations “An image processing apparatus” and “an obtaining unit” and “a classification unit” and “a registration unit configured to perform first registration between the plurality of first bones and the plurality of second bones classified into the first bone group and second registration between the plurality of first bones and the plurality of second bones classified into the second bone group.” amount to no more than mere data gathering with general purpose hardware and provide no inventive concept. These elements, individually and in combination, are well-understood, routine, conventional activity. As such, the claim is ineligible.
Step 1:
Claims 2-8, 10-17, and 19-25 recite an apparatus. Therefore, the claims are directed to the statutory category of a machine.
Step (2A):
Prong One:
Claims 2-8, 10-17, and 19-25 merely narrow the previously recited abstract idea limitations. For the reasons described above, this judicial exception is not meaningfully integrated into a practical application, nor significantly more than the abstract idea. The claims recite limitations similar to those described for the independent claims above and do not provide anything more than a mental process that is practically capable of being performed in the human mind with the assistance of pen and paper.
Prong Two:
These judicial exceptions are not integrated into a practical application, nor do the claims include additional elements that are sufficient to amount to significantly more. Thus, the claims are ineligible.
Step 1:
Claim 9 recites an apparatus. Therefore, the claim is directed to the statutory category of a machine.
Step (2A):
Prong One:
Claim 9 recites:
“associate regions of the plurality of groups of bones classified in one image of the first image and the second image with regions in the other image,” and “associates regions of a plurality of groups of bones in the one image classified based on the identification information with regions in the other image.”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of associating regions of bones classified in one image with the regions of bones classified in the other image based on the identification information, which is practically capable of being performed in the human mind with the assistance of pen and paper.
Prong Two:
This judicial exception is not integrated into a practical application. The additional element of “association unit” amounts to no more than mere necessary data gathering and applying because, under its broadest reasonable interpretation, it is simply using generic hardware to perform the abstract idea.
Step (2B):
Claim 9 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitation “association unit” amounts to no more than mere general purpose hardware and provides no inventive concept. This element, individually and in combination, is well-understood, routine, conventional activity. As such, the claim is ineligible.
Step 1:
Claim 18 recites an apparatus. Therefore, the claim is directed to the statutory category of a machine.
Step (2A):
Prong One:
Claim 18 recites:
“obtain identification information that identifies a plurality of bones depicted in each of a first image obtained by capturing an image of a subject and a second image obtained by capturing an image of the subject;”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of identifying a plurality of bones in a first image and a second image which is practically capable of being performed in the human mind with the assistance of pen and paper.
“classify, using the identification information, the plurality of bones into a first bone group including a bone that moves in association with a motion of a first part of the subject;”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of classifying a plurality of bones into a first group in association with a motion of the subject which is practically capable of being performed in the human mind with the assistance of pen and paper.
Prong Two:
This judicial exception is not integrated into a practical application. The additional elements of “An image processing apparatus” and “an obtaining unit” and “a classification unit” and “a registration unit configured to perform first registration between the plurality of bones depicted in the first image and the plurality of bones depicted in the second image, which are classified into the first bone group.” amount to no more than mere necessary data gathering and applying because, under its broadest reasonable interpretation, it is simply using generic hardware to perform the abstract idea.
Thus, they are insignificant extra-solution activity. Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application and the claims are thus directed to the abstract idea.
Step (2B):
Claim 18 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitations “An image processing apparatus” and “an obtaining unit” and “a classification unit” and “a registration unit configured to perform first registration between the plurality of bones depicted in the first image and the plurality of bones depicted in the second image, which are classified into the first bone group.” amount to no more than mere data gathering with general purpose hardware and provide no inventive concept. These elements, individually and in combination, are well-understood, routine, conventional activity. As such, the claim is ineligible.
Step 1:
Claim 26 recites a method. Therefore, the claim is directed to the statutory category of a process.
Step (2A):
Prong One:
Claim 26 recites:
“obtaining at least one of first identification information that identifies a plurality of first bones depicted in a first image obtained by capturing an image of a subject and second identification information that identifies a plurality of second bones depicted in a second image obtained by capturing an image of the subject;”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of identifying a plurality of bones in a first image and a second image which is practically capable of being performed in the human mind with the assistance of pen and paper.
“classifying, using the at least one piece of identification information, the plurality of first bones and the plurality of second bones into a first bone group including a bone that moves in association with a motion of a first part of the subject and a second bone group including a bone that moves in association with a motion of a second part different from the first part;”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of classifying a plurality of bones in a first group and a second group in association with a motion of the subject which is practically capable of being performed in the human mind with the assistance of pen and paper.
Prong Two:
This judicial exception is not integrated into a practical application. The additional element of “performing first registration between the plurality of first bones and the plurality of second bones classified into the first bone group and second registration between the plurality of first bones and the plurality of second bones classified into the second bone group.” amounts to no more than mere necessary data gathering and applying because, under its broadest reasonable interpretation, it is simply using generic hardware to perform the abstract idea.
Thus, they are insignificant extra-solution activity. Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application and the claims are thus directed to the abstract idea.
Step (2B):
Claim 26 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitations “performing first registration between the plurality of first bones and the plurality of second bones classified into the first bone group and second registration between the plurality of first bones and the plurality of second bones classified into the second bone group.” amount to no more than mere data gathering with general purpose hardware and provide no inventive concept. These elements, individually and in combination, are well-understood, routine, conventional activity. As such, the claim is ineligible.
Step 1:
Claim 27 recites a method. Therefore, the claim is directed to the statutory category of a process.
Step (2A):
Prong One:
Claim 27 recites:
“obtaining identification information that identifies a plurality of bones depicted in each of a first image obtained by capturing an image of a subject and a second image obtained by capturing an image of the subject;”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of identifying a plurality of bones in a first image and a second image which is practically capable of being performed in the human mind with the assistance of pen and paper.
“classifying, using the identification information, the plurality of bones into a first bone group including a bone that moves in association with a motion of a first part of the subject;”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of classifying a plurality of bones into a first group in association with a motion of the subject which is practically capable of being performed in the human mind with the assistance of pen and paper.
Prong Two:
This judicial exception is not integrated into a practical application. The additional element of “performing first registration between the plurality of bones depicted in the first image and the plurality of bones depicted in the second image, which are classified into the first bone group.” amounts to no more than mere necessary data gathering and applying because, under its broadest reasonable interpretation, it is simply using generic hardware to perform the abstract idea.
Thus, they are insignificant extra-solution activity. Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application and the claims are thus directed to the abstract idea.
Step (2B):
Claim 27 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitations “performing first registration between the plurality of bones depicted in the first image and the plurality of bones depicted in the second image, which are classified into the first bone group.” amount to no more than mere data gathering with general purpose hardware and provide no inventive concept. These elements, individually and in combination, are well-understood, routine, conventional activity. As such, the claim is ineligible.
Claim 28 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 28 recites a “storage medium storing a program”. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim is directed to a signal per se. Although the specification states “Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’)” (Paragraph [0209]), the language is non-assertive. Therefore, the claim language needs to be amended to state that the storage medium is non-transitory.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-28 are rejected under 35 U.S.C. 103 as being unpatentable over Karino (US 2017/0091919 A1) in view of Pradhan et al. ("Classification of human bones using deep convolutional neural network." IOP conference series: materials science and engineering. Vol. 594. No. 1. IOP Publishing, 2019.) (hereinafter, Pradhan).
Regarding claim 1, Karino discloses an image processing apparatus comprising (Paragraph [0003] “The present invention relates to an image registration device, method, and non-transitory computer readable recording medium storing a program for performing registration between two images obtained by imaging a subject, which is configured to include parts of a plurality of bones, at different points in time.”):
an obtaining unit configured to obtain at least one of first identification information that identifies a plurality of first bones depicted in a first image obtained by capturing an image of a subject and second identification information that identifies a plurality of second bones depicted in a second image obtained by capturing an image of the subject (Paragraph [0045] “two three-dimensional images 6 are acquired by imaging the vertebral column of a patient at different points in time, and a difference image between the two three-dimensional images 6 is generated. As the two three-dimensional images 6 captured at different points in time, the three-dimensional image 6 captured in the past and the current three-dimensional image 6 captured this time may be acquired, or the two three-dimensional images 6 captured in the past may be acquired. In the present embodiment, it is assumed that the past three-dimensional image 6 and the current three-dimensional image 6 are acquired, and the past three-dimensional image 6 is referred to as a first three-dimensional image (corresponding to a first image of the invention) and the current three-dimensional image 6 is referred to as a second three-dimensional image (corresponding to a second image of the invention).”; Paragraph [0051] “The identification unit 11 performs processing for identifying a plurality of vertebrae that from the vertebral column included in each of the first three-dimensional image and the second three-dimensional image...”);
Figure 2: [Karino, FIG. 2, media_image1.png, reproduced in greyscale]
; and
a registration unit configured to perform first registration between the plurality of first bones and the plurality of second bones [classified into the first bone group] and second registration between the plurality of first bones and the plurality of second bones [classified into the second bone group] (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC).”; Paragraph [0053] “The registration processing unit 13 performs registration processing on images of the vertebral regions, which match each other as shown in FIG. 2, for each combination of vertebral regions. First, the registration processing unit 13 sets a landmark for each vertebral region included in each of the first and second three-dimensional images”).
However, Karino fails to teach a classification unit configured to classify, using the at least one piece of identification information, the plurality of first bones and the plurality of second bones into a first bone group including a bone that moves in association with a motion of a first part of the subject and a second bone group including a bone that moves in association with a motion of a second part different from the first part.
Pradhan teaches a classification unit configured to classify, using the at least one piece of identification information (Introduction [page 1 paragraph 2] “Classification is used to predict the category of data we provide [3]. This classification can be done on binary input meaning 2 categories as well as multiple categories.”), the plurality of first bones and the plurality of second bones (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”) into a first bone group including a bone that moves in association with a motion of a first part of the subject and a second bone group including a bone that moves in association with a motion of a second part different from the first part (Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include a classification unit configured to classify, using the at least one piece of identification information, the plurality of first bones and the plurality of second bones into a first bone group including a bone that moves in association with a motion of a first part of the subject and a second bone group including a bone that moves in association with a motion of a second part different from the first part, as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones, as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 1.
Regarding claim 2, which incorporates claim 1, Karino discloses wherein using the at least one piece of identification information, the classification unit identifies a bone commonly depicted between the first image and the second image (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC).”).
However, Karino fails to teach a common group in which the commonly depicted bone exists is classified as the first bone group and the second bone group.
Pradhan teaches a common group in which the commonly depicted bone exists is classified as the first bone group and the second bone group (Introduction [page 1 paragraph 2] “Classification is used to predict the category of data we provide [3]. This classification can be done on binary input meaning 2 categories as well as multiple categories.”; Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include a common group in which the commonly depicted bone exists is classified as the first bone group and the second bone group, as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 2.
Regarding claim 3, which incorporates claim 1, Karino discloses wherein the registration unit performs third registration between the plurality of first bones and the plurality of second bones, which are [classified into the third bone group] (Paragraph [0034] “FIG. 4 is a diagram illustrating a method of generating an image of each vertebral region and performing registration”; Paragraph [0056] “Then, as shown in FIG. 4, for each of the first three-dimensional image and the second three-dimensional image, a three-dimensional image of each vertebral region is generated by extracting each vertebral region and forming one three-dimensional image…a three-dimensional image of each vertebral region generated from the second three-dimensional image that is a current three-dimensional image is set as a fixed image, and a three-dimensional image of each vertebral region generated from the first three-dimensional image that is a three-dimensional image captured in the past is moved and deformed to perform registration.”).
However, Karino fails to teach the classification unit classifies the plurality of first bones and the plurality of second bones into a third bone group including a bone that moves in association with a motion of a third part different from the first part and the second part.
Pradhan teaches the classification unit classifies the plurality of first bones and the plurality of second bones into a third bone group including a bone that moves in association with a motion of a third part different from the first part and the second part (Introduction [page 1 paragraph 2] “Classification is used to predict the category of data we provide [3]. This classification can be done on binary input meaning 2 categories as well as multiple categories.”; Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include the classification unit classifies the plurality of first bones and the plurality of second bones into a third bone group including a bone that moves in association with a motion of a third part different from the first part and the second part, as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 3.
Regarding claim 4, which incorporates claim 3, Karino discloses wherein the second part and the third part physically continue to the first part, and the second part and the third part are parts that can move independently of each other (Paragraph [0034] “FIG. 4 is a diagram illustrating a method of generating an image of each vertebral region and performing registration”; Paragraph [0056] “Then, as shown in FIG. 4, for each of the first three-dimensional image and the second three-dimensional image, a three-dimensional image of each vertebral region is generated by extracting each vertebral region and forming one three-dimensional image…a three-dimensional image of each vertebral region generated from the second three-dimensional image that is a current three-dimensional image is set as a fixed image, and a three-dimensional image of each vertebral region generated from the first three-dimensional image that is a three-dimensional image captured in the past is moved and deformed to perform registration.”, Figure 4).
Regarding claim 5, which incorporates claim 3, Karino discloses wherein the first part is a trunk of the subject, the second part is one of limbs of the subject, and the third part is one of the limbs of the subject different from the second part (Paragraph [0046] “the three-dimensional image 6 is acquired by imaging the vertebral column of the patient in the present embodiment, imaging targets (subjects) are not limited to the vertebral column, and any imaging target configured to include parts of a plurality of bones may be imaged. For example, ribs configured to include parts of a plurality of left and right bones, bones of the hand configured to include distal phalanx, middle phalanx, proximal phalanx, and metacarpal, bones of the arm configured to include humerus, ulna, and radius, and bones of the leg configured to include femur, patella, tibia, and fibula may also be imaged.”).
Regarding claim 6, which incorporates claim 3, Karino discloses wherein the [first bone group] includes a rib of the subject, the [second bone group] includes one of left and right scapula bones of the subject, and the [third bone group] includes the other of the left and right scapula bones of the subject (Paragraph [0046] “Although the three-dimensional image 6 is acquired by imaging the vertebral column of the patient in the present embodiment, imaging targets (subjects) are not limited to the vertebral column, and any imaging target configured to include parts of a plurality of bones may be imaged… ribs configured to include parts of a plurality of left and right bones, bones of the hand configured to include distal phalanx, middle phalanx, proximal phalanx, and metacarpal, bones of the arm configured to include humerus, ulna, and radius, and bones of the leg configured to include femur, patella, tibia, and fibula may also be imaged.”).
However, Karino fails to teach the first bone group, second bone group, and third bone group.
Pradhan teaches the first bone group, second bone group, and third bone group (Introduction [page 1 paragraph 2] “Classification is used to predict the category of data we provide [3]. This classification can be done on binary input meaning 2 categories as well as multiple categories.”; Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include the first bone group, second bone group, and third bone group, as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 6.
Regarding claim 7, which incorporates claim 1, Karino discloses wherein the obtaining unit obtains identification information of a plurality of bones identified from one image of the first image and the second image (Paragraph [0049] “As the three-dimensional image 6, volume data configured to include tomographic images, such as axial tomographic images, sagittal tomographic images, and coronal tomographic images, may be acquired, or a single tomographic image may be acquired.”).
However, Karino fails to teach based on the identification information, the classification unit classifies the plurality of bones identified from the one image, for each part of the subject making a different motion, into a group of bones that move in association with the different motion.
Pradhan teaches based on the identification information, the classification unit classifies the plurality of bones identified from the one image (Introduction [page 1 paragraph 2] “Classification is used to predict the category of data we provide [3]. This classification can be done on binary input meaning 2 categories as well as multiple categories.”; Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”), for each part of the subject making a different motion, into a group of bones that move in association with the different motion (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include based on the identification information, the classification unit classifies the plurality of bones identified from the one image, for each part of the subject making a different motion, into a group of bones that move in association with the different motion, as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 7.
Regarding claim 8, which incorporates claim 7, Karino discloses wherein the registration unit obtains bone regions from the first image and the second image based on image processing of extracting a region, and performs registration such that the bone regions match between the first image and the second image (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC)…FIG. 2 is a diagram in which vertebral regions, which match each other between the first three-dimensional image and the second three-dimensional image, are connected to each other by arrows.”).
Regarding claim 9, which incorporates claim 7, Karino discloses an association unit configured to associate regions of the plurality of [groups of bones classified in one image] of the first image and the second image with regions in the other image, wherein the association unit associates regions of a [plurality of groups of bones in the one image classified based on the identification information] with regions in the other image (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC)… FIG. 2 is a diagram in which vertebral regions, which match each other between the first three-dimensional image and the second three-dimensional image, are connected to each other by arrows.”).
However, Karino fails to teach the plurality of groups of bones in the one image classified based on the identification information.
Pradhan teaches the plurality of groups of bones in the one image classified based on the identification information (Introduction [page 1 paragraph 2] “Classification is used to predict the category of data we provide [3]. This classification can be done on binary input meaning 2 categories as well as multiple categories.”; Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include the plurality of groups of bones in the one image classified based on the identification information, as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 9.
Regarding claim 10, which incorporates claim 9, Karino discloses wherein the association unit generates a deformed image by deforming the other image using a displacement field generated in the registration (Paragraph [0056] “…a three-dimensional image of each vertebral region generated from the second three-dimensional image that is a current three-dimensional image is set as a fixed image, and a three-dimensional image of each vertebral region generated from the first three-dimensional image that is a three-dimensional image captured in the past is moved and deformed to perform registration.”) and
associates each of the regions of the [plurality of groups of bones] in the one image with a corresponding region in the deformed image (Paragraph [0056] “Then, as shown in FIG. 4, for each of the first three-dimensional image and the second three-dimensional image, a three-dimensional image of each vertebral region is generated by extracting each vertebral region and forming one three-dimensional image. Then, registration processing between the images of the respective matched vertebral regions is performed.”).
However, Karino fails to teach a plurality of groups of bones.
Pradhan teaches a plurality of groups of bones (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include a plurality of groups of bones, as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 10.
Regarding claim 11, which incorporates claim 1, Karino discloses wherein the registration unit performs the first registration based on a first displacement field for making an image of a bone region belonging to the [first bone group] in the first image match an image of a bone region belonging to the [first bone group] in the second image (Paragraph [0056] “Then, as shown in FIG. 4, for each of the first three-dimensional image and the second three-dimensional image, a three-dimensional image of each vertebral region is generated by extracting each vertebral region and forming one three-dimensional image. Then, registration processing between the images of the respective matched vertebral regions is performed. In the present embodiment, a three-dimensional image of each vertebral region generated from the second three-dimensional image that is a current three-dimensional image is set as a fixed image, and a three-dimensional image of each vertebral region generated from the first three-dimensional image that is a three-dimensional image captured in the past is moved and deformed to perform registration.”).
However, Karino fails to teach the first bone group.
Pradhan teaches the first bone group (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include the first bone group, as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 11.
Regarding claim 12, which incorporates claim 11, Karino discloses wherein the registration unit performs the second registration based on a second displacement field for making an image of a bone region belonging to the [second bone group] in the first image match an image of a bone region belonging to the [second bone group] in the second image (Paragraph [0060] “That is, three registration processes of the registration processing using the landmarks of three points, the rigid registration processing, and the non-rigid registration processing are performed for the three-dimensional image of the vertebral region generated from the first three-dimensional image and the three-dimensional image of the vertebral region generated from the second three-dimensional image.”).
However, Karino fails to teach the second bone group.
Pradhan teaches the second bone group (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include the second bone group, as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 12.
Regarding claim 13, which incorporates claim 12, Karino discloses an image generation unit (Paragraph [0017] “The image registration device of the invention described above can further comprise a difference image generation unit that generates a difference image between the first and second images having been subjected to the registration processing.”),
wherein the image generation unit sets a region of the displacement field for the first image, stores a displacement amount of the first displacement field in a region of the [first bone group] in the set region (Paragraph [0057] “First, registration is performed using the landmarks of three points set in each of the three-dimensional image of the vertebral region of the first three-dimensional image and the three-dimensional image of the vertebral region of the second three-dimensional image corresponding to the vertebral region. Specifically, the registration is performed by moving the three-dimensional image of the vertebral region so that the distance between the corresponding landmarks is the shortest.”),
stores a displacement amount of the second displacement field in a region of the [second bone group] in the set region (Paragraph [0060] “That is, three registration processes of the registration processing using the landmarks of three points, the rigid registration processing, and the non-rigid registration processing are performed for the three-dimensional image of the vertebral region generated from the first three-dimensional image and the three-dimensional image of the vertebral region generated from the second three-dimensional image. Although the three registration processes are performed as described above in the present embodiment, only the rigid registration processing and the non-rigid registration processing may be performed.”), and
generates an integrated displacement field in which a displacement amount of a different displacement field is stored at a different position in the set region (Paragraph [0061] “In case where producing a difference image, following processing is further processed. Then, the registration processing unit 13 generates a composite image by combining the three-dimensional images of the respective vertebral regions that have been subjected to the three registration processes as described above. Specifically, the composite image is generated by setting an initial value image, which is a three-dimensional image having the same size as the second three-dimensional image and in which all pixel values are zero, and combining the three-dimensional image of each vertebral region of the first three-dimensional image sequentially on the initial value image. The composite image generated by the registration processing unit 13 is output to the difference image generation unit 14.”).
However, Karino fails to teach the first bone group and the second bone group.
Pradhan teaches the first bone group and the second bone group (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include the first bone group and the second bone group, as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 13.
Regarding claim 14, which incorporates claim 13, Karino discloses wherein the image generation unit generates a deformed image by, using the integrated displacement field, deforming one medical image of the first image and the second image such that the one medical image matches the other medical image (Paragraph [0061] “In case where producing a difference image, following processing is further processed. Then, the registration processing unit 13 generates a composite image by combining the three-dimensional images of the respective vertebral regions that have been subjected to the three registration processes as described above. Specifically, the composite image is generated by setting an initial value image, which is a three-dimensional image having the same size as the second three-dimensional image and in which all pixel values are zero, and combining the three-dimensional image of each vertebral region of the first three-dimensional image sequentially on the initial value image. The composite image generated by the registration processing unit 13 is output to the difference image generation unit 14.”), and
generates a difference image by subtracting pixel information of the deformed image from pixel information of the other medical image (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC).”).
Regarding claim 15, which incorporates claim 12, Karino discloses wherein the registration unit associates bone regions identified as identical bone regions between the plurality of first bones and the plurality of second bones based on the first identification information and the second identification information (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC). However, other calculation methods may be used without being limited to this. FIG. 2 is a diagram in which vertebral regions, which match each other between the first three-dimensional image and the second three-dimensional image, are connected to each other by arrows.”).
Regarding claim 16, which incorporates claim 15, Karino discloses wherein the registration unit performs rigid body registration between the bone region in the first image and the bone region in the second image, which are associated, using the first displacement field and the second displacement field as initial displacement fields (Paragraph [0060] “That is, three registration processes of the registration processing using the landmarks of three points, the rigid registration processing, and the non-rigid registration processing are performed for the three-dimensional image of the vertebral region generated from the first three-dimensional image and the three-dimensional image of the vertebral region generated from the second three-dimensional image.”), and
performs deformation registration capable of setting a degree of freedom of deformation in the bone region between the bone region in the first image and the bone region in the second image, which have undergone the rigid body registration (Paragraph [0061] “processing is further processed. Then, the registration processing unit 13 generates a composite image by combining the three-dimensional images of the respective vertebral regions that have been subjected to the three registration processes as described above. Specifically, the composite image is generated by setting an initial value image, which is a three-dimensional image having the same size as the second three-dimensional image and in which all pixel values are zero, and combining the three-dimensional image of each vertebral region of the first three-dimensional image sequentially on the initial value image. The composite image generated by the registration processing unit 13 is output to the difference image generation unit 14.”).
Regarding claim 17, which incorporates claim 14, Karino discloses a display control unit, wherein the display control unit performs display control for causing a display unit to display at least one image of the deformed image and the difference image generated by the image generation unit (Paragraph [0063] “The display control unit 15 generates a superimposed image by superimposing the difference image generated by the difference image generation unit 14 on the second three-dimensional image, and displays the superimposed image on the display device 3… the display control unit 15 generates a color image by assigning predetermined colors for the difference image and superimposes the color image on the second three-dimensional image, which is a monochrome image, thereby generating a superimposed image. FIG. 5 is a diagram showing an example of a superimposed image. A portion indicated by an arrow in FIG. 5 is an image of bone metastasis appearing on the difference image.”).
Regarding claim 18, Karino discloses an image processing apparatus comprising (Paragraph [0003] The present invention relates to an image registration device, method, and non-transitory computer readable recording medium storing a program for performing registration between two images obtained by imaging a subject, which is configured to include parts of a plurality of bones, at different points in time.):
an obtaining unit configured to obtain identification information that identifies a plurality of bones depicted in each of a first image obtained by capturing an image of a subject and a second image obtained by capturing an image of the subject (Paragraph [0045] “two three-dimensional images 6 are acquired by imaging the vertebral column of a patient at different points in time, and a difference image between the two three-dimensional images 6 is generated. As the two three-dimensional images 6 captured at different points in time, the three-dimensional image 6 captured in the past and the current three-dimensional image 6 captured this time may be acquired, or the two three-dimensional images 6 captured in the past may be acquired. In the present embodiment, it is assumed that the past three-dimensional image 6 and the current three-dimensional image 6 are acquired, and the past three-dimensional image 6 is referred to as a first three-dimensional image (corresponding to a first image of the invention) and the current three-dimensional image 6 is referred to as a second three-dimensional image (corresponding to a second image of the invention).”; Paragraph [0051] “The identification unit 11 performs processing for identifying a plurality of vertebrae that from the vertebral column included in each of the first three-dimensional image and the second three-dimensional image...”; Figure 2); and
a registration unit configured to perform first registration between the plurality of bones depicted in the first image and the plurality of bones depicted in the second image, which are [classified into the first bone group] (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC).”; Paragraph [0053] “The registration processing unit 13 performs registration processing on images of the vertebral regions, which match each other as shown in FIG. 2, for each combination of vertebral regions. First, the registration processing unit 13 sets a landmark for each vertebral region included in each of the first and second three-dimensional images”).
However, Karino fails to teach a classification unit configured to classify, using the identification information, the plurality of bones into a first bone group including a bone that moves in association with a motion of a first part of the subject.
Pradhan teaches a classification unit configured to classify (Introduction [page 1 paragraph 2] “Classification is used to predict the category of data we provide [3]. This classification can be done on binary input meaning 2 categories as well as multiple categories.”), using the identification information (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”), the plurality of bones into a first bone group including a bone that moves in association with a motion of a first part of the subject (Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino to include a classification unit configured to classify, using the identification information, the plurality of bones into a first bone group including a bone that moves in association with a motion of a first part of the subject, as taught by Pradhan. The motivation for doing so would have been to correctly identify human bones, as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 18.
Regarding claim 19, which incorporates claim 18, Karino discloses that the registration unit performs second registration between the plurality of bones depicted in the first image and the plurality of bones depicted in the second image, which are [classified into the second bone group] (Paragraph [0060] “That is, three registration processes of the registration processing using the landmarks of three points, the rigid registration processing, and the non-rigid registration processing are performed for the three-dimensional image of the vertebral region generated from the first three-dimensional image and the three-dimensional image of the vertebral region generated from the second three-dimensional image.”).
However, Karino fails to teach the classification unit classifies, using the identification information, the plurality of bones into a second bone group including a bone that moves in association with a motion of a second part different from the first part.
Pradhan teaches the classification unit classifies, using the identification information (Introduction [page 1 paragraph 2] “Classification is used to predict the category of data we provide [3]. This classification can be done on binary input meaning 2 categories as well as multiple categories.”), the plurality of bones (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”) into a second bone group including a bone that moves in association with a motion of a second part different from the first part (Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino such that the classification unit classifies, using the identification information, the plurality of bones into a second bone group including a bone that moves in association with a motion of a second part different from the first part, as taught by Pradhan. The motivation for doing so would have been to correctly identify human bones, as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 19.
Regarding claim 22, which incorporates claim 1, Karino discloses wherein based on [information obtained from the classification unit] (Paragraph [0051] “The identification unit 11 performs processing for identifying a plurality of vertebrae that from the vertebral column included in each of the first three-dimensional image and the second three-dimensional image...”), the registration unit (Paragraph [0053] “The registration processing unit 13 performs registration processing on images of the vertebral regions, which match each other as shown in FIG. 2, for each combination of vertebral regions. First, the registration processing unit 13 sets a landmark for each vertebral region included in each of the first and second three-dimensional images”)
obtains first identification information of a first partial bone, which identifies the first partial bone, and first identification information of a second partial bone, which identifies the second partial bone, the first partial bone and the second partial bone being included in the plurality of first bones and [classified into the second bone group] (Paragraph [0045] “two three-dimensional images 6 are acquired by imaging the vertebral column of a patient at different points in time, and a difference image between the two three-dimensional images 6 is generated. As the two three-dimensional images 6 captured at different points in time, the three-dimensional image 6 captured in the past and the current three-dimensional image 6 captured this time may be acquired, or the two three-dimensional images 6 captured in the past may be acquired. In the present embodiment, it is assumed that the past three-dimensional image 6 and the current three-dimensional image 6 are acquired, and the past three-dimensional image 6 is referred to as a first three-dimensional image (corresponding to a first image of the invention) and the current three-dimensional image 6 is referred to as a second three-dimensional image (corresponding to a second image of the invention).”; Paragraph [0051] “The identification unit 11 performs processing for identifying a plurality of vertebrae that from the vertebral column included in each of the first three-dimensional image and the second three-dimensional image...”; Figure 2), and
obtains second identification information of a first partial bone, which identifies the first partial bone, and second identification information of a second partial bone, which identifies the second partial bone, the first partial bone and the second partial bone being included in the plurality of second bones and [classified into the second bone group] (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC). However, other calculation methods may be used without being limited to this”).
However, Karino fails to teach information obtained from the classification unit and classified into the second bone group.
Pradhan teaches information obtained from the classification unit and classified into the second bone group (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino to include information obtained from the classification unit and classified into the second bone group, as taught by Pradhan. The motivation for doing so would have been to correctly identify human bones, as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 22.
Regarding claim 23, which incorporates claim 22, Karino discloses wherein between the first image and the second image, the registration unit performs initial registration of the first partial bone based on the first identification information of the first partial bone and the second identification information of the first partial bone, and performs initial registration of the second partial bone based on the first identification information of the second partial bone and the second identification information of the second partial bone (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC). However, other calculation methods may be used without being limited to this. FIG. 2 is a diagram in which vertebral regions, which match each other between the first three-dimensional image and the second three-dimensional image, are connected to each other by arrows.”; Paragraph [0053] “The registration processing unit 13 performs registration processing on images of the vertebral regions, which match each other as shown in FIG. 2, for each combination of vertebral regions.
First, the registration processing unit 13 sets a landmark for each vertebral region included in each of the first and second three-dimensional images”).
Regarding claim 24, which incorporates claim 23, Karino discloses a corresponding point group obtaining unit configured to obtain a corresponding point group concerning the [second bone group] between the first image and the second image based on initial transformation information obtained by performing the initial registration (Paragraph [0058] “Then, rigid registration processing is performed based on the three-dimensional image of the vertebral region on which registration processing has been performed by using the three landmarks, and the three-dimensional image of the vertebral region of the second three-dimensional image corresponding to the vertebral region. As the rigid registration processing, for example, it is possible to use processing using an iterative closest point (ICP) method.”),
wherein the corresponding point group obtaining unit obtains a corresponding point group concerning the first partial bone and a corresponding point group concerning the second partial bone between the first image and the second image (Paragraph [0057] “First, registration is performed using the landmarks of three points set in each of the three-dimensional image of the vertebral region of the first three-dimensional image and the three-dimensional image of the vertebral region of the second three-dimensional image corresponding to the vertebral region. Specifically, the registration is performed by moving the three-dimensional image of the vertebral region so that the distance between the corresponding landmarks is the shortest.”).
However, Karino fails to teach the second bone group.
Pradhan teaches the second bone group (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino to include the second bone group taught by Pradhan. The motivation for doing so would have been to correctly identify human bones, as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 24.
Regarding claim 25, which incorporates claim 24, Karino discloses wherein the registration unit performs deformation processing using the corresponding point group concerning the first partial bone and the corresponding point group concerning the second partial bone as constraint conditions (Paragraph [0059] “Then, non-rigid registration processing is performed based on the image of the vertebral region on which the rigid registration processing has been performed, and the three-dimensional image of the vertebral region of the second three-dimensional image corresponding to the vertebral region. As the non-rigid registration processing, for example, it is possible to use processing using a free-form deformation (FFD) method or processing using a thin-plate spline (TPS) method. However, other known methods may be used.”), thereby obtaining integrated transformation information in which initial transformation information generated by initial registration of the first partial bone and initial transformation information generated by initial registration of the second partial bone are put together (Paragraph [0060] “…the three registration processes are performed as described above in the present embodiment, only the rigid registration processing and the non-rigid registration processing may be performed. It is possible to perform the registration of the whole subject with high accuracy by performing above described registration processing including matching processing.”), and
performs the second registration based on the integrated transformation information (Paragraph [0061] “Then, the registration processing unit 13 generates a composite image by combining the three-dimensional images of the respective vertebral regions that have been subjected to the three registration processes as described above. Specifically, the composite image is generated by setting an initial value image, which is a three-dimensional image having the same size as the second three-dimensional image and in which all pixel values are zero, and combining the three-dimensional image of each vertebral region of the first three-dimensional image sequentially on the initial value image. The composite image generated by the registration processing unit 13 is output to the difference image generation unit 14.”).
Regarding claim 26, Karino discloses an image processing method comprising (Paragraph [0003] The present invention relates to an image registration device, method, and non-transitory computer readable recording medium storing a program for performing registration between two images obtained by imaging a subject, which is configured to include parts of a plurality of bones, at different points in time.):
obtaining at least one of first identification information that identifies a plurality of first bones depicted in a first image obtained by capturing an image of a subject and second identification information that identifies a plurality of second bones depicted in a second image obtained by capturing an image of the subject (Paragraph [0045] “two three-dimensional images 6 are acquired by imaging the vertebral column of a patient at different points in time, and a difference image between the two three-dimensional images 6 is generated. As the two three-dimensional images 6 captured at different points in time, the three-dimensional image 6 captured in the past and the current three-dimensional image 6 captured this time may be acquired, or the two three-dimensional images 6 captured in the past may be acquired. In the present embodiment, it is assumed that the past three-dimensional image 6 and the current three-dimensional image 6 are acquired, and the past three-dimensional image 6 is referred to as a first three-dimensional image (corresponding to a first image of the invention) and the current three-dimensional image 6 is referred to as a second three-dimensional image (corresponding to a second image of the invention).”; Paragraph [0051] “The identification unit 11 performs processing for identifying a plurality of vertebrae that from the vertebral column included in each of the first three-dimensional image and the second three-dimensional image...”; Figure 2);
performing first registration between the plurality of first bones and the plurality of second bones [classified into the first bone group] and second registration between the plurality of first bones and the plurality of second bones [classified into the second bone group] (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC).”; Paragraph [0053] “The registration processing unit 13 performs registration processing on images of the vertebral regions, which match each other as shown in FIG. 2, for each combination of vertebral regions. First, the registration processing unit 13 sets a landmark for each vertebral region included in each of the first and second three-dimensional images”).
However, Karino fails to teach classifying, using the at least one piece of identification information, the plurality of first bones and the plurality of second bones into a first bone group including a bone that moves in association with a motion of a first part of the subject and a second bone group including a bone that moves in association with a motion of a second part different from the first part.
Pradhan teaches classifying, using the at least one piece of identification information (Introduction [page 1 paragraph 2] “Classification is used to predict the category of data we provide [3]. This classification can be done on binary input meaning 2 categories as well as multiple categories.”), the plurality of first bones and the plurality of second bones (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”) into a first bone group including a bone that moves in association with a motion of a first part of the subject and a second bone group including a bone that moves in association with a motion of a second part different from the first part (Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino to include classifying, using the at least one piece of identification information, the plurality of first bones and the plurality of second bones into a first bone group including a bone that moves in association with a motion of a first part of the subject and a second bone group including a bone that moves in association with a motion of a second part different from the first part, as taught by Pradhan. The motivation for doing so would have been to correctly identify human bones, as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 26.
Regarding claim 27, Karino discloses an image processing method comprising (Paragraph [0003] The present invention relates to an image registration device, method, and non-transitory computer readable recording medium storing a program for performing registration between two images obtained by imaging a subject, which is configured to include parts of a plurality of bones, at different points in time.):
obtaining identification information that identifies a plurality of bones depicted in each of a first image obtained by capturing an image of a subject and a second image obtained by capturing an image of the subject (Paragraph [0045] “two three-dimensional images 6 are acquired by imaging the vertebral column of a patient at different points in time, and a difference image between the two three-dimensional images 6 is generated. As the two three-dimensional images 6 captured at different points in time, the three-dimensional image 6 captured in the past and the current three-dimensional image 6 captured this time may be acquired, or the two three-dimensional images 6 captured in the past may be acquired. In the present embodiment, it is assumed that the past three-dimensional image 6 and the current three-dimensional image 6 are acquired, and the past three-dimensional image 6 is referred to as a first three-dimensional image (corresponding to a first image of the invention) and the current three-dimensional image 6 is referred to as a second three-dimensional image (corresponding to a second image of the invention).”; Paragraph [0051] “The identification unit 11 performs processing for identifying a plurality of vertebrae that from the vertebral column included in each of the first three-dimensional image and the second three-dimensional image...”; Figure 2);
performing first registration between the plurality of bones depicted in the first image and the plurality of bones depicted in the second image, which are [classified into the first bone group] (Paragraph [0052] “The matching unit 12 matches each vertebral region included in the first three-dimensional image with each vertebral region included in the second three-dimensional image. Specifically, the matching unit 12 calculates a correlation value for a combination of all vertebral regions between the first three-dimensional image and the second three-dimensional image using the pixel value (for example, a CT value) of each vertebral region. Then, in a case where the correlation value is equal to or greater than a threshold value set in advance, the combination of vertebral regions having the correlation value is determined to be a combination for which matching is to be performed. As a method of calculating a correlation value, for example, a correlation value may be calculated using zero-mean normalized cross-correlation (ZNCC).”; Paragraph [0053] “The registration processing unit 13 performs registration processing on images of the vertebral regions, which match each other as shown in FIG. 2, for each combination of vertebral regions. First, the registration processing unit 13 sets a landmark for each vertebral region included in each of the first and second three-dimensional images”).
However, Karino fails to teach classifying, using the identification information, the plurality of bones into a first bone group including a bone that moves in association with a motion of a first part of the subject.
Pradhan teaches classifying, using the identification information (Introduction [page 1 paragraph 2] “Classification is used to predict the category of data we provide [3]. This classification can be done on binary input meaning 2 categories as well as multiple categories.”), the plurality of bones (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”) into a first bone group including a bone that moves in association with a motion of a first part of the subject (Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino to include classifying, using the identification information, the plurality of bones into a first bone group including a bone that moves in association with a motion of a first part of the subject, as taught by Pradhan. The motivation for doing so would have been to correctly identify human bones, as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Pradhan with Karino to obtain the invention specified in claim 27.
Regarding claim 28, which incorporates claim 27, Karino discloses a storage medium storing a program configured to cause a computer to execute each step of an image processing method (Paragraph [0029] “A non-transitory computer readable recording medium storing an image registration program of the invention causes a computer to function as: an image acquisition unit that acquires first and second images by imaging a subject configured to include parts of a plurality of bones at different points in time; an identification unit that identifies the parts of the plurality of bones included in each of the first and second images; a matching unit that matches a part of each bone included in the first image with a part of each bone included in the second image; and a registration processing unit that performs registration processing between images of the matched parts of the bones.”).
Claims 20 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Karino (US 2017/0091919 A1) in view of Pradhan et al. ("Classification of human bones using deep convolutional neural network." IOP conference series: materials science and engineering. Vol. 594. No. 1. IOP Publishing, 2019.) (hereinafter, Pradhan), and further in view of Wagner (DE 102008/045275 A1).
Regarding claim 20, which incorporates claim 1, Karino discloses a determination unit configured to determine whether the [second bone group classified in at least one of the first image] and [the second image includes at least one of a transplant] (Paragraph [0046] “Although the three-dimensional image 6 is acquired by imaging the vertebral column of the patient in the present embodiment, imaging targets (subjects) are not limited to the vertebral column, and any imaging target configured to include parts of a plurality of bones may be imaged.”) and a bone recognition region in which bone extraction reliability does not satisfy a predetermined threshold (Paragraph [0062] “The difference image generation unit 14 generates a difference image by calculating the difference between the composite image generated by the registration processing unit 13 and the second three-dimensional image set as a fixed image. The difference image generated in this manner is an image that highlights a lesion, such as bone metastasis that is not present in the first three-dimensional image captured in the past but is present in the second three-dimensional image captured this time.”).
However, Karino fails to teach the second bone group classified in at least one of the first image.
Pradhan teaches the second bone group classified in at least one of the first image (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include the second bone group classified in at least one of the first image as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results.
However, Karino and Pradhan fail to teach the second image includes at least one of a transplant.
Wagner teaches the second image includes at least one of a transplant (Abstract “rigid elements are detected in both the image datas. The image datas are segmented for producing segmented image datas (A, B)…The identified rigid elements are individually registered in segmented image data volume (A') assigned to the segmented image datas.”; Paragraph [0006] “Segmenting the first image data to produce first segmented image data containing the rigid elements detected in the first image data, 1.3. Segmenting the second image data to generate second segmented image data including the rigid elements detected in the second image data,”; Paragraph [0011] “rigid elements or rigid objects, for example, individual bones, individual Teeth, or implants,”.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino in view of Pradhan to include the second image includes at least one of a transplant as taught by Wagner’s reference. The motivation for doing so would have been to accurately register the image data as suggested by Wagner (see Wagner, Paragraph [0036]).
Further, one skilled in the art could have combined the elements described above by known
methods with no change to the respective functions, and the combination would have yielded nothing
more than predictable results. Therefore, it would have been obvious to combine Wagner with Karino
and Pradhan to obtain the invention specified in claim 20.
Regarding claim 21, which incorporates claim 20, Karino discloses wherein using the identification information obtained by the obtaining unit and information about a result of the determination, the [classification unit reclassifies] [a bone belonging to the second bone group determined to include at least one of the transplant] (Paragraph [0046] “Although the three-dimensional image 6 is acquired by imaging the vertebral column of the patient in the present embodiment, imaging targets (subjects) are not limited to the vertebral column, and any imaging target configured to include parts of a plurality of bones may be imaged.”) and the bone recognition region in which the bone extraction reliability does not satisfy the predetermined threshold to the [first bone group] (Paragraph [0062] “The difference image generation unit 14 generates a difference image by calculating the difference between the composite image generated by the registration processing unit 13 and the second three-dimensional image set as a fixed image. The difference image generated in this manner is an image that highlights a lesion, such as bone metastasis that is not present in the first three-dimensional image captured in the past but is present in the second three-dimensional image captured this time.”).
However, Karino fails to teach the classification unit and the first bone group.
Pradhan teaches the classification unit and the first bone group (Dataset [page 1 paragraph 1] “For the classification of human bones, we require some x-ray images of the different body part. For that region, we used Musculoskeletal Radiographs (MURA) dataset.”; Dataset [page 1 paragraph 1] “This MURA dataset contains seven different categories of human bones belonging to Elbow, Hand, Wrist, Shoulder, Finger, Humorous and Forearm.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino’s reference to include the classification unit and the first bone group as taught by Pradhan’s reference. The motivation for doing so would have been to correctly identify human bones as suggested by Pradhan (see Pradhan, Abstract).
Further, one skilled in the art could have combined the elements described above by known methods with no change to the respective functions, and the combination would have yielded nothing more than predictable results.
However, Karino and Pradhan fail to teach a bone belonging to the second bone group determined to include at least one of the transplant.
Wagner teaches a bone belonging to the second bone group determined to include at least one of the transplant (Abstract “rigid elements are detected in both the image datas. The image datas are segmented for producing segmented image datas (A, B)…The identified rigid elements are individually registered in segmented image data volume (A') assigned to the segmented image datas.”; Paragraph [0006] “Segmenting the first image data to produce first segmented image data containing the rigid elements detected in the first image data, 1.3. Segmenting the second image data to generate second segmented image data including the rigid elements detected in the second image data,”; Paragraph [0011] “rigid elements or rigid objects, for example, individual bones, individual Teeth, or implants,”.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Karino in view of Pradhan to include a bone belonging to the second bone group determined to include at least one of the transplant as taught by Wagner’s reference. The motivation for doing so would have been to accurately register the image data as suggested by Wagner (see Wagner, Paragraph [0036]).
Further, one skilled in the art could have combined the elements described above by known
methods with no change to the respective functions, and the combination would have yielded nothing
more than predictable results. Therefore, it would have been obvious to combine Wagner with Karino and Pradhan to obtain the invention specified in claim 21.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Fujimoto et al. (US 10,839,550 B2) discloses a method of recognizing body parts by acquiring three-dimensional information of observation points on an object from a sensor, specifying a first area and second areas adjacent to the first area from among areas of the object based on the information, and specifying the positions of feature points.
Hirakawa (US 2020/0058098 A1) discloses an apparatus that includes an image acquisition unit that acquires a first image and a second image captured of a plurality of bones at different times, and a registration processing unit that performs a registration process for the plurality of bone parts in one of the first image and the second image.
Onal et al. ("MRI-based segmentation of pubic bone for evaluation of pelvic organ prolapse." IEEE Journal of Biomedical and Health Informatics 18.4 (2014): 1370-1378.) discloses an MRI-based segmentation process for automating pelvic bone point identification on MRI. The accuracy is improved by using texture features to provide information on the relative position between any two pixels; the texture varies depending on the area of the body being imaged.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to UROOJ FATIMA whose telephone number is (571)272-2096. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/UROOJ FATIMA/Examiner, Art Unit 2676
/Henok Shiferaw/Supervisory Patent Examiner, Art Unit 2676