DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114 was filed in this application after appeal to the Patent Trial and Appeal Board, but prior to a decision on the appeal. Since this application is eligible for continued examination under 37 CFR 1.114 and the fee set forth in 37 CFR 1.17(e) has been timely paid, the appeal has been withdrawn pursuant to 37 CFR 1.114 and prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant’s submission filed on 12/16/2025 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1, 5-14, and 16-18 in Applicant’s response filed 12/16/2025 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Newly found prior art Mohr, B., US 20120219198 A1, has been combined with the teachings of Visser to arrive at the claimed invention.
Therefore, the claims stand rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 6, 8-10, 13, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Visser et al., US 20160019701 A1, in view of Mohr, B., US 20120219198 A1.
Regarding claim 1, Visser teaches a control apparatus (tomographic image generating system 100 of reproduced fig. 1 below and [0041]) comprising:
at least one processor (see CPU of [0064]), wherein the at least one processor is configured to:
acquire a distance image from a distance measurement camera that captures a distance image representing a distance from the distance measurement camera to an imaging target that exists in an imaging region of a radiography apparatus that captures a radiographic image with the radiation emitted from a radiation source ([0061] states that “The three-dimensional camera 70 functions as an acquiring member to radiograph the subject H in the irradiating direction of the radiation source 61, acquire a two-dimensional image (two-dimensional geometric image) and a distance image of the subject H, and output the images to the console 90. The distance image is an image representing a distribution of a distance from the three-dimensional camera 70 at individual positions in the imaging range of a two-dimensional image.”),
specify whether or not a structure that differs from a subject for imaging is present in the imaging region of a radiography apparatus that captures a radiographic image with the radiation emitted from a radiation source ([0080] states that “The control section 91 acquires the thickness of the subject H based on the distance image captured with the three-dimensional camera 70 (Step S3). For example, the control section 91 measures the distance from the three-dimensional camera 70 to the subject table 54 in advance and stores it in the storage section 95. The control section 91 then calculates a differential value between the distance from the three-dimensional camera 70 to the subject table 54 and the distance from the three-dimensional camera 70 to each position (the x and y coordinates of each dot) in the imaging range. The control section 91 takes the differential value (more than zero) as the thickness of the subject H at each position and acquires a distribution of the thicknesses of the subject H (the thickness of the subject H at each position on the surface (xy-plane) irradiated with radiation)”),
set an imaging region excluding the structure as the imaging region before initial irradiation of the radiation is performed from the radiation source with respect to the subject in a case where the structure is present ([0089] states that “The irradiation field is a range in which the radiation source 61 radiates radiation, and can be limited with the collimator 75. For example, the control section 91 defines an area in which the thickness of the subject is more than zero (i.e. an area where the subject H exists) as a subject area within the imaging range of the three-dimensional camera 70. The control section 91 determines an area, which is inside a rectangle circumscribing the subject area, to be an irradiation field. This configuration can automatically determine the optimal irradiation field without manual adjustment by the user”. The defined irradiation field excludes regions where the thickness is zero, that is, the areas in the distance image occupied by the subject table 54),
control a collimator that adjusts an irradiation field of the radiation emitted from the radiation source, such that the irradiation field corresponds to the imaging region excluding the structure ([0089] states that “The irradiation field is a range in which the radiation source 61 radiates radiation, and can be limited with the collimator 75”), and
control the radiography apparatus to image the imaging region ([0097] states that “The control section 91 performs the tomosynthesis imaging under the determined imaging conditions (Step S5)”).
[Reproduced FIG. 1 of Visser (media_image1.png, greyscale)]
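The field-determination steps Visser describes in [0080] and [0089] can be summarized with a short sketch (illustrative only; the function name, units, and toy values below are assumptions, not taken from the reference):

```python
import numpy as np

def irradiation_field(distance_image, table_distance):
    """Compute a thickness map and a circumscribing irradiation field.

    distance_image: 2-D array of camera-to-surface distances (same units
    as table_distance, e.g. mm). Names and values are illustrative.
    """
    # [0080]: thickness = (distance to table) - (distance to each point);
    # a value of zero or less means no subject at that pixel.
    thickness = table_distance - distance_image
    subject_area = thickness > 0

    if not subject_area.any():
        return None  # no subject detected in the imaging range

    # [0089]: the irradiation field is the rectangle circumscribing the
    # subject area; the collimator is then limited to this rectangle.
    rows, cols = np.nonzero(subject_area)
    return (int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max()))

# Toy distance image: table at 1000 mm, a 2x2-pixel "subject" 150 mm thick.
d = np.full((5, 5), 1000.0)
d[1:3, 2:4] = 850.0
print(irradiation_field(d, 1000.0))  # prints (1, 2, 2, 3)
```

Per [0089], the circumscribing rectangle is the area to which the collimator would be limited.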
Visser fails to teach that the specifying of the presence of the structure is based on a region of pixels in the distance image that has a specific shape of the structure.
However, within the same field of endeavor, Mohr teaches selecting image data representative of a subject from an image data set, comprising determining regions of image data, wherein each region of image data consists of a respective plurality of connected voxels, and selecting at least one region as being representative of the subject based upon at least one of the size and shape of the region (see abstract). Reproduced fig. 2 (paragraph 23) below depicts the steps for determining image data representative of the subject. The method of fig. 2 further includes a stage 28, where, according to paragraph 39, the regions identified as being representative of the patient are selected out, leaving those regions that are above the intensity threshold but that have not been identified as representing the patient. A further process is then performed to identify which, if any, of the remaining regions represent the table. That further process uses a geometrical classifier to identify regions that may represent the table or parts of the table. The regions associated with the table are highlighted according to paragraph 45. Paragraph 46 then states that “The approach taken at stage 28 takes advantage of the fact that measurement table components have been found usually to be smaller and/or have a higher ratio of perimeter to filled interior size than a human or animal body or other subject. Even in the case of a relatively large table, the only above-threshold voxels may be present in the frame of the table, and the interior of the table will usually comprise below-threshold voxels that will have been discarded at the start of the process. Thus, the measure of the filled volume or area of the table (for example, the number of above-threshold voxels included in the table region) will usually be low relative to the measure of the perimeter. In some cases, once the below-threshold voxels have been discarded the frame of the table will comprise a plurality of separate regions (for example a plurality of separate rod shapes). Even in those cases, it has been found that the measure of the filled volume or area of each region is low relative to the measure of the perimeter of the region, in comparison to the whole or even individual parts (for example tip of nose, ears, fingertips) of a human or animal body”.
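The geometric cue Mohr relies on in paragraph 46 (table components have a high perimeter relative to their filled interior, while a body region is comparatively solid) can be sketched as follows; the mask sizes and the classification threshold are illustrative assumptions, not values from the reference:

```python
import numpy as np

def perimeter_area_ratio(mask):
    """Ratio of exposed pixel edges (4-connectivity) to filled area."""
    mask = np.asarray(mask, dtype=bool)
    area = mask.sum()
    if area == 0:
        return float("inf")
    padded = np.pad(mask, 1)  # guard border so np.roll cannot wrap content
    perimeter = 0
    for axis in (0, 1):
        for step in (1, -1):
            neighbour = np.roll(padded, step, axis=axis)
            # Count mask pixels whose neighbour in this direction is empty.
            perimeter += np.logical_and(padded, ~neighbour).sum()
    return perimeter / area

def looks_like_table(mask, threshold=1.5):
    # Threshold is an assumed illustrative value, not from the reference.
    return perimeter_area_ratio(mask) > threshold

# A thin rod (table-frame-like) vs. a solid block (body-like).
rod = np.zeros((10, 10), dtype=bool)
rod[4, 1:9] = True      # 1x8 rod: perimeter 18, area 8 -> ratio 2.25
block = np.zeros((10, 10), dtype=bool)
block[2:8, 2:8] = True  # 6x6 block: perimeter 24, area 36 -> ratio ~0.67
print(looks_like_table(rod), looks_like_table(block))  # prints True False
```

This mirrors the quoted observation that even a rod-shaped table fragment has a low filled area relative to its perimeter compared with a body or body part.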
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser such that the specifying of the presence of the structure is based on a region of pixels in the distance image that has a specific shape of the structure, as taught by Mohr, as such modification would improve results of further post-acquisition image processing such as registration and segmentation of the region of interest (see paragraph 6).
Regarding claim 6, Visser in view of Mohr teaches all the limitations of claim 1 above.
Visser further teaches wherein the at least one processor (see CPU of [0064]) is configured to specify that the structure is present in a case where a structure distance image corresponding to the specific shape is detected from the distance image based on the distance ([0080] states that “the control section 91 measures the distance from the three-dimensional camera 70 to the subject table 54 in advance and stores it in the storage section 95. The control section 91 then calculates a differential value between the distance from the three-dimensional camera 70 to the subject table 54 and the distance from the three-dimensional camera 70 to each position (the x and y coordinates of each dot) in the imaging range.”).
Regarding claim 8, Visser in view of Mohr teaches all the limitations of claim 1 above.
Visser further teaches wherein: the distance measurement camera is a visible light image capturing apparatus and the distance image is a visible light image; and the at least one processor (see CPU of [0064]) is configured to: acquire the visible light image obtained by imaging the imaging region with the visible light image capturing apparatus ([0061] states that “The three-dimensional camera 70 is disposed in the vicinity of the radiation source 61 and is opposite to the surface of the subject table 54 on which the subject H is disposed. For example, the three-dimensional camera 70 includes a visible-light sensor(s) and an infrared sensor(s) arrayed two-dimensionally, which are opposite to the surface of the subject table 54 on which the subject H is disposed. The three-dimensional camera 70 functions as an acquiring member to radiograph the subject H in the irradiating direction of the radiation source 61, acquire a two-dimensional image (two-dimensional geometric image) and a distance image of the subject H, and output the images to the console 90.”), and specify a structure image representing the structure included in the radiographic image based on a shape detected from the visible light image and the distance ([0102] states that “If a reconstructable range in the thickness direction (z-axis direction) of the subject H is a range shown with the grid pattern of FIG. 5A, the control section 91 removes an area containing no subject H (as shown by dots in FIG. 5A) in the reconstructable range, from the reconstruction range, on the basis of the distribution of the thicknesses of the subject H….If the reconstructable range in the surface (xy-plane) of the subject H irradiated with radiation is a range shown by the grid pattern in FIG. 5B, the control section 91 specifies an area containing the subject H (e.g. an area where the thickness of the subject H is more than zero) based on the distribution of the thicknesses of the subject H, and removes the rest of the range containing no subject H (as shown by dots in FIG. 5B) from the reconstruction range.” Hence the distance image that captures the thickness of the subject, used to define the irradiation field ([0080]-[0082]), captures regions of the table 54 that are eventually removed by the controller based on the surface or shape features of the table; that is, the table comprises areas with a thickness of zero).
Regarding claim 9, Visser in view of Mohr teaches all the limitations of claim 1 above.
Visser further teaches wherein: the distance measurement camera is a visible light image capturing apparatus and the distance image is a visible light image; and the at least one processor (see CPU of [0064]) is configured to: acquire the visible light image obtained by imaging the imaging region with the visible light image capturing apparatus ([0061] states that “The three-dimensional camera 70 is disposed in the vicinity of the radiation source 61 and is opposite to the surface of the subject table 54 on which the subject H is disposed. For example, the three-dimensional camera 70 includes a visible-light sensor(s) and an infrared sensor(s) arrayed two-dimensionally, which are opposite to the surface of the subject table 54 on which the subject H is disposed. The three-dimensional camera 70 functions as an acquiring member to radiograph the subject H in the irradiating direction of the radiation source 61, acquire a two-dimensional image (two-dimensional geometric image) and a distance image of the subject H, and output the images to the console 90.”), and specify that the structure is present in a case where a structure visible light image corresponding to the specific shape is detected from the visible light image ([0102] states that “If a reconstructable range in the thickness direction (z-axis direction) of the subject H is a range shown with the grid pattern of FIG. 5A, the control section 91 removes an area containing no subject H (as shown by dots in FIG. 5A) in the reconstructable range, from the reconstruction range, on the basis of the distribution of the thicknesses of the subject H….If the reconstructable range in the surface (xy-plane) of the subject H irradiated with radiation is a range shown by the grid pattern in FIG. 5B, the control section 91 specifies an area containing the subject H (e.g. an area where the thickness of the subject H is more than zero) based on the distribution of the thicknesses of the subject H, and removes the rest of the range containing no subject H (as shown by dots in FIG. 5B) from the reconstruction range.” Hence the distance image that captures the thickness of the subject, used to define the irradiation field ([0080]-[0082]), captures regions of the table 54 that are eventually removed by the controller based on the surface or shape features of the table; that is, the table comprises areas with a thickness of zero).
Regarding claim 10, Visser in view of Mohr teaches all the limitations of claim 1 above.
Mohr further teaches wherein the structure consists of metal (paragraph 32 states that “in the case of some CT tables, where the bulk of the table is formed of foam or other non-absorbing material, voxels representative of the foam or other non-absorbing material are discarded during the initial thresholding process and only voxels representative of a metal or other frame of the table are retained. For such CT tables, the patient region will be separate from the table region even without the morphological process”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser wherein the structure consists of metal, as taught by Mohr, as such modification would improve results of further post-acquisition image processing such as registration and segmentation of the region of interest (see paragraph 6).
Regarding claim 13, Visser in view of Mohr teaches all the limitations of claim 1 above.
Visser further teaches wherein the at least one processor is configured to acquire the radiographic image by imaging the imaging region, in which the subject is present, via the radiography apparatus, and execute image processing on the radiographic image ([0110] states that “The control section 91 generates reconstructed images (tomographic images) of the subject H based on the projected images stored in the projected image storage section 951, and stores them in connection with the patient information in the reconstructed image storage section 952 (Step S7)”).
Regarding claim 16, Visser in view of Mohr teaches all the limitations of claim 1 above.
Visser teaches a radiography system comprising: a radiography apparatus that captures a radiographic image of a subject; and the control apparatus according to claim 1 ([0041] states that “The skeleton framework of a tomographic image generating system 100 according to a first embodiment will now be described. The tomographic image generating system 100 generates a tomographic image of a subject H (a part of the human body) by reconstructing projected images acquired through the tomosynthesis imaging of the subject H.”).
Regarding claim 17, Visser teaches a control processing method (see reproduced fig. 4 below),
wherein a computer (see CPU of [0064]) executes processing of:
acquiring a distance image from a distance measurement camera that captures a distance image representing a distance from the distance measurement camera to an imaging target that exists in an imaging region of a radiography apparatus that captures a radiographic image with the radiation emitted from a radiation source ([0061] states that “The three-dimensional camera 70 functions as an acquiring member to radiograph the subject H in the irradiating direction of the radiation source 61, acquire a two-dimensional image (two-dimensional geometric image) and a distance image of the subject H, and output the images to the console 90. The distance image is an image representing a distribution of a distance from the three-dimensional camera 70 at individual positions in the imaging range of a two-dimensional image.”),
specifying whether or not a structure that differs from a subject for imaging is present in the imaging region of a radiography apparatus that captures a radiographic image with the radiation emitted from a radiation source ([0080] states that “The control section 91 acquires the thickness of the subject H based on the distance image captured with the three-dimensional camera 70 (Step S3). For example, the control section 91 measures the distance from the three-dimensional camera 70 to the subject table 54 in advance and stores it in the storage section 95. The control section 91 then calculates a differential value between the distance from the three-dimensional camera 70 to the subject table 54 and the distance from the three-dimensional camera 70 to each position (the x and y coordinates of each dot) in the imaging range. The control section 91 takes the differential value (more than zero) as the thickness of the subject H at each position and acquires a distribution of the thicknesses of the subject H (the thickness of the subject H at each position on the surface (xy-plane) irradiated with radiation)”),
setting an imaging region excluding the structure as the imaging region before initial irradiation of the radiation is performed from the radiation source with respect to the subject in a case where the structure is present ([0089] states that “The irradiation field is a range in which the radiation source 61 radiates radiation, and can be limited with the collimator 75. For example, the control section 91 defines an area in which the thickness of the subject is more than zero (i.e. an area where the subject H exists) as a subject area within the imaging range of the three-dimensional camera 70. The control section 91 determines an area, which is inside a rectangle circumscribing the subject area, to be an irradiation field. This configuration can automatically determine the optimal irradiation field without manual adjustment by the user”. The defined irradiation field excludes regions where the thickness is zero, that is, the areas in the distance image occupied by the subject table 54),
controlling a collimator that adjusts an irradiation field of the radiation emitted from the radiation source, such that the irradiation field corresponds to the imaging region excluding the structure ([0089] states that “The irradiation field is a range in which the radiation source 61 radiates radiation, and can be limited with the collimator 75”), and
controlling the radiography apparatus to image the imaging region ([0097] states that “The control section 91 performs the tomosynthesis imaging under the determined imaging conditions (Step S5)”).
[Reproduced FIG. 4 of Visser (media_image2.png, greyscale)]
Visser fails to teach that the specifying of the presence of the structure is based on a region of pixels in the distance image that has a specific shape of the structure.
However, within the same field of endeavor, Mohr teaches a method of selecting image data representative of a subject from an image data set, comprising determining regions of image data, wherein each region of image data consists of a respective plurality of connected voxels, and selecting at least one region as being representative of the subject based upon at least one of the size and shape of the region (see abstract). Reproduced fig. 2 (paragraph 23) below depicts the steps for determining image data representative of the subject. The method of fig. 2 further includes a stage 28, where, according to paragraph 39, the regions identified as being representative of the patient are selected out, leaving those regions that are above the intensity threshold but that have not been identified as representing the patient. A further process is then performed to identify which, if any, of the remaining regions represent the table. That further process uses a geometrical classifier to identify regions that may represent the table or parts of the table. The regions associated with the table are highlighted according to paragraph 45. Paragraph 46 then states that “The approach taken at stage 28 takes advantage of the fact that measurement table components have been found usually to be smaller and/or have a higher ratio of perimeter to filled interior size than a human or animal body or other subject. Even in the case of a relatively large table, the only above-threshold voxels may be present in the frame of the table, and the interior of the table will usually comprise below-threshold voxels that will have been discarded at the start of the process. Thus, the measure of the filled volume or area of the table (for example, the number of above-threshold voxels included in the table region) will usually be low relative to the measure of the perimeter. In some cases, once the below-threshold voxels have been discarded the frame of the table will comprise a plurality of separate regions (for example a plurality of separate rod shapes). Even in those cases, it has been found that the measure of the filled volume or area of each region is low relative to the measure of the perimeter of the region, in comparison to the whole or even individual parts (for example tip of nose, ears, fingertips) of a human or animal body”.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser such that the specifying of the presence of the structure is based on a region of pixels in the distance image that has a specific shape of the structure, as taught by Mohr, as such modification would improve results of further post-acquisition image processing such as registration and segmentation of the region of interest (see paragraph 6).
Regarding claim 18, Visser teaches a non-transitory computer-readable storage medium storing a control processing program ([0064] states that “The CPU of the control section 91 reads various programs such as system programs and processing programs stored in the storage section 95 and loads them onto the RAM. Under instruction of the loaded programs, the CPU executes a reconstructed image generating process A and other processes described later.”) causing a computer to execute processing of:
acquiring a distance image from a distance measurement camera that captures a distance image representing a distance from the distance measurement camera to an imaging target that exists in an imaging region of a radiography apparatus that captures a radiographic image with the radiation emitted from a radiation source ([0061] states that “The three-dimensional camera 70 functions as an acquiring member to radiograph the subject H in the irradiating direction of the radiation source 61, acquire a two-dimensional image (two-dimensional geometric image) and a distance image of the subject H, and output the images to the console 90. The distance image is an image representing a distribution of a distance from the three-dimensional camera 70 at individual positions in the imaging range of a two-dimensional image.”),
specifying whether or not a structure that differs from a subject for imaging is present in the imaging region of a radiography apparatus that captures a radiographic image with the radiation emitted from a radiation source ([0080] states that “The control section 91 acquires the thickness of the subject H based on the distance image captured with the three-dimensional camera 70 (Step S3). For example, the control section 91 measures the distance from the three-dimensional camera 70 to the subject table 54 in advance and stores it in the storage section 95. The control section 91 then calculates a differential value between the distance from the three-dimensional camera 70 to the subject table 54 and the distance from the three-dimensional camera 70 to each position (the x and y coordinates of each dot) in the imaging range. The control section 91 takes the differential value (more than zero) as the thickness of the subject H at each position and acquires a distribution of the thicknesses of the subject H (the thickness of the subject H at each position on the surface (xy-plane) irradiated with radiation)”),
setting an imaging region excluding the structure as the imaging region before initial irradiation of the radiation is performed from the radiation source with respect to the subject in a case where the structure is present ([0089] states that “The irradiation field is a range in which the radiation source 61 radiates radiation, and can be limited with the collimator 75. For example, the control section 91 defines an area in which the thickness of the subject is more than zero (i.e. an area where the subject H exists) as a subject area within the imaging range of the three-dimensional camera 70. The control section 91 determines an area, which is inside a rectangle circumscribing the subject area, to be an irradiation field. This configuration can automatically determine the optimal irradiation field without manual adjustment by the user”. The defined irradiation field excludes regions where the thickness is zero, that is, the areas in the distance image occupied by the subject table 54),
controlling a collimator that adjusts an irradiation field of the radiation emitted from the radiation source, such that the irradiation field corresponds to the imaging region excluding the structure ([0089] states that “The irradiation field is a range in which the radiation source 61 radiates radiation, and can be limited with the collimator 75”), and
controlling the radiography apparatus to image the imaging region ([0097] states that “The control section 91 performs the tomosynthesis imaging under the determined imaging conditions (Step S5)”).
Visser fails to teach that the specifying of the presence of the structure is based on a region of pixels in the distance image that has a specific shape of the structure.
However, within the same field of endeavor, Mohr teaches a method of selecting image data representative of a subject from an image data set, comprising determining regions of image data, wherein each region of image data consists of a respective plurality of connected voxels, and selecting at least one region as being representative of the subject based upon at least one of the size and shape of the region (see abstract). Reproduced fig. 2 (paragraph 23) below depicts the steps for determining image data representative of the subject. The method of fig. 2 further includes a stage 28, where, according to paragraph 39, the regions identified as being representative of the patient are selected out, leaving those regions that are above the intensity threshold but that have not been identified as representing the patient. A further process is then performed to identify which, if any, of the remaining regions represent the table. That further process uses a geometrical classifier to identify regions that may represent the table or parts of the table. The regions associated with the table are highlighted according to paragraph 45. Paragraph 46 then states that “The approach taken at stage 28 takes advantage of the fact that measurement table components have been found usually to be smaller and/or have a higher ratio of perimeter to filled interior size than a human or animal body or other subject. Even in the case of a relatively large table, the only above-threshold voxels may be present in the frame of the table, and the interior of the table will usually comprise below-threshold voxels that will have been discarded at the start of the process. Thus, the measure of the filled volume or area of the table (for example, the number of above-threshold voxels included in the table region) will usually be low relative to the measure of the perimeter. In some cases, once the below-threshold voxels have been discarded the frame of the table will comprise a plurality of separate regions (for example a plurality of separate rod shapes). Even in those cases, it has been found that the measure of the filled volume or area of each region is low relative to the measure of the perimeter of the region, in comparison to the whole or even individual parts (for example tip of nose, ears, fingertips) of a human or animal body”.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser such that the specifying of the presence of the structure is based on a region of pixels in the distance image that has a specific shape of the structure, as taught by Mohr, as such modification would improve results of further post-acquisition image processing such as registration and segmentation of the region of interest (see paragraph 6).
[Reproduced FIG. 2 of Mohr (media_image3.png, greyscale)]
Claims 5 and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Visser in view of Mohr, as applied to claim 1 above, and further in view of Imamura et al., US 20190046134.
Regarding claim 5, Visser in view of Mohr teaches all the limitations of claim 1 above.
Visser in view of Mohr fails to teach wherein the distance image capturing apparatus captures the distance image using a time-of-flight (TOF) system.
However, Imamura further teaches wherein the distance image capturing apparatus captures the distance image using a time-of-flight (TOF) system (see paragraph 79).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser as modified by Mohr, such that the distance image capturing apparatus captures the distance image using a time-of-flight (TOF) system, as taught by Imamura, as this would provide an easier way to capture images of the subject in situations when the subject may be incapacitated such as emergency room (ER) situations (see paragraphs 13-14).
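For context, the time-of-flight principle underlying Imamura's distance acquisition derives distance from the round-trip travel time of emitted light; a minimal sketch (variable names and the example timing value are illustrative, not from the reference):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    # Distance = speed of light x round-trip time / 2 (out and back).
    return C * round_trip_seconds / 2.0

# A round trip of about 6.67 nanoseconds corresponds to roughly 1 m
# of camera-to-subject distance.
print(tof_distance(6.67e-9))  # ~0.9998 m
```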
Regarding claim 8, Visser in view of Mohr teaches all the limitations of claim 1 above.
Visser in view of Mohr fails to teach wherein the processor is configured to acquire a visible light image obtained by imaging the imaging region with a visible light image capturing apparatus, and specify a structure image representing the structure included in the radiographic image based on a shape detected from the visible light image and the distance.
However, Imamura further teaches wherein the processor is configured to acquire a visible light image obtained by imaging the imaging region with a visible light image capturing apparatus, and specify a structure image representing the structure included in the radiographic image based on a shape detected from the visible light image and the distance (see optical camera 46, paragraph 66, which captures the camera image 120 used to determine the corners of the cassette and the marker 154; see paragraph 120).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser as modified by Mohr, such that the processor is configured to acquire a visible light image obtained by imaging the imaging region with a visible light image capturing apparatus, and specify a structure image representing the structure included in the radiographic image based on a shape detected from the visible light image and the distance, as taught by Imamura, as this would provide an easier way to capture images of the subject in situations when the subject may be incapacitated such as emergency room (ER) situations (see paragraphs 13-14).
Regarding claim 9, Visser in view of Mohr teaches all the limitations of claim 1 above.
Visser in view of Mohr fails to teach wherein the processor is configured to acquire a visible light image obtained by imaging the imaging region with a visible light image capturing apparatus, and specify that the structure is present in a case where a structure visible light image corresponding to the specific shape is detected from the visible light image.
However, Imamura further teaches wherein the processor is configured to acquire a visible light image obtained by imaging the imaging region with a visible light image capturing apparatus, and specify that the structure is present in a case where a structure visible light image corresponding to the specific shape is detected from the visible light image (see optical camera 46, paragraph 66, which captures the camera image 120 used to determine the corners of the cassette and the marker 154; see paragraph 120).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser, as modified by Mohr, such that the processor is configured to acquire a visible light image obtained by imaging the imaging region with a visible light image capturing apparatus, and specify that the structure is present in a case where a structure visible light image corresponding to the specific shape is detected from the visible light image, as taught by Imamura, as this would provide an easier way to capture images of the subject in situations when the subject may be incapacitated such as emergency room (ER) situations (see paragraphs 13-14).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Visser in view of Mohr and Imamura, as applied to claim 6 above, and further in view of Eichler et al., US 20140275998.
Regarding claim 7, Visser in view of Mohr and Imamura teaches all the limitations of claim 6.
Visser in view of Mohr and Imamura fail to teach wherein the processor is configured to detect the structure distance image based on a learned model learned in advance using a plurality of the distance images with the structure in the imaging region as the imaging target.
However, Eichler teaches a method of image-based navigation of a medical device within a body (see fig. 7 and paragraph 49), wherein the processor (paragraphs 47-48) is configured to detect the structure distance image based on a learned model learned in advance using a plurality of the distance images with the structure in the imaging region as the imaging target (see paragraph 49 for the neural network algorithm “to generate accurate outputs of the position of emitter 26 and/or detector 28 and the distance between emitter 26 and detector 28 responsive to inputs from a group of sensors”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser, as modified by Mohr and Imamura, such that the processor is configured to detect the structure distance image based on a learned model learned in advance using a plurality of the distance images with the structure in the imaging region as the imaging target, as taught by Eichler, as such accurate outputs of the distance improve the accurate depiction of the region of interest (see paragraphs 5-6).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Visser in view of Mohr, as applied to claim 1 above, and further in view of Matsumoto, Y., JP2006198157 (cited in IDS dated 06/03/2021).
Regarding claim 11, Visser in view of Mohr teaches all the limitations of claim 1.
Visser in view of Mohr does not teach wherein the structure is a wheelchair.
However, Matsumoto teaches radiographic imaging of a patient supported by a wheelchair (see fig. A).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser, as modified by Mohr, such that the patient to be imaged is supported on a wheelchair, as taught by Matsumoto, hence reducing the burden of ambulating the patient during the imaging study (see fourth paragraph).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Visser in view of Mohr, as applied to claim 1 above, and further in view of Lee et al., US 20150282774.
Regarding claim 12, Visser in view of Mohr teaches all the limitations of claim 1.
Visser in view of Mohr does not teach wherein the structure is a stretcher.
However, Lee teaches a CT scanner 1200 for imaging a patient, wherein the patient lies on a stretcher that can be translated to image the subject (see paragraph 81).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser as modified by Mohr, such that the structure is a stretcher, as taught by Lee, as such modification would allow easier access to the regions of interest of the subject and hence reduce instances of artifacts in the image acquisition (see paragraph 81).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Visser in view of Mohr, as applied to claim 13 above, and further in view of Kim et al., US 20150190107.
Regarding claim 14, Visser in view of Mohr teaches all the limitations of claim 13.
Visser in view of Mohr fails to teach wherein the image processing is contrast enhancement processing.
However, Kim teaches an image processor configured to extract distinct points from at least one of a first diagnostic image and a second diagnostic image and perform image registration of the first and second diagnostic images based on the extracted distinct points (see abstract), wherein the image processing is contrast enhancement processing (see paragraphs 105-106).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to configure Visser, as modified by Mohr, such that the image processing is contrast enhancement processing, as taught by Kim, as such modification would improve the diagnostic quality of the image (see paragraphs 105-106).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Farouk A Bruce, whose telephone number is (408) 918-7603. The examiner can normally be reached Monday through Friday, 8:00 am to 5:00 pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Koharski can be reached on (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FAROUK A BRUCE/ Examiner, Art Unit 3793