DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Election/Restrictions
Claim 11 is withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected species, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 02/09/2026.
Claim Objections
Claims 1, 3, 5-6, and 21-22 are objected to because of the following informalities:
Claims 1, 21, and 22 recite the limitation “the anatomical structure model of at least a portion of the target subject”. The limitation should read “the anatomical structure model of at least the portion of the target subject” since the claim previously sets forth an anatomical structure model of at least a portion of the target subject.
Claim 3 recites the limitation “the user interface”; however, no user interface has been previously recited in claim 1. The limitation should read –a user interface– for purposes of proper antecedent basis.
Claim 5 recites the limitation “the contrast agent”. The limitation should read –a contrast agent– for purposes of proper antecedent basis.
Claim 5 recites the limitation “the second imaging device”. The limitation should read –a second imaging device– for purposes of proper antecedent basis.
Claim 6 recites the limitation “the first image”. The limitation should read –a first image– for purposes of proper antecedent basis.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
The limitation “imaging device” in claims 1, 21, and 22 meets all three prongs of the analysis set forth in MPEP § 2181(I). The limitation meets prong (A) because “device” is a generic placeholder for “means”. The limitation meets prong (B) because the generic placeholder (the “device”) is modified by functional language (“imaging” and “acquired”). The limitation meets prong (C) because this claim element is not further modified by sufficient structure or material for performing the claimed function. Examiner notes that “imaging” is a functional modifier of the claimed device, and “acquired by an imaging device” as recited provides the functional language of acquiring one or more images.
A review of the specification shows that a camera (e.g., a digital color camera, 3D camera, etc.), a red, green, and blue (RGB) sensor, a depth sensor, an RGB depth (RGB-D) sensor, a thermal sensor (e.g., an infrared (FIR) or near-infrared (NIR) sensor), a radar sensor, and/or other types of image capture circuits configured to generate images (e.g., 2D images or photos) of a human, an object, a scene, or the like, or a combination thereof ([0049]) appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation.
The limitation “second imaging device” in claims 4-5, 9-10, 13-15, and 18 meets all three prongs of the analysis set forth in MPEP § 2181(I). The limitation meets prong (A) because “device” is a generic placeholder for “means”. The limitation meets prong (B) because the generic placeholder (the “device”) is modified by functional language (“imaging”, “to perform…”, “for performing”). The limitation meets prong (C) because this claim element is not further modified by sufficient structure or material for performing the claimed function. Examiner notes that “imaging” is a functional modifier of the claimed device.
A review of the specification shows that a digital subtraction angiography (DSA) device (the digital subtraction angiography (DSA) device, including the C-arm and/or rack, etc.), a computed radiography (CR) system, a digital radiography (DR) system, a computed tomography (CT) device, an ultrasound imaging device, a fluoroscopy imaging device, a magnetic resonance imaging (MRI) device, or the like, or any combination thereof ([0051]) appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 4-8 and 10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 4 recites the limitation “a target portion of the target subject”. It is unclear if the target portion is the same as the portion of the target subject recited in claim 1 or if this is a different portion. For examination purposes, it has been interpreted to mean any portion; however, clarification is required.
Claims 5 and 10 recite the limitation “the target portion of the subject”. There is insufficient antecedent basis for this limitation in the claims. It is therefore unclear if the claims intend to refer back to the portion of the target subject of claim 1 or if this is a different portion of the target subject. For examination purposes, it has been interpreted to mean any portion; however, clarification is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 9-10, 13-15, 18, and 20-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sun et al. (CN 111789619 A), hereinafter Sun. Examiner notes that citations to Sun are with respect to the translated copy provided herein.
Regarding claims 1 and 21-22,
Sun discloses a system, comprising:
at least one storage medium (at least fig. 1 (130) and corresponding disclosure in at least pg. 24. See also pg. 21 which discloses the module, unit, or block described herein may be implemented as software and/or hardware, and may be stored in any type of non-transitory computer readable medium or another storage device) including a set of instructions (pg. 24 which discloses the storage device 1340 may store data and/or instructions executable by the processing device 120);
at least one processor (at least fig. 1 (120) and corresponding disclosure in at least pg. 24) in communication with the at least one storage medium, wherein when executing the set of instructions, the at least one processor is directed to cause the system to perform operations (pg. 24 which discloses the storage device may store data and/or instructions executable by the processing device 120 or used to perform the exemplary method described in the present application) including:
obtaining one or more images of a target subject acquired by an imaging device (at least fig. 21 (2110) and corresponding disclosure in at least pg. 63 and at least fig. 1 (160) and corresponding disclosure in at least pg. 25 which discloses image acquisition device 160 may be used to acquire the depth image data of the target object and the image data acquired by the image acquisition device 160 may be transmitted to the processing device 120 for further analysis);
determining a three-dimensional (3D) geometric model of the target subject based on the one or more images (at least fig. 23 (2302) and corresponding disclosure in at least pg. 64 and pg. 57 which discloses the processing device 120 may generate object model (e.g., 3D object model) representing the target object based on the image data);
obtaining an anatomical structure model of at least a portion of the target subject (at least fig. 23 (2304) and corresponding disclosure in at least pg. 64 and pg. 15 which discloses obtaining reference image data representing the internal structure of the target object. Examiner notes that the reference image data is considered an anatomical structure model in its broadest reasonable interpretation);
obtaining a combination model by combining the 3D geometric model of the target subject and the anatomical structure model of at least a portion of the target subject (at least fig. 21 (2130) and corresponding disclosure in at least pg. 63 which discloses the processing device may generate a reference object model of the target object based on the object model and reference image data (e.g., by combining the object model and the reference image data). See also pg. 57 which discloses the processing device 120 may generate a reference object model representing the internal structure of the target object by combining the reference image data and the object model. See also fig. 4 (406) in which the reference object model of the target object is generated and [0065] disclosing the image processing operations to combine the object model and reference image data); and
determining one or more scanning parameters of the target subject based on the combination model (at least fig. 4 (407-409) and corresponding disclosure in at least pg. 30. See also pg. 16 which discloses causing the first medical imaging device to scan the target object based on the reference object model, and pg. 64 which discloses the user can, through the terminal device, annotate the area corresponding to the scanning area, for example by drawing the area corresponding to the scanning area on the displayed reference object model; the area corresponding to the scan area may be semi-automatically determined by the processing device 120 based on the image segmentation algorithm and the information provided by the user; the processing device can then cause one or more components of the first medical imaging device (e.g., a C-shaped arm) to adjust their respective positions, such that the scanning area is imaged by the first medical imaging device as a target. In addition, the processing device 120 may cause the first medical imaging device to scan the scanning area of the target object).
Regarding claim 2,
Sun further teaches wherein the determining one or more scanning parameters of the target subject based on the combination model includes:
Displaying the combination model on a user interface (pg. 51 which discloses as yet another example, the processing device 120 may generate a reference object model indicating the internal structure of the target object, and the terminal device displays the reference object model);
Obtaining a first input of a user via the user interface generated according to the combination model (pg. 64 which discloses, for example, the processing device 120 may cause the terminal device (e.g., terminal device 140) to display the reference object model, and the user can, through the terminal device, annotate the area corresponding to the scanning area, for example by drawing the area corresponding to the scanning area on the displayed reference object model); and
Determining the one or more scanning parameters based on the first input of the user (pg. 64 which discloses the user can, through the terminal device, annotate the area corresponding to the scanning area, for example by drawing the area corresponding to the scanning area on the displayed reference object model; the area corresponding to the scan area may be semi-automatically determined by the processing device 120 based on the image segmentation algorithm and the information provided by the user; the processing device can then cause one or more components of the first medical imaging device (e.g., a C-shaped arm) to adjust their respective positions, such that the scanning area is imaged by the first medical imaging device as a target. In addition, the processing device 120 may cause the first medical imaging device to scan the scanning area of the target object).
Regarding claim 3,
Sun further teaches wherein the operations further include:
Obtaining a second input of the user via the user interface generated according to the combination model (Sun discloses the user can select the scanning area (e.g., by drawing the area corresponding to the scanning area, or by selecting at least two reference points corresponding to the scanning area) on the displayed image data or the object model via the input component of the terminal device (e.g., a mouse, a touch screen)); and
Adjusting the one or more scanning parameters based on the second input of the user (pg. 34 which discloses optionally, a user may input instructions or information in response to the notification; by way of example only, the user can input instructions for adjusting the position of the target object and/or the position of the component of the medical imaging device based on the possible occurrence of a collision; and further discloses that, in some embodiments, after adjusting the position of the target object and/or the position of the component of the medical imaging device, the processing device 120 can obtain the updated planned track of the component based on the updated position information of the component of the medical imaging device. Examiner notes that any adjustments to the scanning parameters are necessarily based on input provided by the user when drawing the area and/or selecting at least two reference points due to the breadth of “based on”).
Regarding claim 4,
Wherein the one or more scanning parameters include a scanning range defined by a starting point and an ending point (pg. 64 which discloses the processing device 120 can determine the scanning area according to the imaging protocol of the target object, and dividing the area corresponding to the scanning area from the reference object model according to the image segmentation algorithm. Examiner notes that a scanning range/area would necessarily be defined by a starting point and an ending point (i.e. the length of the area)),
The operations further include:
In a first image acquisition stage before a target portion of the target subject is injected with a contrast agent, causing a second imaging device to arrive at the starting point and/or the ending point (pg. 64 which discloses the processing device can then cause one or more components of the first medical imaging device (e.g., a C-shaped arm) to adjust their respective positions, such that the scanning area is imaged by the first medical imaging device as a target); and
Obtaining a first image by causing the second imaging device to perform a scan (pg. 89 which discloses a plurality of medical images may include a first medical image obtained before the contrast agent is injected to the target object).
Regarding claim 9,
Sun further discloses further comprising: causing a second imaging device to perform multiple rounds of scans on the target portion of the target subject based on the one or more scanning parameters (see at least fig. 24B depicting multiple rounds of scans, each round corresponding to a different angle, and disclosure in at least pg. 68).
Regarding claim 10,
Sun further discloses wherein the target portion includes the heart of the target subject (pg. 52 which discloses the scanning area can be the chest of the target object, then the POI can be the central point 1660 of the heart of the target object), and the causing a second imaging device to perform multiple rounds of scans on the target portion of the target subject based on the one or more scanning parameters includes: causing the second imaging device to perform a rotation scan on the heart, the rotation scan including the multiple rounds of scans, each of the multiple rounds of scans corresponding to a rotation angle (see at least fig. 24B and corresponding disclosure in at least pg. 68 which discloses the first scan region may be a region of the patient 2470 through which the first radiation 2461, emitted at the location A by the radiation source 2460, passes; when the radiation source 2460 moves from the position A to the position A' and the detector 2480 moves from the position B to the position B', the medical imaging device can scan the patient's second scanning area at a second angle).
Regarding claim 13,
Sun further discloses wherein the anatomical structure model of the at least a portion of the target subject is acquired based on one or more images acquired by a second imaging device (pg. 93 which discloses reference image data (e.g., CT image of the target object or the first target image in the operation 3140)).
Regarding claim 14,
Sun further discloses wherein the one or more scanning parameters include a scanning range (pg. 64 which discloses the processing device 120 can determine the scanning area according to the imaging protocol of the target object, and dividing the area corresponding to the scanning area from the reference object model according to the image segmentation algorithm), the operations further include: causing one or more components of a second imaging device to move to a target position, at the target position (pg. 64 which discloses then, the processing device 120 can make one or more components of the first medical imaging device (e.g., C-shaped arm) to adjust their respective position, such that the scanning area is imaged by the first medical imaging device as a target), the scanning range of the target subject being located at an isocenter of the second imaging device (pg. 8 which discloses aligning the imaging isocenter of the medical imaging device to the scanning area and pg. 33 which discloses the processing device 120 (e.g., control module 530) can adjust one or more components of the medical imaging device, so that the imaging isocenter of the medical imaging device is aligned to the scanning area of the target object); and causing the second imaging device to perform a scan on the target subject (pg. 64 which discloses the processing device 120 may cause the first medical imaging device to scan the scanning area of the target object).
Regarding claim 15,
Sun further discloses wherein the one or more scanning parameters include a rotation angle of one or more components of a second imaging device for performing a scan on the target subject (pg. 15 which discloses the adjustment of the at least one or more parameter values may cause a change in the scan angle of the medical imaging device and the one or more scanning parameters of the medical imaging device may include at least one of a scanning angle; pg. 11 which discloses determining one or more motion parameters of the detector according to the target distance and the thickness of the scanning area. See also pg. 51 which discloses, for example, the processing device 120 can, based on the position of the imaging center point of the medical imaging device and the position of the POI of the target object, determine the rotating scheme of the medical imaging device in the scanning process of the target object), and the determining one or more scanning parameters of the target subject based on the combination model includes: determining the rotation angle by adjusting an initial rotation angle (pg. 47 which discloses the adjustment of at least one of the one or more parameter values may cause a change in the scan angle of the medical imaging device. For example, if the angle of inclination of the scanning bed (e.g., the angle between the X-Y plane of the coordinate system 1205 and the upper surface of the scanning bed) is changed, the scanning angle can be changed. See also the disclosure of fig. 15 at pg. 51-52, in which the rotation scheme is determined and comprises adjusting the rotation angle from an initial rotation angle).
Regarding claim 18,
Sun further teaches wherein the one or more scanning parameters includes a scanning route of one or more components of a second imaging device for performing a scan on the target subject (disclosing that, for each component of the one or more components of the medical imaging device, the processing device 120 (e.g., obtaining module 510) can obtain the planned trajectory of the component during the scan of the target object), the scanning route indicating a moving trajectory of the one or more components of the second imaging device (pg. 61 which discloses the second representation can be moved to different locations in accordance with the planned trajectory of the components of the medical imaging device), and the operations further include: predicting whether a collision involves in the scan based on the scanning route (pg. 61 which discloses the processing device can determine the distance between the second representation of the component and the first representation of the target object and may determine a collision may occur between the target object and the component based on the distance); in response to determining that the collision involves in the scan, adjusting the scanning route (pg. 78 which discloses the processing device 120 may adjust at least one of the one or more motion parameters of the detector to avoid collision. See also pg. 34 which discloses the process may be repeated until a collision between the target object and one or more components of the medical imaging device is not likely to occur); and causing the second imaging device to perform the scan based on the adjusted scanning route (Examiner notes that upon adjusting the one or more motion parameters of the detector, the system would necessarily perform the scan based on the adjusted scanning route).
Regarding claim 20,
Sun further discloses wherein the determining one or more scanning parameters of the target subject based on the combination model includes: obtaining a trained machine learning model (pg. 70 which discloses a scanning parameter determination model can be calculated, predetermined, and stored in the storage device, and further discloses the processing device 120 may use at least one training sample to train the preliminary model to obtain the scanning parameter determination model); and determining the one or more scanning parameters of the target subject based on the combination model and the trained machine learning model (pg. 30 which discloses in step 407 the processing device may further determine one or more of the second parameter values in one or more scanning parameters based on the target equivalent thickness, so as to realize automatic brightness stability, where the disclosure of pg. 70 is directed towards determining the second parameter value using the scanning parameter determination model. Examiner notes that step 407 is based on the reference object model (i.e., combination model) and thus is based on the combination model and the trained machine learning model of pg. 70).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Russinger et al. (US 7406148 B2), hereinafter Russinger.
Regarding claim 5,
Sun teaches the elements of claim 1 as previously stated. Sun further teaches wherein the operations further include: in a second image acquisition stage after the target portion of the target subject is injected with the contrast agent, obtaining a second image by causing the second imaging device to perform a scan (pg. 89 which discloses a second medical image obtained after the contrast agent is injected to the target object).
While Sun is directed to obtaining images after the contrast agent is injected and is further directed towards imaging of blood vessels (see at least pg. 92 disclosing that the second medical image may be obtained after the contrast agent is injected into the target object, and that the black lines in the image represent the blood vessels of the target object), and further teaches in pg. 50 that the processing device may send instructions to the driving device of the scanning bed to move the scanning bed to the target position, the instructions including various parameters such as a moving speed, such disclosure fails to explicitly teach determining a blood flow velocity of the target portion and adjusting a moving speed of the second imaging device based on the blood flow velocity.
Russinger, in a similar field of endeavor involving medical imaging, teaches in a second image acquisition stage after the target portion of the target subject is injected with a contrast agent, determining a blood flow velocity of the target portion; adjusting a moving speed of a second imaging device based on the blood flow velocity; and obtaining a second medical image by causing the second imaging device to perform a scan (Col. 3 lines 29-39 which discloses the propagation speed or propagation of the contrast medium in the scanning direction can be determined in different ways; Col. 3 line 61-Col. 4 line 7 which discloses the direct determination of the propagation speed of the contrast medium in the scanning direction is offered by the multi-row configuration of the detector of a multislice computed tomograph; Col. 5 line 65-Col. 6 line 7 which discloses the control unit also controls the scanning speed of a volume scan of the computed tomograph by stipulating the feed rate of the patient positioning table 9 and the period of rotation of the rotary frame in accordance with the present method, and to this end, the image computer 11, which is connected to the control unit 14, or the control unit 14 comprises a matching module that carries out the evaluation of the attenuation values of the detector rows in order to determine the propagation of the contrast medium in accordance with at least one embodiment of the present method; and Col. 3 lines 1-5 which discloses the matching of the scanning speed is performed in a way known per se by changing the feed rate of the patient positioning table and the propagation speed of the contrast medium in the scanning direction corresponds to the rate of blood flow in this direction).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Sun to include determining the blood flow velocity of the target portion and adjusting a moving speed of the second imaging device based on the blood flow velocity as taught by Russinger in order to optimally match the scanning speed to the flow of contrast medium in the scanning direction (Russinger Col. 4 lines 41-44). Doing so would permit a minimization of the contrast medium requirement, and thus a lesser contrast medium burden on the patient and a reduction in examination costs (Russinger Col. 3 lines 45-50).
Regarding claim 6,
Sun further teaches wherein the operations further include: obtaining a target image of the target portion of the target subject based on the first image and the second image (pg. 91 which discloses the processing device 120 can be based on a plurality of medical images to generate the target image of the target object, where it can be based on the motion of the target object detected during generating the target image to perform motion correction and a target image of a target object may be generated by performing motion correction on a plurality of medical images (or a portion thereof) based on the detected motion of the target object. For example, as described in combination with operation 3110, a plurality of medical images may include a first medical image obtained at a first time point before the contrast agent is injected to the target object and a second medical image obtained at a second time point after the contrast agent is injected to the target object).
Regarding claim 7,
Russinger, as applied to claim 5 above, further teaches wherein the determining a blood flow velocity of the target portion includes: obtaining a third image of the target portion of the target subject; and determining the blood flow velocity based on the third image (Col. 3 line 64 – Col. 4 line 7 which discloses that, directly before the start of the table feed for the volume scan of the CT examination, it is possible to read out the measured values, corresponding to density values, of different detector rows when these are at the start position for the volume scan; by comparing these read-out density values of the different detector rows, the arrival time of the contrast medium, and the speed thereof, can be determined; thus, for example, a starting instant can be set firstly as soon as the first detector row (seen in the scanning direction) detects an increased density value that is above a prescribable threshold value. See also Col. 3 lines 31-39 which discloses a measuring point can be prescribed as a trigger ROI (Region of Interest) that follows the course of a vessel during the CT examination; this is possible by means of suitable data processing owing to the vessel contrast caused by the contrast medium; the density values are determined at this measuring point during the CT examination with a time offset in order to be able to reduce or increase the scanning speed in the event of a variation that points to a reduced contrast medium content.)
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Sun and Russinger, as applied to claim 7 above, and further in view of Wagner et al. (US 20190365336 A1), hereinafter Wagner.
Regarding claim 8,
Sun, as modified, teaches the elements of claim 7 as previously stated. Russinger, as applied to claim 7 above, further teaches wherein the determining a blood flow velocity of the target portion includes: determining a velocity of the contrast agent in the target subject; and determining the blood flow velocity based on the velocity of the contrast agent (see Col. 3 line 29 – Col. 4 line 7 disclosing the methods for determining the velocity/speed of the contrast agent).
While the examiner notes that the changing rate/speed of the contrast agent would appear to require determination of a moving distance and a duration corresponding to the moving distance of the contrast agent, this feature is not explicitly disclosed by Russinger.
Wagner, in a similar field of endeavor involving medical imaging, teaches wherein determining a blood flow velocity of a target portion includes:
determining a moving distance and a duration corresponding to the moving distance of the contrast agent in the target subject ([0054]-[0059] which discloses the one or more processors determine durations T.sub.1 and T.sub.2 as well as a difference (thus a duration) between the two, and [0044] which discloses the one or more processors can determine the distance between a first point and a second point (which is a moving distance of the contrast agent/blood flow)); and determining the blood flow velocity based on the moving distance and the duration corresponding to the moving distance of the contrast agent in the target subject ([0059] which discloses the one or more processors can determine (or compute) the velocity of the contrast particles, which is also the arterial flow velocity, as the distance d between the first and second points divided by the absolute difference between the time durations T.sub.1 and T.sub.2 of the first and second time intervals).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified Sun, as currently modified, to include determining a blood flow velocity as taught by Wagner in order to provide flexibility with regard to time intervals (or time windows) over which the integrations can be performed. Such flexibility leads to reduced complexity and increased estimation robustness and accuracy (Wagner [0017]). Furthermore, such a modification amounts to merely a simple substitution of one known blood flow velocity calculation for another yielding predictable results with respect to blood flow monitoring thereby rendering the claim obvious (MPEP 2143).
Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Xu (US 20190046132 A1), hereinafter Xu.
Regarding claim 16,
Sun teaches the elements of claim 15 as previously stated. Sun further teaches determining a distance between the target subject and the one or more components of the second imaging device based on the combination model (pg. 11 which discloses that determining one or more motion parameters of the detector may include obtaining the target distance between the detector and the target object; pg. 61 which discloses that, in some embodiments, for the component of the medical imaging device, the processing device 120 can determine the distance between the second representation of the component in the virtual scanning process and the first representation of the target object, and that the detector may be a flat panel detector; and pg. 77 which discloses the one or more motion parameters may include a motion distance, a motion direction, a motion speed, or the like, or any combination thereof).
Sun fails to explicitly teach wherein the determining the distance is part of adjusting the initial rotation angle and that adjusting the initial rotation angle is based on the distance.
Nonetheless,
Xu, in a similar field of endeavor involving medical imaging, teaches adjusting an initial rotation angle includes ([0047] which discloses the processing device 140 may update the position of the component, update the orientation of the component, or the like, or a combination thereof): determining a distance between the target subject and the one or more components of the second imaging device and adjusting the initial rotation angle based on the distance ([0047] which discloses the trajectory updating module 430 may update the initial trajectory by increasing a distance between a component and a subject (e.g., a patient, a table), and that, in some embodiments, the trajectory updating module 430 may update the initial trajectory by updating an initial orientation of the component on an initial position; see also [0079] and [0105] which disclose the orientation refers to an angle of the component relative to a tangent of the trajectory at the corresponding position of the component).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified adjusting the initial rotation angle of Sun to include the determining of the distance and to be based on the distance as taught by Xu in order to avoid a collision between the one or more components of the second imaging device and the target subject (see Xu [0047], updating the initial trajectory by increasing a distance between a component and a subject).
Regarding claim 17,
Xu, as applied to claim 16 above, further teaches wherein the adjusting the initial rotation angle based on the distance includes: in response to determining that the distance is less than a distance threshold, adjusting the initial rotation angle to obtain the rotation angle, a distance between the target subject and the one or more components of the second imaging device after the initial rotation angle is adjusted exceeds the distance threshold (see at least Figs. 5 and 6; further, it is noted that the goal is for a collision not to occur, and thus for the distance to exceed the distance threshold; accordingly, the distance between the target subject and the one or more components of the second imaging device after the initial rotation angle is adjusted exceeds the distance threshold).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BROOKE L KLEIN whose telephone number is (571)270-5204. The examiner can normally be reached Mon-Fri 7:30-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Kozak can be reached at 5712700552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BROOKE LYN KLEIN/Primary Examiner, Art Unit 3797