DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. It is responsive to the submission dated 03/20/2024. Claims 1-16 are presented for examination, of which claims 1, 8, and 16 are independent claims.
Information Disclosure Statement
2. The information disclosure statements (IDSs) submitted on 03/20/2024 are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
4. Claims 1-5, 7-8, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Han (US 20210056734) in view of Fu et al. (US 20220058821).
Considering claim 1, Han discloses a video processing system comprising: a video recording unit that records a first medical video (MRI image) captured by a target medical apparatus, which is a first medical apparatus, and a second medical video (CT scan image) captured by a subject medical apparatus, which is a second medical apparatus (for example, Han discloses an image conversion system 400, fig. 5, comprising: a training database 410 that stores a CT image (e.g., destination image) captured using a second modality (CT imaging device) that is the target medical device, and an MRI image (e.g., original image) captured using a first modality (MRI imaging device) that is the subject medical device. See figs. 4C and 5 and paras. 82-85, 87-89, and 91-92);
a conversion parameter generation unit that generates a conversion parameter which brings image quality of the second medical video close to image quality of the first medical video by using the first medical video and the second medical video recorded in the video recording unit (for example, Han discloses a training unit 430 that generates parameters of a prediction model that brings the image quality of the MRI image closer to the image quality of the CT image (destination image) using the CT image (destination image) and the MRI image (original image) stored in the training database 410; see fig. 5 and paras. 84-93); and
an image quality conversion processing unit that converts image quality of a medical video captured by the subject medical apparatus by using the conversion parameter (for example, Han discloses an image conversion unit 440 that converts the image quality of medical moving images captured using the first modality (e.g., MRI imaging device) using the parameters of the prediction model to generate a normalized destination image with a standardized alignment, resolution, and/or intensity value distribution; see paras. 85-93). See also fig. 6 and paras. 99-105.
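For illustration only, the mapped concept of a "conversion parameter" that brings one apparatus's image quality closer to another's can be sketched as a simple mean/standard-deviation intensity mapping. This is a hypothetical sketch for clarity; the function names and the mean/std technique are the editor's illustration, not the actual method of Han or Fu.

```python
# Hypothetical sketch: a "conversion parameter" modeled as a (scale, offset)
# pair learned from a reference pair of images, then applied to new images
# from the subject apparatus. Not the actual algorithm of either reference.

def learn_conversion_params(target_img, subject_img):
    """Derive (scale, offset) mapping subject intensities toward the
    target apparatus's intensity distribution."""
    def stats(pixels):
        n = len(pixels)
        mean = sum(pixels) / n
        std = (sum((v - mean) ** 2 for v in pixels) / n) ** 0.5
        return mean, std

    t_mean, t_std = stats(target_img)
    s_mean, s_std = stats(subject_img)
    scale = t_std / s_std if s_std else 1.0
    offset = t_mean - scale * s_mean
    return scale, offset

def convert(pixels, params):
    """Apply the learned parameters to an image from the subject apparatus."""
    scale, offset = params
    return [scale * v + offset for v in pixels]
```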
The Han reference differs from claim 1 in that Han fails to specifically refer to the captured images as medical video images.
However, such a feature is well known in the art, as evidenced by Fu (see paras. 31, 73, 90, and 98-100).
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the teachings of Han to replace the captured medical images with medical video images, in the same conventional manner as taught by Fu, so that the resulting image(s) can be represented as a sequence or stream of moving images.
As per claims 2 and 5, Han, as modified by Fu, discloses that the first medical video is a video captured and obtained by actually using the target medical apparatus in daily medical practice, and the second medical video is a video captured and obtained by actually using the subject medical apparatus in daily medical practice, wherein a video prepared for generating the conversion parameter by using the subject medical apparatus is used as the second medical video. See paras. 82-86 of Han, paras. 31-37 of Fu, and the rationale above with respect to claim 1 for the reasons of obviousness.
As per claim 3, Han, as modified by Fu, discloses the target medical apparatus (403), the subject medical apparatus (140), the video recording unit (410 or 420), the conversion parameter generation unit (430), and the image quality conversion processing unit (440) are connected via a network (460). See figs. 1A and 5-6 and paras. 93-97 of Han and figs. 1, 6 and 7 of Fu.
As per claim 4, Han, as modified by Fu, discloses the conversion parameter generation unit (430) is arranged on a providing side (401) that performs a service for providing the conversion parameter separately from the target medical apparatus (403), the subject medical apparatus (140), the video recording unit (410 or 420), and the image quality conversion processing unit (440). See figs. 1A and 5-6 and paras. 97-98 of Han, where Han discloses that two or more components of the conversion system 400 may implement a pre-processing stage at one location (e.g., a radiotherapy treatment room) while the other components perform post-processing at a research department or another hospital.
As per claim 7, Han, as modified by Fu, discloses the conversion parameter generated by using a combination of the target medical apparatus which is specified and the subject medical apparatus which is specified is used for conversion of image quality of a medical video captured by a medical apparatus that is another unit of the same model as the subject medical apparatus which is specified. See paras. 97-99, where Han discloses that parts of the conversion system can selectively be used such that a pre- or post-processing training stage may be performed in advance at one location, while other parts perform additional processing separately as part of, or prior to, the conversion stage using a purpose-built image conversion device.
The invention of claim 8 contains features that correspond in scope with the limitations recited in claim 1. As the limitations of claim 1 were found obvious over the combined teachings of Han and Fu, it is readily apparent that the applied prior art teaches the underlying elements. As such, the limitations of claim 8 are, therefore, subject to rejection under the same rationale as claim 1. In addition, Han discloses a medical information processing system (400, fig. 5) comprising: one or more processors (401, 403); and a storage device (410 or 420) that stores a program executed by the one or more processors, wherein the program is executed by the one or more processors to read a parameter conversion rule determined in advance, the parameter conversion rule determined in advance being set on a basis of comparison between a pair of images including an image of a second medical apparatus subjected to image quality conversion processing of bringing image quality close to image quality of an image of a first medical apparatus and an image of the second medical apparatus, and to perform parameter conversion processing of converting image quality on an image of the second medical apparatus that has been input on a basis of the parameter conversion rule determined in advance (for example, Han discloses an image conversion system 400, fig. 5, comprising: a training database 410 that stores a CT image (e.g., destination image) captured using a second modality (CT imaging device) that is the target medical device, and an MRI image (e.g., original image) captured using a first modality (MRI imaging device) that is the subject medical device (see figs. 
1 and 5); a training unit 430 that generates parameters of a prediction model that brings the image quality of the MRI image closer to the image quality of the CT image (destination image) using the CT image (destination image) and the MRI image (original image) stored in the training database 410; and an image conversion unit 440 that converts the image quality of medical moving images captured using the first modality (e.g., MRI imaging device) using the parameters of the prediction model to generate a normalized destination image with a standardized alignment, resolution, and/or intensity value distribution. See figs. 4C and 5 and paras. 82-85, 87-92, and 99-105). Han, specifically at paras. 97-105, discloses that parts of the conversion system can selectively be used such that a pre- or post-processing training stage may be performed in advance at one location, while other parts perform additional processing separately as part of, or prior to, the conversion stage using said parameters of the predictive model to bring the quality of the destination image closer to the original pre-stored image in database 410. Additionally, the training unit may use an N3 bias field correction algorithm to correct intensity non-uniformities in the origin image and create a normalized origin image, which is used to generate a binary mask that removes undesirable portions of the normalized origin image by applying thresholding and spatial filtering to the normalized origin image. The training unit 430 may automatically determine the thresholds based on a histogram of image intensity values of the origin image. For example, the training unit 430 may determine a threshold value and compare each intensity value of the normalized origin image against that threshold value. The predetermined threshold value may be a default intensity value. Through this comparison, the training unit 430 may produce a binary mask image having logical "1" or "0" intensity values. 
The intensity values in the binary mask image depend on whether the corresponding intensity values of the original origin image meet or exceed the threshold value. See also paras. 109-120.
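For illustration only, the thresholding and binary-mask generation described above can be sketched as follows. The function names and the midpoint-of-range threshold heuristic are the editor's hypothetical stand-ins, not Han's actual histogram-based implementation.

```python
# Hypothetical sketch of thresholding an image into a binary mask:
# a threshold is derived from the intensity values, and each pixel is
# compared against it to yield logical 1/0 mask values.

def histogram_threshold(pixels):
    """Pick a threshold at the midpoint of the intensity range
    (a stand-in for a real histogram-based heuristic)."""
    lo, hi = min(pixels), max(pixels)
    return lo + (hi - lo) / 2.0

def binary_mask(image_rows, threshold):
    """Return a mask of 1s where intensity meets or exceeds the threshold,
    and 0s elsewhere."""
    return [[1 if v >= threshold else 0 for v in row] for row in image_rows]
```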
The subject matter of independent claim 16 corresponds, in terms of a method, to that of independent system claim 1, and the rationale raised above to reject the latter also applies, mutatis mutandis, to the former.
5. Claims 9-10, 13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Han (US 20210056734) in view of Fu et al. (US 20220058821) and further in view of Hu et al. (US 20170372497).
Regarding claims 9-10 and 15, Han discloses parameter conversion processing for converting the tone value of a color image on a basis of stored 3D coordinates for each pixel of the image of the second medical apparatus that has been input. See paras. 120-125. However, Han, as modified by Fu, fails to teach that the parameter conversion rule determined in advance is a lookup table (LUT).
Hu, in a similar art, discloses using a pre-computed lookup table generated by a trained machine learning algorithm when converting an input image (for example, an MRI image) to a target output image (for example, a CT image) and storing the output image for further use. See para. 30.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the teachings of Han and Fu to include using a pre-computed lookup table, in the same conventional manner as taught by Hu, so as to store pre-defined parameter values of the image for quick retrieval, and to replace complex calculations or lengthy code with a simple search to retrieve the correction parameters for the image.
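For illustration only, the rationale above, replacing per-pixel computation with a pre-computed lookup table, can be sketched as follows. The function names and the inversion transform are the editor's hypothetical examples, not Hu's actual machine-learned table.

```python
# Hypothetical sketch of a pre-computed lookup table (LUT): the conversion
# is computed once for every possible intensity value, so converting an
# image becomes a simple table lookup rather than repeated calculation.

def build_lut(transform, levels=256):
    """Pre-compute the transform for every possible intensity value."""
    return [transform(i) for i in range(levels)]

def apply_lut(image_rows, lut):
    """Convert an image by table lookup instead of re-computation."""
    return [[lut[v] for v in row] for row in image_rows]

# Example transform (hypothetical): simple intensity inversion.
inversion_lut = build_lut(lambda v: 255 - v)
```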
As per claim 13, Han, as modified by Fu and Hu, discloses the one or more processors (12, fig. 1A) and the storage device (16) are included in an Internet Protocol (IP) converter (items 18-22) connected to the second medical apparatus (defined as items 24-41). See figs. 1A and 5-6 and paras. 93-97 of Han and figs. 1, 6 and 7 of Fu.
Allowable Subject Matter
6. Claims 6, 11-12 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because the prior art of record fails to teach the video processing system according to claim 1, wherein a plurality of the conversion parameters generated by using a plurality of the target medical apparatuses is generated in advance for the subject medical apparatus determined in advance, and the image quality conversion processing unit switches between a plurality of the conversion parameters to convert image quality of a medical video captured by the subject medical apparatus determined in advance (as recited in claim 6); and the medical information processing system according to claim 8, wherein the pair of images is a pair of images including an image generated by inputting an image of the second medical apparatus to a generator based on a parameter generated by a machine learning model that has learned an image of the first medical apparatus and an image of the second medical apparatus as training data, and an image of the second medical apparatus that has been input (as recited in claim 11).
Conclusion
7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
FUJIWARA et al. (US 20160055647) discloses a medical image processing apparatus that, according to an embodiment, includes processing circuitry. The processing circuitry obtains a plurality of medical image groups in which respective motions of a part inside a subject have been photographed in time series and executes certain processing on the obtained medical image groups. The processing circuitry analyzes the motions in the respective medical image groups. The processing circuitry generates a medical image in which the motions in the respective medical image groups substantially match each other based on the analyzed motions.
8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESNER SAJOUS whose telephone number is (571) 272-7791. The examiner can normally be reached M-F, 10:00 to 7:30 (ET).
Examiner interviews are available via telephone and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice or email the Examiner directly at wesner.sajous@uspto.gov.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached on 571-272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WESNER SAJOUS/Primary Examiner, Art Unit 2612
WS
11/01/2025