DETAILED ACTION
Response to Arguments
The amendment filed 7 January 2026 has been entered in full. Accordingly, claims 1-13, 16-19, 33, 69, and 70 are pending in the application.
Regarding the rejections under 35 U.S.C. 103, the applicant amends independent claims 1 and 33 to recite “and the inter-element crosstalk refers that photons falling on a detector unit of a detector are falsely sensed by one or more neighboring detector units around the detector unit”. The applicant argues that the prior art of record does not disclose or suggest these limitations.
After an updated search, the examiner relies on newly found reference Dijkstra et al. (Hyperspectral demosaicking and crosstalk correction using deep learning, 2019, Machine Vision and Applications, Vol. 30, Pages 1-21), hereinafter “Dijkstra”, in a new ground of rejection necessitated by the applicant’s amendment.
Regarding the rejections under 35 U.S.C. 103, the applicant argues that independent claim 19 is not met by the prior art because in Mailhe “the ‘matrix’ is merely applied to the gradient to refine direction”, whereas claim 19 requires “a correction coefficient matrix” that is “of the array dimension and is determined based on a trained artifact correction model” and is used to determine processed image data based on the imaging data. The examiner respectfully disagrees. Mailhe states in [0056] that the control/matrix “is provided to constrain the optimization and/or as a change for varying the input image Y for each iteration of the optimization”. The examiner asserts that “correction coefficient matrix” is broadly recited and is met by this description in Mailhe. In [0057], Mailhe states: “For the control of altering the images, the control is based on information from previous iterations and/or from a current iteration.” Therefore, the control/matrix is “determined based on a trained artifact correction model”. For these reasons, the art rejection of claim 19 is respectfully maintained.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-11, 17, 18, 33, and 69 are rejected under 35 U.S.C. 103 as being unpatentable over Mailhe (U.S. Pub. No. 2017/0372193), as cited in the IDS filed 29 May 2025, in view of Dijkstra (Hyperspectral demosaicking and crosstalk correction using deep learning, 2019, Machine Vision and Applications, Vol. 30, Pages 1-21).
Claim 1 is met by the combination of Mailhe and Dijkstra, wherein
Mailhe teaches:
A system (See the Abstract.), comprising:
at least one storage device including a set of instructions (See [0066].); and
at least one processor in communication with the at least one storage device, wherein when executing the set of instructions, the at least one processor is directed to cause the system to perform operations including (See [0066].):
obtaining imaging data (See [0022]: “In act 30, a medical scanner acquires an image representing a patient.”), wherein the imaging data includes an artifact…(See [0027]: “The image may include one or more artifacts or distortions. Different modalities of imaging are susceptible to different types of artifacts or corruption. The physics for scanning and/or the processing to create the image from the scan may generate an artifact. Motion of the patient or sensor performing the scan may generate an artifact. Example artifacts in medical imaging include noise, blur (e.g., motion artifact), shading (e.g., blockage or interference with sensing), missing information (e.g., missing pixels or voxels in inpainting due to removal of information or masking), reconstruction (e.g., degradation in the measurement domain), and/or under-sampling artifacts (e.g., under-sampling due to compressed sensing). Other artifacts may be in the image.”); and
determining an artifact corrected image based on a trained artifact correction model and the imaging data (See [0029]-[0030]: “The image for a given patient is to be corrected. For example, the correction is for CT, X-ray and MR denoising, MR and ultrasound reconstruction, or MR super-resolution. The correction uses a probability from a generative model to model what a good image is.”).
Mailhe does not disclose the following; however, Dijkstra teaches:
obtaining imaging data, wherein the imaging data includes an artifact caused by inter-element crosstalk (See page 2, right column: “The aim of this research is to increase the spatial resolution and decrease crosstalk of hyperspectral images taken with a mosaic image sensor. By taking advantage of the flexibility and trainability of deep neural networks [26-28]…”.), and the inter-element crosstalk refers that photons falling on a detector unit of a detector are falsely sensed by one or more neighboring detector units around the detector unit (See page 2, left column: “A standard RGB camera with a Bayer filter [12] is an example of this type of system. Recently these types of imaging systems have been further extended to 3×3, 4×4 and 5×5 mosaics [13] in both visible and near-infrared spectral ranges…However these sensors suffer from a detrimental effect called crosstalk [14], which means that distinct spectral channels also receive some response of other spectral bands.”);
Mailhe and Dijkstra together teach the limitations of claim 1. Dijkstra is directed to a similar field of art (correction of artifacts in images). Therefore, Mailhe and Dijkstra are combinable. Mailhe suggests the flexibility to handle any type of artifact in medical and non-medical images—see [0036]: “The same generative model may be used for correction of images suffering from any of various types of artifacts.” Modifying the system and method of Mailhe by adding the capability of processing “imaging data [that] includes an artifact caused by inter-element crosstalk, and the inter-element crosstalk refers that photons falling on a detector unit of a detector are falsely sensed by one or more neighboring detector units around the detector unit”, as taught by Dijkstra, would yield the expected and predictable result of more comprehensive reduction of artifacts in input images. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Mailhe and Dijkstra in this way.
Claim 2 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 1, wherein
And Mailhe further teaches:
the imaging data is raw image data, the imaging data has elements arranged in an array of an array dimension (See [0024]: “Any type of medical image and corresponding medical scanner may be used. In one embodiment, the medical image is a computed tomography (CT) image acquired with a CT system. For example, a chest CT dataset may be used for detecting a bronchial tree, fissures, and/or vessels in the lung. For CT, the raw data from the detector is reconstructed into a three-dimensional representation.” Rows and columns of pixels of the image data meet the claimed “elements arranged in an array of an array dimension”.), and the trained artifact correction model is associated with the array dimension of the array (See [0030]: “The correction uses a probability from a generative model to model what a good image is.” The generative model is “associated with the array dimension of the array”, since it acts on pixels of the rows and columns of the image.).
Claim 3 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 2, wherein
And Dijkstra further teaches:
the determining the artifact corrected image based on the trained artifact correction model and the imaging data includes: determining processed image data based on the trained artifact correction model and the raw image data; and determining the artifact corrected image by image reconstruction using the processed image data (See page 11, section 5.3, output/corrected image data is based on a trained neural network and raw image data having crosstalk artifacts.).
Mailhe and Dijkstra together teach the limitations of claim 3. Dijkstra is directed to a similar field of art (correction of artifacts in images). Therefore, Mailhe and Dijkstra are combinable. Mailhe does not appear to specify determining the artifact corrected image by reconstruction from processed image data that is generated from raw image data. Modifying the system and method of Mailhe by adding the capability of “determining processed image data based on the trained artifact correction model and the raw image data; and determining the artifact corrected image by image reconstruction using the processed image data”, as taught by Dijkstra, would yield the expected and predictable result of providing a complete processing chain for input images. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Mailhe and Dijkstra in this way.
Claim 4 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 3, wherein
And Dijkstra further teaches:
the determining the processed image data based on the trained artifact correction model and the raw image data includes: obtaining the processed image data by inputting the raw image data into the trained artifact correction model (See page 11, left column: “Prior knowledge about the input mosaic image and the hyperspectral cube in the output can be exploited to train an end-to-end demosaicking deep neural network.”).
See the motivation to combine in the treatment of claim 3.
Claim 5 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 3, wherein
And Mailhe further teaches:
the determining the processed image data based on the trained artifact correction model and the raw image data includes: determining a correction coefficient matrix based on the trained artifact correction model; and determining the processed image data based on the correction coefficient matrix and the raw image data (See [0056]-[0057]: “In act 36, a control is provided for the optimization. The control is provided to constrain the optimization and/or as a change for varying the input image Y for each iteration of the optimization….The number of iterations to use, step sizes, preconditioners (e.g., a matrix applied to the gradient to refine direction), and/or other setting for optimization are provided.” The examiner asserts that the preconditioner meets the claimed “correction coefficient matrix”, and since the control/matrix is provided for each iteration of the optimization, the matrix is based on the model.).
Claim 6 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 1, wherein
And Mailhe further teaches:
the imaging data is an original image (See [0022]: “In act 30, a medical scanner acquires an image representing a patient.”), and the determining the artifact corrected image of the original image based on the trained artifact correction model and the imaging data includes: obtaining the artifact corrected image by inputting the original image into the trained artifact correction model (See [0029]-[0030]: “The image for a given patient is to be corrected. For example, the correction is for CT, X-ray and MR denoising, MR and ultrasound reconstruction, or MR super-resolution. The correction uses a probability from a generative model to model what a good image is.”).
Claim 7 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 1, wherein
And Mailhe further teaches:
the trained artifact correction model is determined based on a training process (See [0030]: “The generative model encodes training data to a few independent latent variables or learned features, and may generate synthetic data by sampling the latent variables.”), the training process including: obtaining a plurality of sample sets, wherein each sample set includes sample imaging data and reference sample imaging data; and obtaining the trained artifact correction model by training a preliminary artifact correction model based on the plurality of sample sets (See [0041]: “The training of the generative model uses all or some of these sources of training data. In alternative embodiments, the good and bad quality training data is used to initially train the generative model rather than using the poor-quality training images for refinement.”).
Claim 8 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 7, wherein
And Mailhe further teaches:
for each sample set, the sample imaging data includes sample elements arranged in a sample array of a sample array dimension (See [0041]. The bad training data are images with pixels (“sample elements”) in rows and columns (“in a sample array of a sample array dimension”).), the reference sample imaging data includes reference sample elements arranged in a reference sample array of a reference sample array dimension (See [0041]. The good training data are images with pixels (“reference sample elements”) in rows and columns (“in a reference sample array of a reference sample array dimension”).), and the sample array dimension and the reference sample array dimension equal the array dimension of the original image (Given that the “dimension” is interpreted as image rows or columns, each of the good training data, bad training data, and input/original image has rows or columns and therefore has dimensions that are “equal”.).
Claim 9 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 8, wherein
And Mailhe further teaches:
the sample imaging data or the reference sample imaging data is acquired by a sample imaging device; the sample imaging device includes a sample detector; and the sample detector includes a sample detector unit array of the array dimension (See [0024]: “The data may be ultrasound data. Beamformers and a transducer array scan a patient acoustically. The polar coordinate data is detected and processed into ultrasound data representing the patient.”).
Claim 10 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 9, wherein
And Mailhe further teaches:
the sample detector unit array is configured as a plurality of sample detector modules (See [0024]: “The data may be ultrasound data. Beamformers and a transducer array scan a patient acoustically. The polar coordinate data is detected and processed into ultrasound data representing the patient.” The examiner asserts that the cited transducer array includes a series of elements that direct sound waves.).
Claim 11 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 9, wherein
And Dijkstra further teaches:
the obtaining the plurality of sample sets includes: for a sample set of the plurality of sample sets, obtaining the reference sample imaging data; and determining the sample imaging data by adding a simulated sample crosstalk artifact to the reference sample imaging data (See page 14, right column: “Two methods for increasing the training set size are either to increase the number of training images or to increase the size of the training images (explained in Sect. 5).” The examiner asserts that the resulting data once the size of the training images is increased serves as the claimed “obtaining the reference sample imaging data”. The examiner further asserts that the upscaled training set includes crosstalk and accordingly meets “adding a simulated sample crosstalk artifact to the reference sample imaging data”.).
See the motivation to combine in the treatment of claim 1.
Claim 17 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 7, wherein
And Dijkstra further teaches:
the obtaining the plurality of sample sets includes: for a sample set of the plurality of sample sets, obtaining, from a sample imaging device installed with an anti-crosstalk apparatus, reference sample imaging data; and obtaining, from the sample imaging device, sample imaging data having a sample crosstalk artifact (See [0047]: “Each of the pairs of ultrasound imaging data includes a single-line ultrasound imaging data captured based on ultrasound transducer(s) receiving a single line in response to a transmitted single narrow-focused pulse (e.g., based on SLT and SLA), and a multiple-line ultrasound imaging data captured based on ultrasound transducer(s) receiving multiple narrow-focused lines in response to simultaneously transmitted line(s) with predefined focus width (e.g., based on MLT, MLA, or MLT-MLA).”).
See the motivation to combine in the treatment of claim 1.
Claim 18 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 7, wherein
And Dijkstra further teaches:
the obtaining the plurality of sample sets includes: for a sample set of the plurality of sample sets, obtaining sample imaging data having a sample crosstalk artifact; and determining reference sample imaging data by removing the sample crosstalk artifact from the sample imaging data according to a predetermined algorithm (See Table 2 on page 13, “preCT” or “postCT” columns. The caption of Table 2 states: “In preCT and postCT crosstalk correction is applied before or after upscaling respectively”. Since the examiner interprets the result of upscaling training data to meet the claimed “determining reference sample imaging data”, the crosstalk correction then meets “by removing the sample crosstalk artifact from the sample imaging data according to a predetermined algorithm”.).
See the motivation to combine in the treatment of claim 1.
Claim 33 is met by the combination of Mailhe and Dijkstra for the reasons given in the treatment of claim 1.
Claim 69 is met by the combination of Mailhe and Dijkstra, wherein
The combination of Mailhe and Dijkstra teaches:
The system of claim 1, wherein
And Dijkstra further teaches:
a shape of the artifact caused by the inter-element crosstalk relates to locations where detector units involved in the inter-element crosstalk are located in a detector unit array (See page 8, right column: “A mosaic imaging sensor suffers from crosstalk. This means that each filter in the mosaic is not only sensitive to the designed spectral range, but information from other bands bleeds through. This is mostly regarded as an unwanted effect and can be observed by a desaturation of the image colors [14].” The examiner asserts that the desaturation of image colors meets the claimed “shape of the artifact”.).
See the motivation to combine in the treatment of claim 1.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Mailhe (U.S. Pub. No. 2017/0372193), as cited in the IDS filed 29 May 2025, in view of Senouf (U.S. Pub. No. 2021/0256700).
Claim 19 is met by the combination of Mailhe and Senouf, wherein
Mailhe teaches:
A system (See the Abstract.), comprising:
at least one storage device including a set of instructions (See [0066].); and
at least one processor in communication with the at least one storage device, wherein when executing the set of instructions, the at least one processor is directed to cause the system to perform operations including (See [0066].):
obtaining imaging data of an original image (See [0022]: “In act 30, a medical scanner acquires an image representing a patient.”), wherein the imaging data has elements arranged in an array of an array dimension (See [0024]: “Any type of medical image and corresponding medical scanner may be used. In one embodiment, the medical image is a computed tomography (CT) image acquired with a CT system. For example, a chest CT dataset may be used for detecting a bronchial tree, fissures, and/or vessels in the lung. For CT, the raw data from the detector is reconstructed into a three-dimensional representation.” Rows and columns of pixels of the image data meet the claimed “elements arranged in an array of an array dimension”.), and the imaging data includes an artifact…(See [0027]: “The image may include one or more artifacts or distortions. Different modalities of imaging are susceptible to different types of artifacts or corruption. The physics for scanning and/or the processing to create the image from the scan may generate an artifact. Motion of the patient or sensor performing the scan may generate an artifact. Example artifacts in medical imaging include noise, blur (e.g., motion artifact), shading (e.g., blockage or interference with sensing), missing information (e.g., missing pixels or voxels in inpainting due to removal of information or masking), reconstruction (e.g., degradation in the measurement domain), and/or under-sampling artifacts (e.g., under-sampling due to compressed sensing). Other artifacts may be in the image.”);
obtaining a correction coefficient matrix of the array dimension, wherein the correction coefficient matrix is of the array dimension and is determined based on a trained artifact correction model (See [0056]-[0057]: “In act 36, a control is provided for the optimization. The control is provided to constrain the optimization and/or as a change for varying the input image Y for each iteration of the optimization….The number of iterations to use, step sizes, preconditioners (e.g., a matrix applied to the gradient to refine direction), and/or other setting for optimization are provided.” The examiner asserts that the preconditioner meets the broadly claimed “correction coefficient matrix”, and since the control/matrix is provided for each iteration of the optimization, the matrix is based on the model.);
determining processed image data based on the correction coefficient matrix and the imaging data (See [0029]-[0030]: “The image for a given patient is to be corrected. For example, the correction is for CT, X-ray and MR denoising, MR and ultrasound reconstruction, or MR super-resolution. The correction uses a probability from a generative model to model what a good image is.”); and
Mailhe does not disclose the following; however, Senouf teaches:
and the imaging data includes an artifact caused by inter-element crosstalk (See [0078]: “At least some of the systems, methods, apparatus, and/or code instructions described herein improve the quality of US images generated by the MLT set-up, by the trained CNN describe herein that corrects cross-talk artifacts and/or reduces the effects of cross-talk artifacts on the quality of the final reconstructed image.”)
determining an artifact corrected image of the original image based on the processed image data (See [0047]: “The CNN outputs adjusted narrow-focused received lines. An adjusted ultrasound image is computed (e.g., reconstructed) according to the adjusted narrow-focused received lines.”).
Mailhe and Senouf together teach the limitations of claim 19. Senouf is directed to a similar field of art (correction of artifacts in ultrasound images). Therefore, Mailhe and Senouf are combinable. Mailhe suggests the flexibility to handle any type of artifact—see [0036]: “The same generative model may be used for correction of images suffering from any of various types of artifacts.” Modifying the system and method of Mailhe by adding the capability of processing “imaging data [that] includes an artifact caused by inter-element crosstalk”, as taught by Senouf, would yield the expected and predictable result of more comprehensive reduction of artifacts in medical images. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Mailhe and Senouf in this way.
Allowable Subject Matter
Claims 12, 13, 16, and 70 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Regarding the reasons for indicating allowable subject matter for claims 12, 13, and 16, see the reasons stated on page 15 of the Non-Final Rejection dated 7 October 2025. New claim 70 includes limitations similar to the limitations in claim 12 and is indicated as having the same allowable subject matter.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN S LEE whose telephone number is (571)272-1981. The examiner can normally be reached 11:30 AM - 7:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached at (571)270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Jonathan S Lee/Primary Examiner, Art Unit 2677