Prosecution Insights
Last updated: April 19, 2026
Application No. 18/576,210

RADIOGRAPHIC IMAGE ACQUIRING DEVICE, RADIOGRAPHIC IMAGE ACQUIRING SYSTEM, AND RADIOGRAPHIC IMAGE ACQUISITION METHOD

Non-Final OA: §103, §112, DP
Filed: Jan 03, 2024
Examiner: KUDO, KEN
Art Unit: 2671
Tech Center: 2600 (Communications)
Assignee: Hamamatsu Photonics K.K.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg). Grants only 0% of cases.
Interview Lift: +0.0% (minimal +0% lift, with vs. without interview; based on resolved cases with interview)
Avg Prosecution (typical timeline): 2y 9m
Career History: 12 total applications across all art units; 12 currently pending

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 51.3% (+11.3% vs TC avg)
§102: 2.6% (-37.4% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 0 resolved cases.
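The panel above can be cross-checked directly: assuming each "vs TC avg" figure is the simple difference between the statute-specific rate and the Tech Center average (an assumption on our part, since the tool does not state its formula), the implied TC average can be recovered per statute:

```python
# Consistency check on the Statute-Specific Performance panel, assuming each
# "vs TC avg" delta is a simple difference (statute rate - TC average).
rates  = {"101": 18.0, "103": 51.3, "102": 2.6, "112": 25.6}      # rates, %
deltas = {"101": -22.0, "103": 11.3, "102": -37.4, "112": -14.4}  # vs TC avg, %

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute implies the same 40.0% TC average
```

All four statutes back out the same 40.0% Tech Center baseline, so the panel's deltas are internally consistent with a single TC average estimate.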

Office Action

§103 §112 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election of Species I.C (claims 6 and 17) in the reply filed on 02/17/2026 is acknowledged. Applicant identifies claims 1-3, 6, 11-14, 17, and 22 as readable on the elected species. Applicant's election with traverse is noted; however, the traversal is not persuasive. Although Applicant argues there would not be a serious search burden if election were not required, the groups as set forth are directed to independent and distinct technical features such that the search is diverse for each group. Specifically, Species I.A requires building the trained model using image data obtained by adding noise values along a normal distribution to a radiographic image; Species I.B requires building the trained model using a radiographic image obtained by the scintillator layer; Species I.C requires building the trained model using an original radiographic image and a generated noise map; Species I.D requires selecting from a plurality of machine learning models based on the calculated average energy of the radiation used for imaging; Species I.E requires selecting from a plurality of machine learning models based on specific image characteristics; and Species II requires removing noise from the radiographic image using filter processing without a trained model. These differences reflect different underlying concepts, structures, and required prior art searches. Thus, the search for the generic claims would not encompass the specific subject matter of each group, and a search directed to one group would not necessarily be expected to uncover the most relevant prior art for the other groups, because each group requires its own distinct search strategies and search queries.
The examination of all of the claims would place an undue burden on the Examiner, and for at least these reasons the restriction requirement is still deemed proper and is therefore made FINAL. Accordingly, examination will proceed on the elected species only, namely Species I.C, and the claims readable thereon. Claims 1-22 are pending; non-elected claims 4-5, 7-10, 15-16, and 18-21 are withdrawn from further consideration.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) ELEMENT IN CLAIM FOR A COMBINATION. An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and (C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means" but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "an image processing module configured to execute ..." as recited in claim 1; and "a source configured to irradiate ...; and a transport device configured to transport ..." as recited in claim 11. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function.

Claim Rejections - 35 U.S.C. § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION. The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-3, 6, 11-14, 17, and 22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential structural cooperative relationships of elements, such omission amounting to a gap between the necessary structural connections. See MPEP § 2172.01. In particular, claims 1 and 12 recite that the scintillator layer includes "P (P is an integer equal to or greater than 2) scintillator units disposed separately to correspond to the N detection elements," but the claims do not set forth the required relationship between P and N or otherwise explain what it means for the scintillator units to "correspond to" the detection elements. The phrase "to correspond to" is indefinite in this context because it does not tell a person of ordinary skill in the art whether: (i) P must equal N in a one-to-one arrangement; (ii) one scintillator unit may correspond to multiple detection elements; (iii) multiple scintillator units may correspond to one detection element; or (iv) the "correspondence" is only approximate positional alignment rather than a fixed structural mapping. Accordingly, it is unclear whether the claims require a one-to-one relationship, a one-to-many relationship, a many-to-one relationship, or some other form of positional or functional correspondence. Because claims 2-3, 6, and 11 depend from claim 1, they inherit this ambiguity and fail to cure the deficiency. Because claims 13-14, 17, and 22 depend from claim 12, they inherit this ambiguity and fail to cure the deficiency. Therefore, the scope of claims 1-3, 6, 11-14, 17, and 22 is not reasonably certain.

Obviousness-Type Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-3, 6, 11-14, 17, and 22 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5-6, and 10 of U.S. Patent No. 12,591,072 B2 in view of Gerlach (Gerlach et al., EP 1010021 B1, 2004). The chart below pairs the claim language of this application with the corresponding claim language of the '072 patent.

1A (application): A radiographic image acquiring device comprising:
1A ('072): A radiographic image acquiring device comprising:

1B (application): an imaging device configured to scan radiation passing through a subject in one direction and capture an image of the radiation to acquire a radiographic image;
1B ('072): an imaging device configured to scan radiation passing through a target object in one direction and capture an image thereof to acquire a radiographic image;

1C (application): a scintillator layer provided on the imaging device to convert the radiation into light; and
1C ('072): a scintillator configured to be provided on the imaging device to convert the radiation into light; and

1D (application): an image processing module configured to execute noise removal processing of removing noise from the radiographic image,
1D ('072): an image processing module configured to input the radiographic image to a trained model constructed through machine training in advance using image data and execute a noise removal process of removing noise from the radiographic image,
Application claim 3 (depending from claim 1): The radiographic image acquiring device according to claim 1, wherein the image processing module inputs the radiographic image to a trained model built in advance through machine learning using image data and executes noise removal processing of removing noise from the radiographic image.

1E (application): wherein the imaging device includes N (N is an integer equal to or greater than 2) detection elements arrayed in a direction orthogonal to the one direction to detect the light and output detection signals, and a readout circuit configured to output the radiographic image by outputting the detection signal for each of the N detection elements, and
1E ('072): wherein the imaging device includes a detection element in which pixel lines each having M (M is an integer equal to or greater than 2) pixels arranged in the one direction are configured to be arranged in N columns (N is an integer equal to or greater than 2) in a direction orthogonal to the one direction and which is configured to output a detection signal related to the light for each of the pixels, and a readout circuit configured to output the radiographic image by adding the detection signals output from at least two of the M pixels for each of the pixel lines of N columns in the detection element and sequentially outputting the added N detection signals, and
Application claim 2 (depending from claim 1): The radiographic image acquiring device according to claim 1, wherein the imaging device includes the detection element configured such that pixel lines each having M (M is an integer equal to or greater than 2) pixels arrayed in the one direction are arrayed in N columns (N is an integer equal to or greater than 2) in a direction orthogonal to the one direction to output a detection signal related to the light for each of the pixels, and the readout circuit configured to output the radiographic image by performing addition processing on the detection signals output from at least two of the M pixels for each of the pixel lines of N columns of the detection element and outputting the N detection signals on which the addition processing is performed.

1F (application, in view of Gerlach): the scintillator layer includes P (P is an integer equal to or greater than 2) scintillator units disposed separately to correspond to the N detection elements, and
Gerlach, in [0019], [0020], [0023], and [0026], teaches low-energy crystal array 240 and high-energy crystal array 280 each consisting of an array of crystal elements physically and optically matched to the corresponding photodiode elements, with separation between adjacent elements and coating 245/285 serving to hold the crystal elements together and create an optical block for reducing optical cross-talk, which corresponds to separately disposed scintillator units with intervening separation structure.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the reference device to include Gerlach's scintillator layer having P scintillator units disposed separately to correspond to the N detection elements, with a separation unit disposed between adjacent scintillator units, because Gerlach teaches corresponding scintillator/crystal elements optically matched to detector elements and separated to thereby reduce optical cross-talk.

6A (application): The radiographic image acquiring device according to claim 3, wherein the image processing module is configured to
1F ('072): wherein the image processing module includes

6B (application): derive an evaluation value obtained by evaluating spread of a noise value from the pixel value of each pixel of the radiographic image on the basis of relationship data indicating a relationship between the pixel value and the evaluation value and generate a noise map that is data in which the derived evaluation value is associated with each pixel of the radiographic image, and
1G ('072): at least one processor configured to derive an evaluation value obtained by evaluating a spread of a noise value from a pixel value of each pixel of the radiographic image on the basis of relational data indicating a relationship between the pixel value and the evaluation value and generate a noise map which is data obtained by associating the derived evaluation value with each pixel of the radiographic image, and

6C (application): input the radiographic image and the noise map to the trained model and execute noise removal processing of removing noise from the radiographic image.
1H ('072): input the radiographic image and the noise map to the trained model and execute the noise removal process of removing noise from the radiographic image.

11 (application): A radiographic image acquiring system comprising: the radiographic image acquiring device according to claim 1; a source configured to irradiate the subject with radiation; and a transport device configured to transport the subject in the one direction with respect to the imaging device.
5 ('072): A radiographic image acquiring system comprising: the radiographic image acquiring device according to claim 1; a source configured to radiate radiation to the target object; and a transport device configured to transport the target object to the imaging device in the one direction.

Regarding claims 12-14, 17, and 22, the rationale provided for claims 1-3, 6, and 11 is incorporated herein. The devices of claims 1-3, 6, and 11 correspond to the methods of claims 12-14, 17, and 22 and perform the steps recited therein. Therefore, the claimed invention of the '072 patent renders the presently claimed invention obvious.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 11-12, 14, and 22 are rejected under 35 U.S.C. § 103 as being unpatentable over Gerlach (Gerlach et al., EP 1010021 B1, 2004) in view of Wang (Wang et al., US 2021/0097662 A1, 2021).

Regarding claim 1, Gerlach teaches a radiographic image acquiring device comprising (Gerlach, in [0011], teaches an x-ray imaging system):

an imaging device configured to scan radiation passing through a subject in one direction and capture an image of the radiation to acquire a radiographic image; (Gerlach, in [0013-0015], [0036], and [0039], teaches an x-ray imaging inspection system in which an object to be scanned is passed through a tunnel on a conveyor belt (e.g., x-rays pass through the object, the attenuation is received by an x-ray sensor, and the intensity is scanned on a line-by-line basis), and a two-dimensional image is reconstructed from scan lines in synchronization with movement of the object.)

a scintillator layer provided on the imaging device to convert the radiation into light; and (Gerlach, in [0020] and [0023], teaches low-energy crystal array 240 and high-energy crystal array 280 of a scintillating type, which convert x-ray photons into visible light photons sensed by corresponding photodiodes.)

wherein the imaging device includes N (N is an integer equal to or greater than 2) detection elements arrayed in a direction orthogonal to the one direction to detect the light and output detection signals, and (Gerlach, in [0019], [0022], and [0026], teaches a linear array of photodiodes in a scanning imaging system, with each photodiode converting detected light into an analog signal, and further teaches that the low-energy photodiode array consists of N photodiode elements; Gerlach also teaches that, in one embodiment, the number of photodiodes and associated crystal elements is 32, and Figure 2(B) expressly shows N photodiode elements 2301 to 230N arranged in a line.)

a readout circuit configured to output the radiographic image by outputting the detection signal for each of the N detection elements, and (Gerlach, in [0018-0019], [0032], and [0038-0043], teaches a readout/data processing path receiving detection signals from the photodiodes and performing signal conditioning, multiplexing, analog-to-digital conversion, buffering, and digital signal processing thereon, and teaches processing the read-out detector data into pixel-associated data and then into data representing intensity and coded color for reconstruction/output of a two-dimensional radiographic image.)

the scintillator layer includes P (P is an integer equal to or greater than 2) scintillator units disposed separately to correspond to the N detection elements, and a separation unit disposed between the P scintillator units. (Gerlach, in [0019], [0020], [0023], and [0026], teaches low-energy crystal array 240 and high-energy crystal array 280 each consisting of an array of crystal elements physically and optically matched to the corresponding photodiode elements, with separation between adjacent elements and coating 245/285 serving to hold the crystal elements together and create an optical block for reducing optical cross-talk, which corresponds to separately disposed scintillator units with intervening separation structure.)

Gerlach teaches removing noise by using signal conditioning 720 for x-ray photon noise [0038-0040]; however, Gerlach fails to expressly disclose an image processing module configured to execute noise removal processing of removing noise from the radiographic image. Wang teaches such a module. (Wang, in [0004], recognizes that CT utilizes x-ray radiation and that images reconstructed from reduced radiation may be noisy and/or may contain artifacts; then, in [0013], teaches a MAP-NN apparatus that receives a LDCT/CT image as input, includes T trained neural network modules coupled in series, and generates respective output images that are incrementally denoised.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Gerlach's x-ray imaging system to include Wang's trained neural-network denoising processing, because Gerlach already recognizes that x-ray image data is subject to noise and performs noise-related signal conditioning, while Wang teaches using a trained neural-network image-processing module to reduce noise/artifacts in x-ray CT images. Applying Wang's denoising processing to Gerlach's acquired radiographic image would have been a predictable use of a known medical-image denoising technique to improve image quality, such as by further reducing residual noise/artifacts and improving the usefulness of the reconstructed radiographic image.

Regarding claim 3, Gerlach as modified by Wang teaches the radiographic image acquiring device according to claim 1, wherein the image processing module inputs the radiographic image to a trained model built in advance through machine learning using image data and executes noise removal processing of removing noise from the radiographic image. (Wang, in [0005], teaches a system including a modularized adaptive processing neural network (MAP-NN) apparatus configured to receive a LDCT image as input, where the MAP-NN apparatus includes a number, T, of trained neural network (NN) modules coupled in series, and each trained NN module generates a respective output image corresponding to an incrementally denoised received input image; in [0006], teaches that the trained NN modules are trained based, at least in part, on a training input image; in [0013], likewise teaches a method including receiving a LDCT image as input to the MAP-NN apparatus having trained NN modules coupled in series and generating, by each trained NN module, a respective output image corresponding to an incrementally denoised received test input image; and in [0014], teaches that the trained NN modules are trained based, at least in part, on a training input image. Thus, Wang teaches inputting a radiographic image to a trained model built in advance through machine learning using image data and executing noise removal processing on that image.)

Regarding claims 11-12, 14, and 22, the rationale provided for claims 1 and 3 is incorporated herein. In addition, the devices of claims 1 and 3 correspond to the methods of claims 12 and 14 and the systems of claims 11 and 22, and perform the steps recited therein. Therefore, these claims are rejected on the same grounds.

Claims 2 and 13 are rejected under 35 U.S.C. § 103 as being unpatentable over Gerlach as modified by Wang, and further in view of Luu (Luu et al., US 2019/0179040 A1, 2019).
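For readers unfamiliar with the cited Wang reference, the serial MAP-NN arrangement characterized above (T trained modules coupled in series, each emitting an incrementally denoised image) can be sketched in a few lines. This is a minimal illustration under our own assumptions: the smoothing stage is a hypothetical stand-in for Wang's trained neural network modules, not Wang's actual implementation.

```python
import numpy as np

# Sketch of a MAP-NN-style serial pipeline as characterized from Wang
# [0005], [0013]: T modules coupled in series, each producing an
# incrementally denoised output. denoise_stage is a placeholder
# (a 3x3 box filter), standing in for one trained NN module.
def denoise_stage(img: np.ndarray) -> np.ndarray:
    """Placeholder for one trained NN module: a 3x3 box filter."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

def map_nn(noisy: np.ndarray, T: int = 3) -> list[np.ndarray]:
    """Couple T stages in series and return every incremental output."""
    outputs, x = [], noisy
    for _ in range(T):
        x = denoise_stage(x)
        outputs.append(x)
    return outputs

rng = np.random.default_rng(0)
noisy = rng.normal(0.0, 1.0, (32, 32))  # pure-noise stand-in for a LDCT image
stages = map_nn(noisy, T=3)
print([round(float(s.std()), 3) for s in stages])  # noise shrinks stage by stage
```

The point illustrated is only the serial, incremental structure: each stage's output is a further-denoised version of its predecessor's, which is what the rejection's characterization of Wang turns on.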
Regarding claim 2, Gerlach as modified by Wang teaches the radiographic image acquiring device according to claim 1, wherein the imaging device includes a linear array of photodiodes, including an embodiment with N photodiode elements, with per-element amplification, multiplexing, and analog-to-digital conversion readout and temporal pixel-pairing between energy channels. Gerlach as modified by Wang, however, fails to teach TDI-style addition processing across M pixel stages. Luu teaches:

the detection element configured such that pixel lines each having M (M is an integer equal to or greater than 2) pixels arrayed in the one direction are arrayed in N columns (N is an integer equal to or greater than 2) in a direction orthogonal to the one direction to output a detection signal related to the light for each of the pixels, and (Luu, in [0057] and [0069-0070], teaches a two-dimensional pixel array comprising rows and columns of individual pixels and operation in TDI mode while the subject moves along the scan direction / in an axial direction [the rows and columns of the 2D pixel array correspond to the claimed M-by-N pixel organization, i.e., multiple pixels arranged in the scan direction and multiple columns arranged orthogonal thereto], thereby teaching pixel lines each having multiple pixels in the scan direction arranged in multiple columns orthogonal thereto, with each pixel outputting an electrical detection signal.)

the readout circuit configured to output the radiographic image by performing addition processing on the detection signals output from at least two of the M pixels for each of the pixel lines of N columns of the detection element and outputting the N detection signals on which the addition processing is performed. (Luu, in [0038] and [0058], teaches TDI circuitry providing time delay and summing to output combined electrical signals in a line-scanning format for generating two-dimensional scans, thereby teaching the claimed addition processing on signals from at least two of the M pixels and outputting N signals on which the addition processing is performed.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to configure the detector/readout of Gerlach as modified by Wang in accordance with Luu's TDI arrangement, because Gerlach already uses a moving-object line-scanning x-ray system and Luu teaches, in that same type of scanning context, a two-dimensional pixel array organized in rows and columns with TDI circuitry that performs time delay and summing to produce line-scan outputs. A skilled artisan would have understood that using Luu's delay-and-summing readout in Gerlach would predictably allow signals from multiple pixels arranged along the scan direction (M) for each detector column (N) to be combined before output, thereby improving the acquired radiographic image data in the expected manner for line-scanning x-ray imaging.

Regarding claim 13, the rationale provided for claim 2 is incorporated herein. In addition, the device of claim 2 corresponds to the method of claim 13 and performs the steps recited therein. Therefore, claim 13 is rejected on the same grounds.

Claims 6 and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over Gerlach as modified by Wang, and further in view of Guo (Guo et al., "Toward Convolutional Blind Denoising of Real Photographs," 2018).
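The TDI-style addition processing attributed to Luu above (summing the M per-stage signals that observed the same object line, once per detector column) can be sketched numerically. The shapes, names, and signal model below are illustrative assumptions of ours, not Luu's disclosure.

```python
import numpy as np

# Sketch of TDI-style addition processing: for each of N detector columns,
# sum the M per-stage detection signals that saw the same object line,
# emitting N added detection signals per output line. Signal adds coherently
# (M * signal) while independent per-stage noise grows only as sqrt(M), so
# the per-line SNR improves by roughly sqrt(M).
def tdi_add(samples: np.ndarray) -> np.ndarray:
    """samples: (M, N) per-stage signals for one object line across N columns.
    Returns the N detection signals after addition processing."""
    return samples.sum(axis=0)

rng = np.random.default_rng(1)
M, N = 8, 32
signal = 5.0                                      # true per-stage signal level
samples = signal + rng.normal(0.0, 1.0, (M, N))   # each stage adds unit noise
line = tdi_add(samples)
print(round(float(line.mean()), 2))   # close to M * signal = 40
```

This is the motivation the rejection ascribes to combining Luu with Gerlach's line-scanning system: combining M samples per column before output improves the acquired line data in a predictable way.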
Regarding claim 6, Gerlach [as modified by Wang] teaches the radiographic image acquiring device according to claim 3, wherein the image processing module is configured to Gerlach [as modified by Wang] teaches a trained-denoiser module, however fails to disclose expressly where Guo teaches derive an evaluation value obtained by evaluating spread of a noise value from the pixel value of each pixel of the radiographic image on the basis of relationship data indicating a relationship between the pixel value and the evaluation value and generate a noise map that is data in which the derived evaluation value is associated with each pixel of the radiographic image, and ( Guo, in [Sec. 3.1, Eq. (1)], teaches a relationship between image irradiance, pixel value L and noise variance σ²(L), and in [Sec. 3.2 and Fig. 2] teaches generating an estimated noise level map σ ^ (y) from the noisy input image y, with the output map being the same size as the input image; further, [Sec. 3.3, Eq. (4)-(5)], teaches pixelwise estimated noise levels and smoothness of the noise level map. ) input the radiographic image and the noise map to the trained model and execute noise removal processing of removing noise from the radiographic image. ( Guo, in [Sec. 3.2 and Fig. 2], teaches that the non-blind denoising subnetwork CNND takes both the noisy image y and the estimated noise level map σ̂(y) as input to obtain the final denoising result x̂. It would have been obvious to input Gerlach’s radiographic image and Guo’s noise map to the trained model and execute noise removal processing thereon. 
) It would have been obvious to further modify Gerlach [as modified by Wang], which already teaches acquiring a radiographic image and inputting that image to a trained denoising model for noise removal, with Guo’s teaching of deriving pixelwise noise information, generating a corresponding noise map, and inputting both the image and the noise map to the denoising model, because doing so would have predictably improved denoising by providing the trained model with explicit per-pixel noise information. Regarding claim 17, the rationale provided for claim 6 is incorporated herein. In addition, the method of claim 17 corresponds to the device of claim 6 and performs the steps disclosed therein. Therefore, claim 17 is rejected on the same grounds.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEN KUDO, whose telephone number is (571) 272-4498. The examiner can normally be reached M-F, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph, can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. KEN KUDO Examiner Art Unit 2671 /KEN KUDO/Examiner, Art Unit 2671 /VINCENT RUDOLPH/Supervisory Patent Examiner, Art Unit 2671
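The noise-map pipeline the examiner attributes to Guo in the claim 6 rejection — estimate a per-pixel noise level from the pixel value, then feed both the image and the noise map to the trained denoiser — can be sketched as below. This is an illustration only, not Guo's actual network: the linear signal-dependent variance model and its coefficients `a` and `b` are hypothetical placeholders, and `denoiser` stands in for a trained model such as Guo's CNN_D subnetwork.

```python
import numpy as np

def estimate_noise_map(image: np.ndarray, a: float = 0.01, b: float = 1e-4) -> np.ndarray:
    """Per-pixel noise level sigma(L) from pixel value L, using a
    hypothetical signal-dependent variance model var(L) = a*L + b
    (in the spirit of a heteroscedastic Gaussian noise model).
    The output map has the same size as the input image."""
    return np.sqrt(a * image + b)

def denoise_with_noise_map(image: np.ndarray, noise_map: np.ndarray, denoiser) -> np.ndarray:
    """Stack the noisy image with its noise map and pass both to the
    trained model, mirroring a non-blind denoiser that conditions on
    an estimated noise level map."""
    x = np.stack([image, noise_map], axis=0)  # (2, H, W) model input
    return denoiser(x)

# Usage with a stand-in "trained model" that just returns the image channel.
img = np.full((4, 4), 0.25)
sigma = estimate_noise_map(img)                     # same shape as img
out = denoise_with_noise_map(img, sigma, lambda x: x[0])
```

The design point the rejection relies on is visible here: the denoiser receives explicit per-pixel noise information alongside the image, rather than having to infer the noise level blindly.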

Prosecution Timeline

Jan 03, 2024
Application Filed
Apr 01, 2026
Non-Final Rejection — §103, §112, §DP (current)


Prosecution Projections

1-2
Expected OA Rounds
Grant Probability
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
