Prosecution Insights
Last updated: April 18, 2026
Application No. 17/935,153

BONE FRACTURE RISK PREDICTION USING LOW-RESOLUTION CLINICAL COMPUTED TOMOGRAPHY (CT) SCANS

Status: Final Rejection (§103)
Filed: Sep 26, 2022
Examiner: Zak, Jacqueline Rose
Art Unit: 2666
Tech Center: 2600 (Communications)
Assignee: Southwest Research Institute
OA Round: 4 (Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 5-6
Median Time to Grant: 2y 10m
Grant Probability With Interview: 55%

Examiner Intelligence

Career Allow Rate: 67% (8 granted / 12 resolved; +4.7% vs TC average), above average
Interview Lift: -11.4% across resolved cases with interview (minimal, slightly negative)
Avg Prosecution: 2y 10m (typical timeline)
Currently Pending: 46 applications
Total Applications: 58 across all art units (career history)

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates; based on career data from 12 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1 and 4-14 are pending for examination in the response filed 12/01/2025. Claims 1, 8, and 10 have been amended, claims 2-3 were cancelled, and claims 13-14 are new.

Response to Arguments and Amendments

Applicant's arguments, filed 12/01/2025, with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on the combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument, as facilitated by the newly added amendments.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 4-5, and 8-14 are rejected under 35 U.S.C. 103 as being unpatentable over Guha (Guha, I., Nadeem, S. A., You, C., Zhang, X., Levy, S. M., Wang, G., Torner, J. C., & Saha, P. K. (2020). Deep Learning Based High-Resolution Reconstruction of Trabecular Bone Microstructures from Low-Resolution CT Scans using GAN-CIRCLE. Proceedings of SPIE--the International Society for Optical Engineering, 11317, 113170U) in view of Awad (US20090227503A1), Arnaud (US20140093149A1), and Muller (R. Müller, P. Rüegsegger, Three-dimensional finite element modelling of non-invasively assessed trabecular bone structures, Medical Engineering & Physics, Volume 17, Issue 2, 1995, Pages 126-133).
Regarding claim 1, Guha teaches a system comprising: a controller configured to: instantiate a neural network in a memory (Section 2.1 Deep Learning Network Architecture teaches the deep learning network architecture developed and Figure 1 shows the architecture GAN-CIRCLE), the neural network having an input layer to receive computed tomography (CT) image data of a bone (Figure 2 shows the input layer within the network and Section 2.2 Dataset Description describes the ankle CT scans input into the network), at least one convolution layer coupled to the input layer (Figure 2 shows the convolution layers) to predict at least one microarchitectural characteristic based on the received CT image data (Figure 4 shows the microstructural features from trabecular bone CT scans predicted by GAN-CIRCLE, described on pg. 7 paragraph 5), and an output layer coupled to the at least one convolution layer (Figure 2 shows the output layer coupled to convolution layers) to output the at least one predicted microarchitectural characteristic and/or output predicted CT image data based on the at least one predicted microarchitectural characteristic (Figure 4 shows the output of microstructural features from trabecular bone CT scans predicted by GAN-CIRCLE); receive reference clinical CT image data of a reference bone having a first bone type (Section 2.2 Dataset Description describes the reference clinical CT image data of the distal tibia of the left legs scanned using a low-resolution Siemens FLASH scanner), the reference clinical CT image data having a first resolution (pg. 
6 FLASH scanner describes images were reconstructed at 200 μm slice-spacing using a normal cone beam method with a special U70u kernel); receive reference high-resolution CT image data for the reference bone (Section 2.2 Dataset Description describes the reference clinical CT image data of the distal tibia of the left legs scanned using a high-resolution Siemens FORCE scanner), the reference high-resolution CT image data having a second resolution, wherein the second resolution is greater than the first resolution (pg. 6 FORCE scanner describes images were reconstructed at 200 μm slice-spacing and 150 μm pixel-size using Siemens’s special kernel Ur77u with Edge Technology); store the reference clinical CT image data and the reference high-resolution CT image data in the memory as a registered/aligned set, wherein each slice of the bone within the reference high-resolution CT image data is associated with one or more corresponding slices of the clinical CT image data ([Section 2.3 Data Processing, Training, and Validation] HR images were registered to corresponding LR images. HR images were registered to the LR image in two-steps: first, cortical bone and Tb network of the HR image was manually registered to the LR image using ITK-SNAP registration toolkit. In the second step, a rigid transformation, initialized by the transformation matrix from the manual registration step, was applied on the registered HR image for fine tuning. To improve the registration accuracy, region of interest (ROI) for registration cost function was defined as the distal tibia with a soft boundary. For training and testing purposes, pairs of low- and high-resolution matching patches of size 64×64 were randomly harvested from 30% peeled ROIs of registered LR and HR BMD images, and scaled to the unit interval of [0 1].
[Section 3 Experiments and Results] Specifically, nine thousand matching pairs of low- and high-resolution patches from registered BMD images of ten volunteers were randomly harvested for training and validation of the HR Tb microstructure reconstructor. The set of 9,000 training samples was randomly split into 4:1 ratio for training and validation purposes. A different set of 5,000 pairs of matching LR and HR patches from the registered BMD images of nine other volunteers were used for testing and evaluation); resample the reference clinical CT image based on the reference high-resolution CT image data such that the resampled reference clinical CT image data has a resolution equal to the second resolution (Section 2.3 Data Processing, Training, and Validation describes the low-resolution CT image data being interpolated at 150 μm isotropic voxel size to match the resolution of the high-resolution CT image data); determine at least one microarchitectural characteristic based on the reference high-resolution CT image data (Figure 4 shows the microarchitectural features from high-resolution prediction of low-resolution trabecular bone CT scans based on high-resolution trabecular bone CT scans); wherein the first resolution is in a range of 250 to 100 microns (pg. 6 para. 2 FLASH scanner describes images were reconstructed at 200 μm slice-spacing using a normal cone beam method with a special U70u kernel) and the second resolution (pg. 6 para. 2 FORCE scanner describes images were reconstructed at 200 μm slice-spacing and 150 μm pixel-size using Siemens’s special kernel Ur77u with Edge Technology). Guha does not teach the second resolution is 60 microns or less. Awad, in the same field of endeavor of CT bone imaging, teaches the second resolution is 60 microns or less ([0076] Specimens were scanned at 10.5 micron isotropic resolution using a Scanco VivaCT 40 (Scanco Medical AG, Bassersdorf, Switzerland)). 
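The resampling step cited above (Guha's LR data interpolated to match the HR voxel size) reduces to a voxel-spacing conversion. A minimal sketch follows; it uses nearest-neighbor sampling with NumPy purely for illustration (Guha describes true interpolation to 150 μm isotropic voxels), and the function name is ours, not from any reference:

```python
import numpy as np

def resample_to_spacing(volume, src_um, dst_um):
    """Resample a CT volume from src_um voxel spacing to dst_um spacing.

    Nearest-neighbor stand-in for the cubic interpolation a real
    pipeline would use (e.g. 200 um LR data resampled toward the
    150 um HR grid, as described in Guha Section 2.3).
    """
    factor = src_um / dst_um  # >1 when moving to a finer grid
    new_shape = tuple(int(round(s * factor)) for s in volume.shape)
    index = [np.minimum((np.arange(n) / factor).astype(int), s - 1)
             for n, s in zip(new_shape, volume.shape)]
    return volume[np.ix_(*index)]

lr = np.arange(27.0).reshape(3, 3, 3)        # toy 3x3x3 LR volume at 200 um
hr_like = resample_to_spacing(lr, 200.0, 150.0)
# 3 voxels at 200 um span 600 um, so the output grid holds 4 voxels at 150 um
```

The same spacing arithmetic applies per axis, so anisotropic source spacings only change `factor` to a per-axis tuple.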
Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Awad to use a resolution of 60 microns or less because “Using micro computed tomography (micro-CT), we observed that devitalized allograft remodeling and incorporation into the host remained severely impaired compared to live autografts mainly due to the extent of callus formation around the graft and the rate and extent of the graft resorption” [Awad 0031]. Guha does not teach and perform a fracture risk assessment based on an analysis of the at least one predicted microarchitectural characteristic and/or predicted CT image data. Arnaud, in the same field of endeavor of predicting bone fracture risk, teaches and perform a fracture risk assessment based on an analysis of the at least one predicted microarchitectural characteristic and/or predicted CT image data ([0027] For example, known fracture load can be determined for a variety of subjects and some or all of this database can be used to predict fracture risk by correlating one or more micro-structural parameters or macro-structural parameters (Tables 1 and 2) with data from a reference database of fracture load for age, sex, race, height and weight matched individuals. [0139] The .mu.CT images were analyzed to obtained 3D micro-structural measurements). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Arnaud to perform a fracture risk assessment based on predicted microarchitectural characteristics because “when multiple structural parameters are combined, the prediction of load at which fracture will occur is more accurate. Thus, the analyses of images as described herein can be used to accurately predict musculoskeletal disease such as fracture risk” [Arnaud 0089]. 
Guha does not teach and wherein: macroarchitectural characteristics for the bone include nodal coordinates of one or more surface vertices of a template mesh when morphed to a target bone; the at least one predicted microarchitectural characteristic is calculated based on spatial measurements including at least one of a thickness of a trabeculae, a spacing between each trabecula within a target cube, or one or more fabric tensor-based variables, wherein the one or more fabric tensor-based variables using a Mean Intercept Length method, a Star Volume Distribution method, and/or a Star Length Distribution method; and the at least one microarchitectural characteristic for the target cube is determined via a grid with a predetermined grid spacing. Muller, in the same field of endeavor of bone microstructure analysis, teaches and wherein: macroarchitectural characteristics for the bone include nodal coordinates of one or more surface vertices of a template mesh when morphed to a target bone (See Fig. 2: Three-dimensional visualization of the surface of the bone structure representing 21 x 20 x 20 voxels (3.4 x 3.4 x 3.4 mm^3). The triangle representation was achieved with the help of an interpolating three-dimensional surface reconstruction algorithm. The interpolation polygons are displayed using Gouraud-shading. [pg. 128] The surface consisted of 7774 interpolated triangles defined by 3872 nodes); the at least one predicted microarchitectural characteristic is calculated based on spatial measurements including at least one of a thickness of a trabeculae, a spacing between each trabecula within a target cube, or one or more fabric tensor-based variables, wherein the one or more fabric tensor-based variables using a Mean Intercept Length method, a Star Volume Distribution method, and/or a Star Length Distribution method ([pg. 127] As a result, a binary volume including isolated trabecular plates and rods was obtained.
Because the spatial resolution of the measuring system and the mean trabecular width (100-150 µm) are of the same order, it was not possible to display the true shape of individual trabeculae. It was, however, possible to segment the trabeculae due to their large separation, which was found to be between 500-1000 µm. Figure 8 shows the outputted microarchitectural characteristics); and the at least one microarchitectural characteristic for the target cube is determined via a grid with a predetermined grid spacing ([pg. 130-131] The segmentation and filtration of the VOI resulted in a reduced binary volume of 21 x 20 x 20 voxels (3.6 x 3.4 x 3.4 mm^3) representing trabecular rods and plates located within cancellous bone (Figure 2)…We show the results for two sagittal sections, The locations of section A’-A and section B’-B are illustrated in Figure 7a and 8a respectively. The resulting normal strains in the z-direction and the computed von Mises stresses are plotted in Figure 7b, and 7c for section A’-A and in Figure 8b and 8c for section B’-B respectively). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Muller to output predicted microarchitectural characteristics based on trabeculae thickness and spacing because “The tissue properties on the microstructural level are crucial for the determination of the apparent mechanical properties of the non-invasive bone biopsy on the macrostructural level. Although a lot of research has been performed, to assess for example Young’s modulus of individual trabeculae, a broad range of measured and estimated values are found in literature” [Muller pg. 129]. Regarding claim 4, Guha, Awad, Arnaud, and Muller teach the system of claim 1. Guha does not teach the second resolution is an isotropic resolution of 60 microns or less. 
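The Mean Intercept Length measure referenced in the claim 1 mapping above (one input to fabric-tensor analysis of trabecular orientation) has a simple axis-aligned form: total test-line length divided by the number of bone/marrow interfaces crossed. The sketch below probes a single grid direction only; a real fabric-tensor computation repeats this over many directions, and the function name and voxel size are illustrative:

```python
import numpy as np

def mean_intercept_length(binary_vol, axis=0, voxel_um=150.0):
    """Mean Intercept Length along one grid axis of a binary bone mask:
    total length of all test lines divided by the number of bone/marrow
    interfaces they cross."""
    lines = np.moveaxis(binary_vol.astype(np.int8), axis, 0)
    crossings = int(np.abs(np.diff(lines, axis=0)).sum())
    total_length_um = binary_vol.size * voxel_um  # every line, every voxel
    return float('inf') if crossings == 0 else total_length_um / crossings

# toy volume: a 300 um bone slab above a 300 um marrow gap (150 um voxels)
vol = np.zeros((4, 2, 2), dtype=bool)
vol[:2] = True
mil = mean_intercept_length(vol, axis=0)  # one interface per test line
```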
Awad teaches the second resolution is an isotropic resolution of 60 microns or less ([0076] Specimens were scanned at 10.5 micron isotropic resolution using a Scanco VivaCT 40 (Scanco Medical AG, Bassersdorf, Switzerland)). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Awad to use an isotropic resolution of 60 microns or less because “Using micro computed tomography (micro-CT), we observed that devitalized allograft remodeling and incorporation into the host remained severely impaired compared to live autografts mainly due to the extent of callus formation around the graft and the rate and extent of the graft resorption” [Awad 0031]. Regarding claim 5, Guha, Awad, Arnaud, and Muller teach the system of claim 1. Guha further teaches wherein the controller is further configured to train the neural network based on inputting the resampled reference clinical CT data into the neural network (Section 2.1 Deep Learning Network Architecture describes training the neural network designed and developed for high-resolution reconstruction of trabecular images capturing trabecular microstructures from their low-resolution CT images) and comparing an output of the neural network to the reference high-resolution CT image data (Section 2.1 Deep Learning Network Architecture describes minimizing the supervision loss (LSUP) and the adversarial loss (LWGAN) which compare resampled clinical CT data to the high-resolution CT data). Regarding claim 8, Guha, Awad, Arnaud, and Muller teach the system of claim 5. Guha teaches the resampled reference clinical CT data (Section 2.3 Data Processing, Training, and Validation describes the low-resolution CT image data being interpolated at 150 μm isotropic voxel size to match the resolution of the high-resolution CT image data). 
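The claim 5 training loop described above (input the resampled clinical CT data, compare the network output to the high-resolution reference, update the network) is at bottom a supervised regression step. The sketch below substitutes a plain linear model for Guha's GAN generator, so every name, shape, and learning rate here is illustrative rather than the actual architecture:

```python
import numpy as np

def supervised_step(weights, inputs, target, lr=0.1):
    """One training step: predict, compare against the high-resolution
    reference (a supervision-style loss), and adjust weights by the error."""
    predicted = inputs @ weights
    error = predicted - target
    grad = inputs.T @ error / len(inputs)  # gradient of (mean squared error) / 2
    return weights - lr * grad, float(np.mean(error ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))               # stand-in for resampled LR patches
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true                             # stand-in for HR reference values
w = np.zeros(4)
for _ in range(500):
    w, mse = supervised_step(w, X, y)
```

After enough steps the comparison loss approaches zero, which is the sense in which the claimed "comparing an output ... to the reference high-resolution CT image data" drives training.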
Guha does not teach wherein the controller is further configured to apply the template mesh to the CT data to determine a first plurality of cubes, each cube of the first plurality of cubes being a MxNxY cube of voxels that collectively define physical structure of the reference bone in three-dimensional space. Muller, in the same field of endeavor of bone microstructure analysis, teaches apply the template mesh to the CT data to determine a plurality of cubes, each cube of the plurality of cubes being a MxNxY cube of voxels (Figure 3 shows voxels within a cube during the meshing process) that collectively define physical structure of the reference bone in three-dimensional space (Figure 6 shows three-dimensional meshed bone). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Muller to mesh the CT data to determine cubes comprising voxels because “Because of the complex shape of trabecular bone structures…We designed and developed an automatic mesh generator with the aim to create finite element meshes of complex volumetric data very fast and without user interaction. The algorithm is based on the principal idea that the segmented volume describes a discrete data set on a digital raster, allowing to subdivide the volume into small mesh areas defined by the voxel raster itself” [Muller pg. 128 para. 4]. Regarding claim 9, Guha, Awad, Arnaud, and Muller teach the system of claim 8.
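The MxNxY voxel cubes recited in claim 8 can be pictured as a regular partition of the registered volume. A minimal sketch, with an illustrative helper name; trailing voxels that do not fill a whole cube are simply dropped here:

```python
import numpy as np

def partition_into_cubes(volume, m, n, y):
    """Split a 3D voxel volume into non-overlapping m x n x y cubes that
    collectively tile the bone volume (cf. the claimed plurality of cubes)."""
    cubes = []
    for i in range(volume.shape[0] // m):
        for j in range(volume.shape[1] // n):
            for k in range(volume.shape[2] // y):
                cubes.append(volume[i * m:(i + 1) * m,
                                    j * n:(j + 1) * n,
                                    k * y:(k + 1) * y])
    return cubes

vol = np.arange(8 * 8 * 8).reshape(8, 8, 8)
cubes = partition_into_cubes(vol, 4, 4, 4)   # eight 4x4x4 cubes
```

Because the grid is deterministic, a cube at index (i, j, k) in the clinical-resolution volume has a natural anatomical correspondent at the same index in the registered high-resolution volume, which is the pairing claims 10-11 rely on.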
Guha further teaches wherein the controller is further configured to train the neural network by inputting data into the neural network to cause the neural network to output at least one predicted microarchitectural characteristic for the data (Section 2.1 Deep Learning Network Architecture of Guha describes training the neural network designed and developed for high-resolution reconstruction of trabecular images capturing trabecular microstructures from their low-resolution CT images). Guha does not teach a first plurality of cubes and output at least one predicted microarchitectural characteristic for the first cube. Muller teaches a first plurality of cubes and output at least one predicted microarchitectural characteristic for the first cube (Figures 3 and 6 show voxels within a plurality of cubes during the meshing process and Figure 8 shows the outputted microarchitectural characteristics). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Muller to use a plurality of cubes to output predicted microarchitectural characteristics because “inverted and non-inverted cube combinations cannot be treated equally (Figure 4)…VOMAC does not only generate elements on the surface but also in the interior of the object” [Muller pg. 128 para. 5] and “The aim of the present work was to introduce a new method to assess and analyse the three-dimensional microstructure of trabecular bone based on non-invasive measurements of excised radii. The method demonstrated a great in vivo potential to predict the apparent mechanical bone properties of anisotropic bone structures non-destructively” [Muller pg. 127 para. 4]. Regarding claim 10, Guha, Awad, Arnaud, and Muller teach the system of claim 8.
Guha teaches wherein the controller is further configured to use reference high-resolution CT data to determine second data (Section 2.2 Dataset Description describes the reference clinical CT image data of the distal tibia of the left legs scanned using a high-resolution Siemens FORCE scanner). Guha does not teach to apply the template mesh to determine a second plurality of cubes, each cube of the second plurality of cubes being a MxNxY cube of voxels that collectively define physical structure of the reference bone in three-dimensional space. Muller teaches to apply the template mesh to determine a second plurality of cubes, each cube of the second plurality of cubes being a MxNxY cube of voxels (Figure 3 shows voxels within a cube during the meshing process and Figure 5 shows two cube orientations) that collectively define physical structure of the reference bone in three-dimensional space (Figure 6 shows three-dimensional meshed bone). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Muller to use a second plurality of cubes because "It is only by introducing alternating A- and B-oriented cube configurations that matching tetrahedral interfaces can be ensured" [Muller pg. 129 Fig. 5]. Regarding claim 11, Guha, Awad, Arnaud, and Muller teach the system of claim 10. Guha teaches wherein the controller is further configured to train the neural network based on determining a ground-truth microarchitectural characteristic (Section 2.1 Deep Learning Network Architecture describes using the high-resolution image to determine ground-truth microarchitectural characteristics, shown in Figure 4). 
Guha does not teach determine a microarchitectural characteristic for a first cube of the second plurality of cubes, the first cube of the second plurality of cubes having anatomical correspondence with the first cube of the first plurality of cubes. Muller teaches determine a microarchitectural characteristic for a first cube of the second plurality of cubes, the first cube of the second plurality of cubes having anatomical correspondence with the first cube of the first plurality of cubes (Figure 3 shows voxels within a cube during the meshing process and Figure 5 shows two cube orientations for the same meshed bone structure shown in Figure 6. Figure 8 shows the outputted microarchitectural characteristics for the bone structure). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Muller to use a second plurality of cubes with anatomical correspondence to the first plurality of cubes because "It is only by introducing alternating A- and B-oriented cube configurations that matching tetrahedral interfaces can be ensured" [Muller pg. 129 Fig. 5]. Regarding claim 12, Guha, Awad, Arnaud, and Muller teach the system of claim 11.
Guha teaches wherein the controller is further configured to train the neural network by comparing the at least one predicted microarchitectural characteristic to the ground-truth microarchitectural characteristic (Section 2.1 Deep Learning Network Architecture describes minimizing the supervision loss (LSUP) and the adversarial loss (LWGAN) which compare the microarchitectural characteristics of the resampled clinical CT data to the microarchitectural characteristics of the high-resolution CT data) and adjusting one or more connections within the at least one convolution layer based on a difference between the at least one predicted microarchitectural characteristic and the ground-truth microarchitectural characteristic (Section 2.1 Deep Learning Network Architecture describes improving the generator (G) and discriminator (Dy) through training, where convolutional layers in the generator are updated based on the supervision loss (LSUP) and the adversarial loss (LWGAN)). Guha does not teach the first cube of the first plurality of cubes. Muller teaches a first plurality of cubes (Figures 3 and 6 show voxels within a plurality of cubes). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Muller to use a plurality of cubes because “inverted and non-inverted cube combinations cannot be treated equally (Figure 4)…VOMAC does not only generate elements on the surface but also in the interior of the object” [Muller pg. 128 para. 5]. Regarding claim 13, Guha, Awad, Arnaud, and Muller teach the system of claim 11. Muller teaches wherein each of the second plurality of cubes is associated with at least one microarchitectural characteristic (Figure 3 shows voxels within a cube during the meshing process and Figure 5 shows two cube orientations for the same meshed bone structure shown in Figure 6.
Figure 8 shows the outputted microarchitectural characteristics for the bone structure). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Muller to use a second plurality of cubes associated with at least one microarchitectural characteristic because "It is only by introducing alternating A- and B-oriented cube configurations that matching tetrahedral interfaces can be ensured" [Muller pg. 129 Fig. 5] and “The aim of the present work was to introduce a new method to assess and analyse the three-dimensional microstructure of trabecular bone based on non-invasive measurements of excised radii. The method demonstrated a great in vivo potential to predict the apparent mechanical bone properties of anisotropic bone structures non-destructively” [Muller pg. 127 para. 4]. Regarding claim 14, Guha, Awad, Arnaud, and Muller teach the system of claim 1. Guha further teaches wherein: a plurality of registered/aligned sets stored in the memory is associated with a first target bone type; and each of the registered/aligned sets includes the reference clinical CT image of the bone of a first individual that is of the first target bone type and the reference high-resolution CT image data of the bone of the first individual ([Section 2.3 Data Processing, Training, and Validation] HR images were registered to corresponding LR images. HR images were registered to the LR image in two-steps: first, cortical bone and Tb network of the HR image was manually registered to the LR image using ITK-SNAP registration toolkit. In the second step, a rigid transformation, initialized by the transformation matrix from the manual registration step, was applied on the registered HR image for fine tuning. To improve the registration accuracy, region of interest (ROI) for registration cost function was defined as the distal tibia with a soft boundary.
For training and testing purposes, pairs of low- and high-resolution matching patches of size 64×64 were randomly harvested from 30% peeled ROIs of registered LR and HR BMD images, and scaled to the unit interval of [0 1]. [Section 3 Experiments and Results] Specifically, nine thousand matching pairs of low- and high-resolution patches from registered BMD images of ten volunteers were randomly harvested for training and validation of the HR Tb microstructure reconstructor. The set of 9,000 training samples was randomly split into 4:1 ratio for training and validation purposes. A different set of 5,000 pairs of matching LR and HR patches from the registered BMD images of nine other volunteers were used for testing and evaluation). Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Guha in view of Awad, Arnaud, Muller, and Yang (US20240202885A1). Regarding claim 6, Guha, Awad, Arnaud, and Muller teach the system of claim 5. Guha further teaches wherein the controller is configured to train the neural network based on inputting the resampled reference clinical CT data into the neural network and comparing an output of the neural network to the reference high-resolution CT image data until the neural network has an error rate (Section 2.1 Deep Learning Network Architecture describes minimizing the supervision loss (LSUP) and the adversarial loss (LWGAN)). Guha does not teach an error rate at or below a predetermined error rate. Yang, in the same field of endeavor of CT image analysis, teaches an error rate at or below a predetermined error rate ([0087] In some embodiments, the first preset loss function threshold may be any reasonable threshold for the difference (e.g., distance, or similarity, etc.) between two groups of material decomposition images or updated material density images determined by two adjacent iterations.
In some embodiments, the first preset loss function threshold may be expressed as any reasonable numerical value type, such as percentage (such as 1%, 5%, etc.)). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Yang to set a threshold error rate so that “the one or more target material density images may include material density images output by the trained image processing model when a first iteration termination condition is satisfied” [Yang 0056]. Regarding claim 7, Guha, Awad, Arnaud, Muller, and Yang teach the system of claim 6. Guha does not teach wherein the predetermined error rate is five percent or less. Yang teaches the predetermined error rate is five percent or less ([0087] In some embodiments, the first preset loss function threshold may be any reasonable threshold for the difference (e.g., distance, or similarity, etc.) between two groups of material decomposition images or updated material density images determined by two adjacent iterations. In some embodiments, the first preset loss function threshold may be expressed as any reasonable numerical value type, such as percentage (such as 1%, 5%, etc.)). Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Guha with the teachings of Yang to set a threshold error rate of 5% or less so that “the one or more target material density images may include material density images output by the trained image processing model when a first iteration termination condition is satisfied” [Yang 0056].

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571)272-4077. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACQUELINE R ZAK/
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666
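The predetermined error-rate criterion disputed in the claims 6-7 rejection (train until the error is at or below a threshold such as 5%) amounts to a simple stopping rule. A hedged sketch follows; the step function is an illustrative toy whose error halves each round, not anything from Guha or Yang:

```python
def train_until_threshold(step_fn, threshold=0.05, max_rounds=1000):
    """Run training rounds until the reported error rate is at or below
    the predetermined threshold (5% here), or the round budget runs out."""
    error, rounds = float('inf'), 0
    while error > threshold and rounds < max_rounds:
        error = step_fn()
        rounds += 1
    return error, rounds

state = {'err': 1.0}
def halving_step():
    state['err'] *= 0.5   # toy training step: error halves every round
    return state['err']

final_err, used = train_until_threshold(halving_step)
```

With the halving toy step, the loop stops once 0.5**n drops to 0.05 or below, i.e. after five rounds at 0.03125.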

Prosecution Timeline

Sep 26, 2022: Application Filed
Dec 13, 2024: Non-Final Rejection (§103)
Apr 21, 2025: Response Filed
May 20, 2025: Final Rejection (§103)
Aug 19, 2025: Request for Continued Examination
Aug 20, 2025: Response after Non-Final Action
Aug 25, 2025: Non-Final Rejection (§103)
Dec 01, 2025: Response Filed
Jan 26, 2026: Final Rejection (§103)
Apr 08, 2026: Request for Continued Examination
Apr 12, 2026: Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340: PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12462343: MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES
Granted Nov 04, 2025 (2y 5m to grant)

Patent 12373946: ASSAY READING METHOD
Granted Jul 29, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 67%
With Interview: 55% (-11.4%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
