Prosecution Insights
Last updated: April 19, 2026
Application No. 18/061,928

Dynamic Four-Dimensional Contrast Enhanced Tomosynthesis

Status: Final Rejection (§103)
Filed: Dec 05, 2022
Examiner: SEBASTIAN, KAITLYN E
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Real Time Tomography LLC
OA Round: 4 (Final)
Grant Probability: 73% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 3y 1m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 73%, above average (229 granted / 315 resolved; +2.7% vs TC avg)
Interview Lift: +20.7%, a strong lift (resolved cases with interview vs. without)
Typical Timeline: 3y 1m avg prosecution; 38 applications currently pending
Career History: 353 total applications across all art units

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 52.3% (+12.3% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)

Tech Center average is an estimate • Based on career data from 315 resolved cases
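The headline figures above can be recomputed from the reported counts. The short sketch below is illustrative only; the 40% Tech Center average is an assumption inferred from the displayed deltas (e.g., 52.3% - 12.3% = 40%), not source data.

```python
# Recompute the dashboard metrics from the reported counts.
granted, resolved = 229, 315
career_allow_rate = round(100 * granted / resolved, 1)  # 72.7, displayed as 73%

# The displayed +20.7% lift implies a with-interview allowance rate of about
# 72.7% + 20.7% = 93.4%, consistent with the rounded 93% shown above.
interview_lift = 20.7

# Statute-specific rates vs. the Tech Center average estimate.
# tc_avg = 40.0 is inferred from the deltas shown, e.g. 52.3 - 12.3 = 40.
tc_avg = 40.0
statute_rates = {"§101": 5.6, "§103": 52.3, "§102": 16.3, "§112": 20.8}
vs_tc_avg = {s: round(r - tc_avg, 1) for s, r in statute_rates.items()}
```

Each delta in `vs_tc_avg` reproduces the signed percentages listed in the Statute-Specific Performance section.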

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Acknowledgement of Amendment

The following Office action is in response to the applicant's amendment filed on 10/27/2025. Claims 1-24 are pending. Claims 1-24 are rejected under 35 U.S.C. 103 for the reasons stated in the Response to Arguments and 35 U.S.C. 103 sections below.

Response to Arguments

Applicant's arguments, see Remarks, pages 8-15, filed 10/27/2025, with respect to the rejection of claims 1-24 under 35 U.S.C. 103 have been fully considered and are not persuasive.

Claims 1-2, 4, 7-8, 13-14, 16, 19-20 and 24

1. The Office Action Does Not Respond to Applicant's Arguments

Applicant respectfully submits that the examiner's remarks in the present Office Action do not substantively address the arguments set forth in Applicant's previous response. Rather than addressing the specific technical and legal points raised, particularly regarding the incompatibility of Bernard and Ullberg and the absence of key claim limitations in the cited art, the Office Action merely repeats prior conclusions. For example, in Applicant's last response, Applicant specifically argued that Ullberg's system is fundamentally incompatible with Bernard's because Ullberg only produces one- and two-dimensional images, whereas the claimed invention requires construction of four-dimensional image data (three spatial dimensions plus time) from subsets of projections at each time-point. Applicant explained that Ullberg's line detector approach cannot be simply integrated into Bernard's tomosynthesis system to achieve the claimed dynamic, time-resolved imaging. In particular, the asserted two-dimensional images of Ullberg cannot be used to generate four-dimensional images by Bernard's system.
The Response to Arguments section in the current Office Action, however, does not address this incompatibility or provide any technical reasoning as to how Ullberg's system could be modified to produce the required four-dimensional data. Instead, the Office Action provides a copy of Applicant's argument and then states that "the two-dimensional images obtained by the system of Ullberg can be utilized in the system of Bernard to generate three-dimensional images and 'construct four dimensional image data,'" without citing any specific teaching or suggestion in the references that would enable such a combination. It is well-settled that "the examiner must answer every material point raised by the applicant." MPEP § 707.07(f) (emphasis added). The pending Office Action, however, does not meet this requirement, and because the office's failure to respond to Applicant's arguments is contrary to the MPEP, the office should withdraw all rejections for this reason alone.

The examiner respectfully disagrees that the Office Action of 07/28/2025 does not respond to the Applicant's argument and respectfully refers the Applicant to pages 4-7 of the non-final rejection of 07/28/2025. However, to be clear, with respect to the Applicant's argument that Ullberg's system is fundamentally incompatible with Bernard's because Ullberg only produces one- and two-dimensional images, the examiner respectfully disagrees with this point.
As stated previously (page 6 of the non-final rejection of 07/28/2025), "Bernard also obtains two-dimensional images which are utilized to generate a three-dimensional reconstructed volume" (see Bernard: [Abstract]) and further four-dimensional image data (see Bernard: [0021]; [0030]: "The exemplary software is also operable to use two- or three-dimensional MRI, CT, and/or X-ray acquired image data generated by the imaging system 25 to build a digitized three- or four-dimensional anatomical roadmap or model of a patient's anatomy, and electromagnetic (EM) tracking technology that operates as a type of global-type positioning system to show a real-time spatial relation or location of the tool 105, as illustrated with a representation 120 (a cursor, triangle, square, cross-hairs, etc.), relative to the anatomical roadmap"). The examiner acknowledges that the system of Ullberg does produce one- and two-dimensional images; however, the examiner respectfully asserts that Bernard also produces and utilizes two-dimensional images. Specifically, Bernard utilizes two-dimensional images to generate a three-dimensional reconstructed volume (see Bernard: [Abstract]: "The controller includes a processor to perform program instructions representative of the steps of generating a three-dimensional reconstructed volume from the plurality of two-dimensional, radiography images"), and additionally, reconstructs a four-dimensional anatomical roadmap or model of a patient's anatomy (see Bernard: [0030]). Thus, Bernard's system must obtain two-dimensional radiography images in order to generate a three-dimensional reconstructed volume.
Therefore, since Bernard utilizes two-dimensional images to generate a three-dimensional reconstruction and consequently a four-dimensional anatomical map (see Bernard: [0030]), and the system of Ullberg obtains two-dimensional images, the examiner respectfully asserts that the system of Ullberg is not "fundamentally incompatible" with Bernard's because they both obtain and utilize two-dimensional images for use in tomosynthesis imaging (see Bernard: [0035]: "A term used to refer herein to this technique is tomosynthesis. All or a portion of the acquired image data or frames (e.g., two-dimensional radiological image frames) can be used during this tomosynthesis reconstruction to generate the digital, three-dimensional reconstructed volume 212 of the imaged anatomy (e.g., breast tissue)" and Ullberg: [0011]: "Thus, a plurality of two-dimensional images at different angles are produced in a single scan, which reduces the detection time by a factor corresponding to the number of two-dimensional images produced. The data from the apparatus is excellent to be used in tomosynthesis or laminographic imaging.").

Additionally, the examiner notes that the line detector approach of Ullberg can be integrated into Bernard's tomosynthesis system to achieve the claimed dynamic, time-resolved imaging. The examiner respectfully maintains that "Incorporating the line detectors 6a of Ullberg into the image receptor 40 of Bernard would enable the image receptor 40 to acquire a plurality of two-dimensional images at different angles in a single scan (see Ullberg: [0011])" (see non-final rejection of 07/28/2025 at page 5). Furthermore, the examiner respectfully notes that it would be obvious to utilize the line detectors 6a of Ullberg to perform scanning over time (i.e. time-resolved imaging).
Specifically, Ullberg discloses, "During scanning the device 7 moves the radiation source 1 and the radiation detector 6 relative the object 5 in a linear manner parallel with the front of the radiation detector as being indicated by arrow 8, while each of the line detectors 6a records a plurality of line images of radiation as transmitted through the object 5 in a respective line of the different angles" [0019]. Therefore, during the scan, the device 7 is linearly moved such that the line detectors 6a record a plurality of line images at each location of the device 7, each at a respective time during the scan.

Furthermore, the Applicant argues that the Office Action does not provide any technical reasoning as to how Ullberg's system could be modified to produce the required four-dimensional data. The examiner respectfully notes that it is not the system of Ullberg that is being modified in the combination of Bernard and Ullberg. Rather, the system of Bernard is being modified to incorporate the line detectors 6a of Ullberg, which are configured to acquire a plurality of projection images at each time-point (see Ullberg: [0011]). It is the system of Bernard which utilizes the two-dimensional images to generate three-dimensional images and subsequently a four-dimensional anatomical roadmap (see Bernard: [0030]). The examiner respectfully asserts that Bernard paragraph [0030] and [Abstract] ("The controller includes a processor to perform program instructions representative of the steps of generating a three-dimensional reconstructed volume from the plurality of two-dimensional, radiography images") represent a specific teaching which utilizes two-dimensional images (i.e. obtained by the line detectors 6a of Ullberg, for example) to generate three-dimensional images and "construct four dimensional image data".
Additionally, the examiner respectfully notes that the claimed invention requires construction of four-dimensional image data (three spatial dimensions plus time) from subsets of projections at each time-point. However, the claim as written does not state whether the "subsets of projections" need to be three-dimensional projections. Therefore, under broadest reasonable interpretation, subsets of two-dimensional images can be acquired for use in constructing four-dimensional image data (i.e. by first combining the two-dimensional image data to form three-dimensional image data).

2. Application of Bernard in View of Ullberg Constitutes Improper Hindsight Reconstruction

The Office Action's combination of Bernard in view of Ullberg is improper because the combination relies on an unsupported assumption that Ullberg's two-dimensional images can be used within Bernard's tomosynthesis system to construct four-dimensional data. For this further reason, the office should withdraw all rejections. More specifically, Bernard's system acquires full-field projection images at multiple angles and time-points, reconstructing three- or four-dimensional volumes for surgical guidance using algorithms tailored for time-resolved datasets. Bernard, at [0020]-[0021]. Ullberg, on the other hand, employs a stack of line detectors to simultaneously acquire two-dimensional images during a static scan, producing data that is fundamentally different from Bernard's in both format and intended use. Ullberg, at [0011]. Because of this fundamental difference between the references, integrating Ullberg's line detector approach into Bernard's workflow would require a complete redesign of the acquisition hardware, synchronization mechanisms, and reconstruction algorithms, as the two systems operate on incompatible principles. Bernard's system is built for dynamic, volumetric imaging, while Ullberg's system is limited to static, angle-specific slices.
Such a combination would not only be impractical but would also fundamentally alter the principle of operation of both systems: Bernard would lose its dynamic imaging capability, and Ullberg would need to be transformed into a system capable of time-resolved acquisition and volumetric reconstruction, capabilities not taught or suggested in either reference. The combination of references therefore relies on improper hindsight, as forbidden by MPEP § 2143.01, because the combination reconstructs the claimed invention using knowledge from Applicant's own disclosure rather than from the actual teachings of Bernard and Ullberg. The references themselves provide no motivation, teaching, or suggestion to combine their respective systems in a way that would yield the claimed dynamic, subset-based three- or four-dimensional tomosynthesis. The only way to arrive at the claimed invention is by relying on Applicant's specification to fill gaps in the prior art, and the MPEP expressly prohibits this approach. For these reasons, the office should withdraw the rejections.

The examiner respectfully disagrees that the combination of Bernard in view of Ullberg is improper because the combination relies on an unsupported assumption that Ullberg's two-dimensional images can be used within Bernard's tomosynthesis system to construct four-dimensional data. Bernard itself discloses, in the [Abstract], that "The controller includes a processor to perform program instructions representative of the steps of generating a three-dimensional reconstructed volume from the plurality of two-dimensional, radiography images" and, in paragraph [0009]: "The method includes the steps of acquiring a plurality of two-dimensional, radiography images of an imaged subject; generating a three-dimensional reconstructed volume from the plurality of two-dimensional, radiography images".
In order to generate the three-dimensional reconstructed volume, the plurality of two-dimensional images had to have been obtained by the imaging system. Once these three-dimensional images are generated, the imaging system 25 of Bernard utilizes the images to build a digitized four-dimensional anatomical roadmap (see [0030]). The examiner respectfully agrees that Bernard's system acquires full-field projection images at multiple angles and time-points, reconstructing three- or four-dimensional volumes for surgical guidance using algorithms tailored for time-resolved datasets (Bernard, at [0020]-[0021]).

However, while the examiner notes that Ullberg employs a stack of line detectors to simultaneously acquire two-dimensional images, the examiner disagrees with the Applicant's argument that Ullberg is limited only to static, angle-specific slices (see Ullberg: [0019]). Specifically, Ullberg discloses, "During scanning the device 7 moves the radiation source 1 and the radiation detector 6 relative the object 5 in a linear manner parallel with the front of the radiation detector as being indicated by arrow 8, while each of the line detectors 6a records a plurality of line images of radiation as transmitted through the object 5 in a respective line of the different angles" [0019]. Therefore, the scan performed by the system of Ullberg is not static, and each of the line detectors 6a obtains an image from a different angle. Even if the intended use of the data produced by the system of Ullberg is different from that of Bernard, that does not necessarily mean that the two-dimensional data from Ullberg cannot be utilized within the system of Bernard, as both systems acquire two-dimensional image data.
The examiner respectfully disagrees that integrating Ullberg's line detector approach into Bernard's workflow would require a complete redesign of the acquisition hardware, synchronization mechanisms, and reconstruction algorithms on the basis that the two systems operate on different principles. Specifically, the examiner notes that the imaging system 25 of Bernard includes a gantry 50 "constructed in mobile support of the energy source 35 and image receptor 40 in relation to the imaged subject 30 […] The image receptor 40 is coupled to the mobile arm 70 and positioned opposite the energy source 35 in the direction of emission so as to receive energy beam 48 […] the mobile arm 70 is operable to simultaneously move the energy source 35 and the image receptor 40 in relation to the imaged subject 22" [0023]. Therefore, the imaging system 25 of Bernard moves the image receptor 40 and energy source 35 along the patient in the same fashion as the scanning performed by the system of Ullberg (see Ullberg: [0019]). Furthermore, Bernard discloses that "Examples of the image receptor 40 include x-ray image intensifier tube, solid state detector, gaseous detector, or any type of detector which transforms incident x-ray photons either into a digital image or into another form which can be made into a digital image by further transformations" [0022]. The examiner respectfully asserts that the line detectors 6a of Ullberg represent detectors which transform incident x-ray photons (i.e. from X-ray source 1 in Ullberg or energy source 35 of Bernard, for example) into a digital image (see Ullberg: [0022]: "As can be seen in FIGS. 2a-c, each line detector/x-ray bundle pair produces a complete two-dimensional image at a distinct one of the different angles"). Thus, integrating Ullberg's line detector approach into Bernard's workflow would not require a complete redesign of the acquisition hardware (i.e.
simply substituting the line detectors 6a of Ullberg into the image receptor 40 of Bernard), synchronization mechanisms (i.e. the systems of Bernard and Ullberg move the detector along the patient in the same fashion, see Bernard: [0023] and Ullberg: [0019]), and reconstruction algorithms (i.e. use the processor to reconstruct two-dimensional images which are used to generate three-dimensional images for use in creating the four-dimensional anatomical roadmap, see Bernard: [0030]). Therefore, the examiner respectfully asserts that incorporating the line detectors 6a of Ullberg into the image receptor 40 of Bernard would not be impractical and would not fundamentally alter the principle of operation of both systems because Bernard would not lose its dynamic imaging capability (i.e. movement as described in Bernard: [0023]) and Ullberg would not need to be transformed into a system capable of time-resolved acquisition and volumetric reconstruction (i.e. because Ullberg is not relied upon for teaching time-resolved acquisition or volumetric reconstruction; rather, it is relied on to teach "wherein at each time-point, a plurality of projection images are acquired").

3. The Claims Contain Features Not Disclosed or Rendered Obvious by Bernard in View of Ullberg

Applicant further submits that the following features of independent claims 1, 13, and 24 are not taught or disclosed by Bernard in view of Ullberg:

- Acquiring, while a surgical instrument is inserted into an object and at a plurality of time-points, a series of projection images of the object.
- Each [projection image] associated with a corresponding time-point of a plurality of time-points.
- Wherein the plurality of projection images corresponds to a subset of angles less than the plurality of angles.
- Constructing of four-dimensional image data comprising a representation of the object from each of the plurality of angles for each of the plurality of time-points.
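For readers less familiar with the imaging at issue, the claimed scheme (a subset of projection angles acquired at each time-point, with per-time-point reconstructions stacked into three spatial dimensions plus time) can be sketched as a toy example. All shapes, names, and the stand-in "reconstruct" step below are hypothetical for illustration; this is not the application's actual algorithm.

```python
import numpy as np

# Toy sketch of subset-based four-dimensional acquisition/reconstruction.
n_angles, n_timepoints = 9, 4
subset_size = 3  # subset of angles acquired at each time-point (< n_angles)
volume_shape = (8, 8, 8)  # (x, y, z) — hypothetical voxel grid

rng = np.random.default_rng(0)

def reconstruct(projections):
    # Stand-in for a tomosynthesis reconstruction (e.g., back-projection):
    # here we simply average the 2-D projections into a constant volume.
    return np.full(volume_shape, projections.mean())

frames = []
for t in range(n_timepoints):
    # At each time-point, acquire projection images at a SUBSET of the angles.
    angles = rng.choice(n_angles, size=subset_size, replace=False)
    projections = rng.random((len(angles), 8, 8))  # one 2-D image per angle
    frames.append(reconstruct(projections))

# Stack the per-time-point volumes: three spatial dimensions plus time.
four_d = np.stack(frames, axis=-1)  # shape (x, y, z, t)
```

The resulting array has shape `(8, 8, 8, 4)`: a representation of the object for each time-point, built from only a subset of angles per acquisition.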
First, the Applicant argues that the office does not identify disclosure in either reference that teaches or suggests "acquiring, while a surgical instrument is inserted into an object and at a plurality of time-points, a series of projection images of the object". Bernard does not disclose acquiring projection images while a surgical instrument is inserted into an object at a plurality of time-points; instead, Bernard describes acquiring images for tomosynthesis generally, without the specific requirement that the surgical instrument is present and that acquisition occurs at multiple time-points. Ullberg is similarly silent on acquiring images in the presence of a surgical instrument or at multiple time-points, as Ullberg's disclosure is limited to static imaging.

The examiner respectfully disagrees that Bernard does not teach "acquiring, while a surgical instrument is inserted into an object and at a plurality of time-points, a series of projection images of the object". Specifically, Bernard discloses "The system 20 further includes a navigation system 100 operable to track movement and/or locate a tool 105 traveling through the imaged subject 22. An embodiment of the tool 105 includes surgical tool, navigational tool, a guidewire, a catheter, an endoscopic tool, a laparoscopic tool, ultrasound probe, pointer, aspirator, coil, or the like employed in a medical procedure" [0028]; "An embodiment of the navigation system 100 is generally operable to track or detect a position of the tool 105 relative to the at least one acquired projection image or three-dimensional reconstructed model generated by the imaging system 115" [0029].
"According to yet another example, having detected or tracked a location of the tool 105 through the imaged subject 22 via the tracking system 100, and registering the location of the tool 105 relative to the spatial relation of the three-dimensional, reconstructed volume 212 or the VOI 228 or the overview image 285 such that the controller 130 is operable to calculate the location of the one or more image elements that constitute the three-dimensional, reconstructed volume 212 or the VOI 228 or the overview image 285 correlating to the location of the tool 105, step 320 includes calculating and generating the two-dimensional display 322 of the portion of interest of the three-dimensional, reconstructed volume 212 or the VOI 228 or the overview image 285 that is correlated or dependent or centered at the location of the tool 105" [0061]. The examiner respectfully asserts that in order to track (i.e. over time) the tool 105 (i.e. surgical instrument) with the tracking system 100 and calculate/generate the two-dimensional display of the portion of interest of the three-dimensional, reconstructed volume 212 centered at the location of the tool 105, the method carried out by the system performs the step of "acquiring, while a surgical instrument is inserted into an object and at a plurality of time-points, a series of projection images of the object".

Second, the Applicant argues that the office does not identify disclosure in either reference that teaches or suggests "each [projection image] associated with a corresponding time-point of a plurality of time-points". Bernard does not teach associating each projection image with a specific time-point in a series of time-points, nor does Ullberg. Ullberg's system acquires multiple projection images at different angles, but these are not associated with different time-points. Rather, they are acquired in a single scan at a single time.
The examiner respectfully disagrees that Bernard does not teach "each [projection image] associated with a corresponding time-point of a plurality of time-points". Specifically, Bernard discloses "Step 210 includes generating a digital, three-dimensional, reconstructed model or volume 212 of the imaged anatomy. An embodiment of the three-dimensional, reconstructed volume 212 is created by applying a back-projection reconstruction algorithm to the acquired image data or any other 3D reconstruction algorithm, generating a series of slice planes 214, 216, 218 of image data in succession relative to one another. A term used to refer herein to this technique is tomosynthesis. All or a portion of the acquired image data or frames (e.g., two-dimensional radiological image frames) can be used during this tomosynthesis reconstruction to generate the digital, three-dimensional reconstructed volume 212 of the imaged anatomy (e.g., breast tissue)" [0035]. In this case, since the slice planes 214, 216, 218 (i.e. projection images) are obtained in succession (i.e. over time) relative to each other and are used to generate the three-dimensional image (see [0035]), these slice planes 214, 216, and 218 are each acquired at a corresponding time-point. Therefore, the examiner respectfully maintains that Bernard teaches that each projection image is associated with a corresponding time-point of a plurality of time-points.

Third, the Applicant argues that the office does not identify disclosure in either reference that teaches or suggests "wherein the plurality of projection images corresponds to a subset of angles less than the plurality of angles". Bernard does not disclose that, at each time-point, only a subset of angles is used to acquire projection images, nor does Ullberg.
Ullberg's system may acquire images at multiple angles simultaneously, but the reference does not teach or suggest that, at each of a plurality of time-points, only a subset of the total available angles is used.

The examiner respectfully disagrees that Bernard does not teach "wherein the plurality of projection images corresponds to a subset of angles less than the plurality of angles". Specifically, Bernard discloses "Step 205 includes acquiring a plurality of projected images (P1 through Pn) at a plurality of directions or angles (D1 through Dn) of the anatomical area of interest (e.g., the breast tissue of the imaged subject 22)" [0034]; "According to another embodiment, step 300 includes receiving an instruction (e.g., click of a mouse device 145 at the bounding surface or marker 226) that identifies the VOI 228 and in response calculating and communicating one of a slice 214, 216 or 218 from the sequential succession of slices 214, 216, 218 most centrally located in spatial relation to the VOI 228 constituting thereof, or a subset of the succession of slices 214, 216, 218 that are located between an identified minimum and maximum coordinate value of the VOI 228, a slab representative of an average of the successive series of slices 214, 216, 218, a three dimensional display of the VOI 228" [0056]; "For example, in response to receiving the instruction from the input device 145, step 320 can include automatically calculating or selecting the slice 218 from the succession of slices 214, 216, 218 along the ray path 292 of the selected image element (e.g., voxel, pixel, etc.) identified per an instruction (e.g., click of a mouse device) generated via the input device 145" [0059]. In this case, each of the slices 214, 216 and 218 (i.e. projection images) is associated with a respective angle (see [0034]) centrally located in a spatial relation to the VOI 228.
In order for the subset of the succession of slices 214, 216, 218 to be centrally located between an identified minimum and maximum coordinate value of the VOI 228, the plurality of projection images must correspond to a subset of angles less than the plurality of angles (i.e. the angles covering the entire VOI 228). Therefore, the examiner respectfully maintains that Bernard teaches "wherein the plurality of projection images corresponds to a subset of angles less than the plurality of angles".

Fourth, the Applicant argues that the office does not identify disclosure in either reference that teaches or suggests "constructing of four-dimensional image data comprising a representation of the object from each of the plurality of angles for each of the plurality of time-points". Bernard mentions a "four-dimensional anatomical roadmap," but does not teach constructing four-dimensional image data as claimed, i.e., a representation of the object from each of the plurality of angles for each of the plurality of time-points. Ullberg does not disclose constructing four-dimensional image data at all. Accordingly, the Applicant argues that the specific features recited in independent claims 1, 13, and 24 are not taught or disclosed by Bernard in view of Ullberg.

The examiner respectfully disagrees that Bernard does not teach "constructing of four-dimensional image data comprising a representation of the object from each of the plurality of angles for each of the plurality of time-points". Specifically, Bernard discloses "The imaging system 25 is generally operable to generate a two-dimensional, three-dimensional, or four-dimensional image data corresponding to an area of interest of the imaged subject 110" [0021] and "The exemplary software is also operable to use two- or three-dimensional MRI, CT and/or X-ray acquired image data generated by the imaging system 25 to build a digitized three-, or four-dimensional anatomical roadmap or model of a patient's anatomy" [0030].
In this case, the four-dimensional image/anatomical roadmap is built from the acquired image data, the acquired image data including a series of slice planes 214, 216, 218 (see [0035]) acquired at a plurality of angles (see [0034]) for each of a plurality of time-points (i.e. the slice planes are each acquired at a different time-point because they are acquired in succession, see [0035]). Therefore, the examiner respectfully maintains the rejection of claims 1, 13 and 24 for the reasons stated above.

B. Claims 3 and 15

Claims 3 and 15 stand rejected under 35 U.S.C. § 103 as being unpatentable over Bernard in view of Ullberg and further in view of U.S. Publication No. 2012/0022401 (hereinafter "Fischer"). Office Action, at 19. With respect, the office should withdraw these rejections. As explained above, independent claim 1 is not rendered obvious over Bernard in view of Ullberg. The office's further application of Fischer to dependent claims 3 and 15 does not cure the deficiencies of Bernard in view of Ullberg as applied to independent claim 1, as the office does not establish that a person of ordinary skill in the art who begins with Bernard in view of Ullberg and then reads Fischer would have any particular motivation to make the necessary modifications to the cited references to arrive at independent claim 1. Thus, because claim 1 is nonobvious and because claims 3 and 15 depend from independent claim 1, Applicant requests reconsideration and withdrawal of the 35 U.S.C. § 103 rejections of claims 3 and 15. See In re Fine, 837 F.2d 1071, 1076 (Fed. Cir. 1988) ("Dependent claims are nonobvious under section 103 if the independent claims from which they depend are nonobvious."). The examiner respectfully maintains that Fischer was referred to in order to teach the limitations of claims 3 and 15 for the reasons stated in the 35 U.S.C. 103 section below. Thus, the rejection of claims 3 and 15 under 35 U.S.C. 103 is respectfully maintained.

C. Claims 5 and 17

Claims 5 and 17 stand rejected under 35 U.S.C. § 103 as being unpatentable over Bernard in view of Ullberg and further in view of U.S. Publication No. 2005/0075563 (hereinafter "Sukovic"). Office Action, at 20. With respect, the office should withdraw these rejections. As explained above, independent claims 1 and 13 are not rendered obvious over Bernard in view of Ullberg. The office's further application of Sukovic to dependent claims 5 and 17 does not cure the deficiencies of Bernard in view of Ullberg as applied to independent claims 1 and 13, as the office does not establish that a person of ordinary skill in the art who begins with Bernard in view of Ullberg and then reads Sukovic would have any particular motivation to make the necessary modifications to the cited references to arrive at independent claims 1 and 13. Thus, because claims 1 and 13 are nonobvious and because claims 5 and 17 depend from independent claims 1 and 13, respectively, Applicant requests reconsideration and withdrawal of the 35 U.S.C. § 103 rejections of claims 5 and 17. See In re Fine, 837 F.2d 1071, 1076 (Fed. Cir. 1988) ("Dependent claims are nonobvious under section 103 if the independent claims from which they depend are nonobvious."). The examiner respectfully maintains that Sukovic was referred to in order to teach the limitations of claims 5 and 17 for the reasons stated in the 35 U.S.C. 103 section below. Thus, the rejection of claims 5 and 17 under 35 U.S.C. 103 is respectfully maintained.

D. Claims 6 and 18

Claims 6 and 18 stand rejected under 35 U.S.C. § 103 as being unpatentable over Bernard in view of Ullberg and further in view of U.S. Publication No. 2013/0336450 (hereinafter "Kyriakou"). Office Action, at 22. With respect, the office should withdraw these rejections. As explained above, independent claims 1 and 13 are not rendered obvious over Bernard in view of Ullberg.
The office's further application of Kyriakou to dependent claims 6 and 18 does not cure the deficiencies of Bernard in view of Ullberg as applied to independent claims 1 and 13, as the office does not establish that a person of ordinary skill in the art who begins with Bernard in view of Ullberg and then reads Kyriakou would have any particular motivation to make the necessary modifications to the cited art to arrive at independent claims 1 and 13. Thus, because claims 1 and 13 are nonobvious and because claims 6 and 18 depend from independent claims 1 and 13, respectively, Applicant requests reconsideration and withdrawal of the 35 U.S.C. § 103 rejections of claims 6 and 18. See In re Fine, 837 F.2d 1071, 1076 (Fed. Cir. 1988) ("Dependent claims are nonobvious under section 103 if the independent claims from which they depend are nonobvious.").

The examiner respectfully maintains that Kyriakou was referred to in order to teach the limitations of claims 6 and 18 for the reasons stated in the 35 U.S.C. 103 section below. Thus, the rejection of claims 6 and 18 under 35 U.S.C. 103 is respectfully maintained.

E. Claims 9-10 and 21-22

Claims 9-10 and 21-22 stand rejected under 35 U.S.C. § 103 as being unpatentable over Bernard in view of Ullberg and further in view of U.S. Publication No. 2008/0240533 (hereinafter "Piron"). Office Action, at 23. With respect, the office should withdraw these rejections. As explained above, independent claims 1 and 13 are not rendered obvious over Bernard in view of Ullberg. The office's further application of Piron to dependent claims 9-10 and 21-22 does not cure the deficiencies of Bernard in view of Ullberg as applied to independent claims 1 and 13, as the office does not establish that a person of ordinary skill in the art who begins with Bernard in view of Ullberg and then reads Piron would have any particular motivation to make the necessary modifications to the cited references to arrive at independent claims 1 and 13.
Thus, because claims 1 and 13 are nonobvious and because claims 9-10 and 21-22 depend from independent claims 1 and 13, respectively, Applicant requests reconsideration and withdrawal of the 35 U.S.C. § 103 rejections of claims 9-10 and 21-22. See In re Fine, 837 F.2d 1071, 1076 (Fed. Cir. 1988) ("Dependent claims are nonobvious under section 103 if the independent claims from which they depend are nonobvious.").

The examiner respectfully maintains that Piron was referred to in order to teach the limitations of claims 9-10 and 21-22 for the reasons stated in the 35 U.S.C. 103 section below. Thus, the rejection of claims 9-10 and 21-22 under 35 U.S.C. 103 is respectfully maintained.

F. Claims 11 and 23

Claims 11 and 23 stand rejected under 35 U.S.C. § 103 as being unpatentable over Bernard in view of Ullberg and further in view of U.S. Publication No. 2006/0269114 (hereinafter "Metz"). Office Action, at 28. With respect, the office should withdraw these rejections. As explained above, independent claims 1 and 13 are not rendered obvious over Bernard in view of Ullberg. The office's further application of Metz to dependent claims 11 and 23 does not cure the deficiencies of Bernard in view of Ullberg as applied to independent claims 1 and 13, as the office does not establish that a person of ordinary skill in the art who begins with Bernard in view of Ullberg and then reads Metz would have any particular motivation to make the necessary modifications to the cited references to arrive at independent claims 1 and 13. Thus, because claims 1 and 13 are nonobvious and because claims 11 and 23 depend from independent claims 1 and 13, respectively, Applicant requests reconsideration and withdrawal of the 35 U.S.C. § 103 rejections of claims 11 and 23. See In re Fine, 837 F.2d 1071, 1076 (Fed. Cir. 1988) ("Dependent claims are nonobvious under section 103 if the independent claims from which they depend are nonobvious.").
The examiner respectfully maintains that Metz was referred to in order to teach the limitations of claims 11 and 23 for the reasons stated in the 35 U.S.C. 103 section below. Thus, the rejection of claims 11 and 23 under 35 U.S.C. 103 is respectfully maintained.

G. Claim 12

Claim 12 stands rejected under 35 U.S.C. § 103 as being unpatentable over Bernard in view of Ullberg and further in view of U.S. Publication No. 2005/0063611 (hereinafter "Toki"). Office Action, at 30. With respect, the office should withdraw this rejection. As explained above, independent claim 1 is not rendered obvious over Bernard in view of Ullberg. The office's further application of Toki to dependent claim 12 does not cure the deficiencies of Bernard in view of Ullberg as applied to independent claim 1, as the office does not establish that a person of ordinary skill in the art who begins with Bernard in view of Ullberg and then reads Toki would have any particular motivation to make the necessary modifications to the cited references to arrive at independent claim 1. Thus, because claim 1 is nonobvious and because claim 12 depends from independent claim 1, Applicant requests reconsideration and withdrawal of the 35 U.S.C. § 103 rejection of claim 12. See In re Fine, 837 F.2d 1071, 1076 (Fed. Cir. 1988) ("Dependent claims are nonobvious under section 103 if the independent claims from which they depend are nonobvious.").

The examiner respectfully maintains that Toki was referred to in order to teach the limitations of claim 12 for the reasons stated in the 35 U.S.C. 103 section below. Thus, the rejection of claim 12 under 35 U.S.C. 103 is respectfully maintained.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and content of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4, 7-8, 13-14, 16, 19-20, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Bernard et al., US 2009/0080765 A1 (“Bernard”), in view of Ullberg, US 2005/0008124 A1 (“Ullberg”).

Regarding claims 1, 13 and 24, Bernard teaches “A digital tomosynthesis method for guided use of a surgical instrument, the method comprising:” (Claim 1) (“FIG. 2 illustrates a flow diagram of an embodiment of a method to automatically generate a selected visualization of an anatomical region of interest of an imaged subject using the system illustrated in FIG.
1” [0013]; “Step 210 includes generating a digital three-dimensional, reconstructed model or volume 212 of the imaged anatomy […] generating a series of slice planes 214, 216, 218 of image data in succession relative to one another. A term used to refer herein to this technique is tomosynthesis. All or a portion of the acquired image data or frames (e.g., two-dimensional radiological image frames) can be used during this tomosynthesis reconstruction to generate the digital, three-dimensional reconstructed volume 212 of the imaged anatomy (e.g., breast tissue)” [0035]. Therefore, the method shown in FIG. 2 represents a digital tomosynthesis method. Furthermore, regarding guided use of a surgical instrument, Bernard discloses “In accordance with one embodiment, one of the tracking elements 110 or 115 is attached at the tool 105 being tracked traveling through the imaged subject 22. […] The navigation system 125 is operable to track movement of the object 105 in accordance to known mathematical algorithms […] The exemplary software is also operable to use two- or three-dimensional MRI, CT and/or X-ray acquired image data generated by the imaging system 25 to build a digitized three- or four-dimensional anatomical roadmap or model of a patient’s anatomy and electromagnetic (EM) tracking technology that operates as a type of global-type positioning system to show a real-time spatial relation or location of the tool 105, as illustrated with a representation 120 (a cursor, triangle, square, cross-hairs, etc.), relative to the anatomical roadmap” [0030]. In this case, the tool 105 “includes [a] surgical tool, [a] navigational tool, a guidewire, a catheter, an endoscopic tool, a laparoscopic tool, [an] ultrasound probe, [a] pointer, [an] aspirator, [a] coil, or the like employed in a medical procedure” [0028]. Additionally, tool 105 is within the system 20 shown in FIG. 1. Therefore, the method shown in FIG. 2, which is carried out by the system 20 of FIG. 
1 (see [0033]), is used to guide a surgical instrument (i.e. via a digitized three- or four-dimensional anatomical roadmap showing a real-time spatial relation or location of the tool 105, see [0030]). “A digital tomosynthesis device for guided use of a surgical instrument, the device comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the device to:” (Claim 13) and “A digital tomosynthesis system for guided use of a surgical instrument, the system comprising: a surgical instrument; and a computing device configured to:” (Claim 24) (See [0028] and [0030] above and “FIG. 1 illustrates an embodiment of a system 20 operable to generate a selected visualization of a region of interest of an imaged subject 22. The system 20 generally includes an imaging system 25 operable to acquire multiple different views of anatomical images of the imaged subject 22” [0020]; “The system 20 also includes a controller 130 connected in communication with the imaging system 25 and the navigation system 100. The controller 130 generally includes a processor 135 in communication in a conventional manner with a memory 140. The memory 140 generally includes a data memory and a program memory 140 configured to store computer readable program instructions to be executed by the processor 135” [0031]. Therefore, the system shown in FIG. 1 includes a digital tomosynthesis device for guided use of a surgical instrument (i.e. tool 105), the device comprising one or more processors (i.e. processor 135) and a memory (i.e. memory 140) storing instructions that are executed by the one or more processors. Furthermore, the digital tomosynthesis system (i.e. system 20, shown in FIG. 1) includes a surgical instrument (i.e. tool 105) and a computing device (i.e.
processor 135).); “acquire(ing), while a surgical instrument is inserted into an object and at a plurality of time-points, a series of projection images of the object, each of the projection images being associated with a corresponding time-point of a plurality of time-points, and the series of projection images being acquired over the plurality of time-points and at a plurality of angles, […] wherein the plurality of projection images corresponds to a subset of angles less than the plurality of angles” (Claims 1, 13 and 24) (“The imaging system 25 is generally operable to generate a two-dimensional, three-dimensional, or four-dimensional image data corresponding to an area of interest of the imaged subject 110” [0021]; “Step 205 includes acquiring a plurality of projected images (P1 through Pn) at a plurality of directions or angles (D1 through Dn) of the anatomical area of interest (e.g., the breast tissue) of the imaged subject 22” [0034]; “In accordance with one embodiment, one of the tracking elements 110 or 115 is attached at the tool 105 being tracked traveling through the imaged subject 22 […] The exemplary software is also operable to use two- or three-dimensional MRI, CT and/or X-ray acquired image data generated by the imaging system 25 to build a digitized three- or four-dimensional anatomical roadmap or model of a patient’s anatomy […] to show a real-time spatial relation or location of the tool 105, as illustrated with a representation 120 (a cursor, triangle, square, cross-hairs, etc.), relative to the anatomical roadmap” [0030]; “Step 210 includes generating a digital, three-dimensional, reconstructed model or volume 212 of the imaged anatomy. An embodiment of the three-dimensional, reconstructed volume 212 is created by applying a back-projection reconstruction algorithm to the acquired image data or any other 3D reconstruction algorithm, generating a series of slice planes 214, 216, 218 of image data in succession relative to one another.
A term used to refer herein to this technique is tomosynthesis. All or a portion of the acquired image data or frames (e.g., two-dimensional radiological image frames) can be used during this tomosynthesis reconstruction to generate the digital, three-dimensional reconstructed volume 212 of the imaged anatomy (e.g., breast tissue)” [0035]. In this case, each of the acquired images (i.e. projected images P1 through Pn) is associated with a corresponding angle (i.e. D1 to Dn). Since a portion of the acquired image data or frames (i.e. a subset) can be used during tomosynthesis reconstruction, this would imply that a subset of angles less than the plurality of angles is used to generate the three-dimensional reconstructed volume 212. Therefore, the method, carried out by the system, involves acquiring, while a surgical instrument is inserted into an object, and at a plurality of time-points, a series of projection images of the object, each of the projection images (i.e. P1 through Pn) being associated with a corresponding time-point of a plurality of time-points, and the series of projection images being acquired over the plurality of time-points and at a plurality of angles (i.e. D1 through Dn), wherein the plurality of projection images corresponds to a subset of angles less than the plurality of angles.). “construct(ing) four dimensional image data from the series of projection images, wherein the four dimensional image data comprises a representation of the object from each of the plurality of angles for each of the plurality of time-points” (Claims 1, 13 and 24) (See [0030]: “to build a digitized three-, or four-dimensional anatomical roadmap […] to show a real-time spatial relation or location of the tool 105, as illustrated with a representation 120 (a cursor, triangle, square, cross-hairs, etc.), relative to the anatomical roadmap”.
Therefore, the method carried out by the system involves constructing four-dimensional image data from the series of projection images, wherein the four-dimensional image data comprises a representation of the object from each of the plurality of angles for each of the plurality of time-points and providing (i.e. displaying in real-time) the four-dimensional image data as guidance information (i.e. roadmap) for a surgical procedure associated with the surgical instrument (i.e. tool 105).). Bernard does not teach “wherein at each time-point, a plurality of projection images are acquired”. Ullberg is within the same field of endeavor as the claimed invention because it involves an apparatus and method for obtaining tomosynthesis data (see [0007]). Ullberg teaches “wherein at each time-point, a plurality of projection images are acquired” (“providing a divergent radiation source emitting radiation centered around an axis of symmetry, and a radiation detector comprising a stack of line detectors, each being directed towards the divergent radiation source to allow a ray bundle of the radiation that propagates in a respective one of a plurality of different angles to enter the line detector after having been transmitted through an object to be examined, and moving the radiation source and the radiation detector relative the object linearly in a direction orthogonal to the axis of symmetry, while each of the line detectors records line images of radiation as transmitted through the object in a respective one of the different angles, a plurality of two-dimensional images can be formed, where each two-dimensional image is formed from a plurality of line images as recorded by a single one of the line detectors. Thus, a plurality of two-dimensional images at different angles are produced in a single scan, which reduces the detection time by a factor corresponding to the number of two-dimensional images produced. 
The data from the apparatus is excellent to be used in tomosynthesis or laminographic imaging” [0011]. Therefore, since a plurality of two-dimensional images at different angles are produced in a single scan, the method carried out by the apparatus acquires multiple projection images simultaneously. Therefore, at each time point (i.e. within the single scan) a plurality of projection images are acquired.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the digital tomosynthesis method, device, and system of Bernard such that at each time-point, a plurality of projection images are acquired as disclosed in Ullberg in order to reduce detection time (see Ullberg: [0011]) and therefore limit the amount of ionizing radiation that is absorbed by the patient being examined. Acquiring multiple projection images (i.e. from multiple angles) simultaneously (i.e. via the radiation detector 6 containing the radiation detectors 6a in Ullberg FIG. 1) is one of a finite number of techniques which can be used to reduce detection time as well as limit the amount of ionizing radiation absorbed by a patient with a reasonable expectation of success. Thus, modifying the digital tomosynthesis method, device, and system of Bernard such that the detector 40 includes the radiation detectors 6a disclosed in Ullberg would yield the predictable result of enabling the detector to obtain a plurality of projection images in a single scan (i
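The pipeline debated throughout this action (projection images acquired at a plurality of angles for each of a plurality of time-points, a subset of those projections back-projected into slice planes, and the per-time-point volumes stacked into four-dimensional data) can be sketched in code. This is only an illustrative shift-and-add tomosynthesis sketch, not the actual method of Bernard, Ullberg, or the pending claims; the function names, parameters, and geometry simplifications here are all hypothetical.

```python
import numpy as np

def shift_and_add(projections, angles, n_slices, slice_spacing=1.0):
    """Reconstruct a stack of slice planes from 2D projections.

    Classic shift-and-add tomosynthesis: structures at depth z stay
    aligned when each projection is shifted by ~tan(angle) * z, so
    averaging the shifted projections brings that plane into focus.
    """
    n_rows, n_cols = projections[0].shape
    volume = np.zeros((n_slices, n_rows, n_cols))
    for z in range(n_slices):
        for proj, theta in zip(projections, angles):
            shift = int(round(np.tan(theta) * z * slice_spacing))
            volume[z] += np.roll(proj, shift, axis=1)
        volume[z] /= len(projections)
    return volume

def reconstruct_4d(proj_by_time, angles, angle_subset, n_slices):
    """Build 4D image data (time, z, row, col): at each time-point,
    reconstruct a 3D volume from only a subset of the acquired angles,
    then stack the per-time-point volumes along the time axis."""
    frames = []
    for projections in proj_by_time:
        subset_projs = [projections[i] for i in angle_subset]
        subset_angles = [angles[i] for i in angle_subset]
        frames.append(shift_and_add(subset_projs, subset_angles, n_slices))
    return np.stack(frames)
```

For example, with 3 time-points, 5 acquisition angles, and a subset of 3 angles per reconstruction, the result has shape (3, n_slices, rows, cols): one focused volume per time-point, which is the sense in which "a subset of angles less than the plurality of angles" feeds each frame of the 4D data.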

Prosecution Timeline

Dec 05, 2022 — Application Filed
Aug 13, 2024 — Non-Final Rejection (§103)
Feb 13, 2025 — Response Filed
Apr 07, 2025 — Final Rejection (§103)
Jul 16, 2025 — Request for Continued Examination
Jul 21, 2025 — Response after Non-Final Action
Jul 24, 2025 — Non-Final Rejection (§103)
Oct 27, 2025 — Response Filed
Dec 02, 2025 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599359 — ULTRASOUND DIAGNOSTIC APPARATUS AND METHOD OF CONTROLLING ULTRASOUND DIAGNOSTIC APPARATUS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12594125 — VISUALIZATION SYSTEM AND METHOD FOR ENT PROCEDURES (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594052 — METHOD AND DEVICE FOR LOCALIZING A VEIN WITHIN A LIMB (granted Apr 07, 2026; 2y 5m to grant)
Patent 12582385 — SYSTEMS AND METHODS FOR ULTRASOUND IMAGING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12575759 — MEDICAL IMAGE DIAGNOSTIC APPARATUS, COUCH DEVICE, AND CONTROL METHOD (granted Mar 17, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 73%
With Interview: 93% (+20.7%)
Median Time to Grant: 3y 1m
PTA Risk: High

Based on 315 resolved cases by this examiner. Grant probability derived from career allow rate.
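These headline figures appear to follow from simple arithmetic on the career counts reported above (229 granted of 315 resolved, plus the +20.7 point interview lift). A quick sanity check, assuming the page merely rounds the career allow rate and adds the lift directly (the exact rounding scheme is an assumption):

```python
# Figures taken from the examiner statistics above.
granted, resolved = 229, 315
interview_lift = 20.7  # percentage points

allow_rate = granted / resolved * 100       # career allow rate, percent
print(round(allow_rate))                    # 73, the stated grant probability
print(round(allow_rate + interview_lift))   # 93, the stated with-interview figure
```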
