Prosecution Insights
Last updated: April 19, 2026
Application No. 18/016,659

SYSTEM AND METHOD FOR TIME-SERIES IMAGING

Non-Final OA — §103, §112
Filed
Jan 17, 2023
Examiner
MENDEZ MUNIZ, DYLAN JOHN
Art Unit
2675
Tech Center
2600 — Communications
Assignee
BOSTON SCIENTIFIC CORPORATION
OA Round
3 (Non-Final)
83% Grant Probability (Favorable)
3-4 Expected OA Rounds
2y 11m To Grant
99% With Interview

Examiner Intelligence

Career Allow Rate: 83% — above average (15 granted / 18 resolved; +21.3% vs TC avg)
Interview Lift: +25.0% — strong lift among resolved cases with an interview vs. without
Typical timeline: 2y 11m avg prosecution; 15 currently pending
Career history: 33 total applications across all art units

Statute-Specific Performance

§101: 16.3% (-23.7% vs TC avg)
§103: 44.8% (+4.8% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 18 resolved cases

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted were filed on 01/17/2023 and 04/12/2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Arguments

Regarding claims 25-29, 31-33, 35-37 and 40, more specifically claim 25, applicant argues that the asserted combination of Wolthaus and Brehm does not teach newly amended claim 25, more specifically that Brehm does not teach the added limitation of “physiological motion cycle results in less artifacts in the motion compensated 3D volumetric medical image than the dominant motion”. Examiner has considered the new amendments; however, the added limitation of “physiological motion cycle results in less artifacts in the motion compensated 3D volumetric medical image than the dominant motion” now raises a 112(b) issue, since its clarity is in question under a BRI (broadest reasonable interpretation). Examiner interprets the limitation, for examination purposes, as any result that also utilizes the secondary motion and also eliminates motion artifacts. In addition, Brehm also teaches the other part of the limitation, “estimating… a secondary motion…” and “constructing a 3D volumetric derived image by combining data from respective one of the plurality of motion compensated 3D volumetric images based on the estimated secondary motion”, in paragraph 4 and all of fig. 10A, more specifically 1016 (this figure shows a determination of volumetric images based on the combination of breathing and cardiac registrations, which falls within the BRI (broadest reasonable interpretation) of “combining data from the motion compensated 3D volumetric image based on the estimated secondary motion”).
Therefore claim 25 remains rejected, and all of claims 25-29, 31-33, 35-37 and 40 remain rejected under 35 U.S.C. 103.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 25-29, 31-33, 35-37 and 40 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 25, the claim recites “wherein the secondary physiological motion cycle results in less artifacts in the motion compensated 3D volumetric medical image than the dominant motion.” It is unclear from the context of the claim what is being used from the results to compare to the dominant motion, and what is being used from the dominant motion to compare to the results of the secondary cycle. One of ordinary skill in the art would ask: “What exactly from the secondary physiological motion cycle result is being used to compare to the dominant motion?”, “What exactly from the dominant motion is being used to compare to the secondary motion result?”, “Is it an image, a measure, a 2D image, a 3D image, another motion compensated image, or a 4D image?”, “Is the comparison possible?”. Therefore, one of ordinary skill in the art would not be able to ascertain the scope of the claim for reasons regarding clarity.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 25-29, 31-33, 35-37 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Wolthaus et al., hereafter Wolthaus (J. W. H. Wolthaus, J. J. Sonke, M. van Herk, E. M. F. Damen, Reconstruction of a time-averaged midposition CT scan for radiotherapy planning of lung cancer patients using deformable registration, 11 August 2008) in view of Brehm et al., hereafter Brehm (US Publication No.
20170249740 A1).

As per claim 25, Wolthaus teaches “a method for creating a motion compensated 3D volumetric derived image from a 4D volumetric medical image comprising: acquiring a 4D volumetric medical image comprising a plurality of 3D volumetric medical images forming a time-series of 3D volumetric medical images representing phases of a physiological motion cycle;” (See abstract and introduction in Wolthaus: “We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced.” The motion of structures in Wolthaus represents the phases of a physiological cycle. See paragraph 144 and fig. 10A in Brehm: “[0144] More specifically, at 1002, 4D images for each of the breathing bins are determined…” 4D images are determined for each breathing phase bin (a time series of 3D volumetric medical images representing phases of a physiological cycle). See also fig. 1 on page 2 and fig. 3 on page 4, which show a plurality of 3D volumetric images as well as the plurality of motion compensated 3D images.) “determining, for each of the plurality of 3D volumetric medical image, a dominant motion with respect to a reference location that represents a position relative to the motion cycle;” (See abstract and II. Methods and Materials. Examiner interprets “dominant motion” (within a BRI) as any physiological cycle motion set as primary, first, or bigger/stronger motion, and “a reference location that represents a position relative to the motion cycle” as the local mean position and also as the midposition (MidP) throughout the document. Abstract: “From the (4D) deformation vector field (DVF) derived, the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position.” II.
Methods and Materials: “In the framework to create a 3D MidP CT scan (Fig. 1), the physical motion in the 4D CT scan is estimated from each frame and subsequently compensated to the time-weighted mean position, thereby eliminating motion. Averaging these frames of the motion-compensated 4D CT scan (over time) results in a 3D CT scan with reduced artifacts and noise: The MidP CT scan.” See also fig. 1 on page 2 and fig. 3 on page 4, which show a plurality of 3D volumetric images as well as the plurality of motion compensated 3D images. Wolthaus) “resampling each of the plurality of 3D volumetric medical images to the reference location to generate a plurality of motion compensated 3D volumetric medical images and compensate for the dominant motion;” (See abstract and Methods and Materials section. Examiner interprets resampling as a modification. Abstract: “4D DVF was modified to deform all structures of the original 4D CT scan to this mean position.” “From the (4D) deformation vector field (DVF) derived, the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position.” “In the framework to create a 3D MidP CT scan (Fig. 1), the physical motion in the 4D CT scan is estimated from each frame and subsequently compensated to the time-weighted mean position, thereby eliminating motion. Averaging these frames of the motion-compensated 4D CT scan (over time) results in a 3D CT scan with reduced artifacts and noise: The MidP CT scan.” See also fig. 1 on page 2 and fig. 3 on page 4, which show a plurality of 3D volumetric images as well as the plurality of motion compensated 3D images. Wolthaus).
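For orientation, the mid-position (MidP) construction the quoted passages describe — estimate each frame's displacement, warp every frame to the time-weighted mean position, then average — can be sketched in a few lines of NumPy. This is an illustrative toy only, not part of the record: rigid integer shifts stand in for the deformation vector field (DVF), and none of the variable names below come from the cited references.

```python
import numpy as np

# Toy sketch of MidP averaging: align each 3D frame of a 4D series to the
# time-weighted mean position of the motion cycle, then average the aligned
# frames. Integer voxel shifts stand in for a real deformable registration.

rng = np.random.default_rng(0)

n_phases = 8
base = np.zeros((16, 16, 16))
base[6:10, 6:10, 6:10] = 1.0  # a stationary "structure"

# Simulated per-phase motion along one axis, plus acquisition noise.
shifts = np.round(3 * np.sin(2 * np.pi * np.arange(n_phases) / n_phases)).astype(int)
frames = [np.roll(base, s, axis=0) + 0.2 * rng.standard_normal(base.shape)
          for s in shifts]

# Time-weighted mean position (uniform weights here).
mean_shift = int(round(shifts.mean()))

# "Resample" each frame to the mean position, compensating the dominant motion.
compensated = [np.roll(f, mean_shift - s, axis=0) for f, s in zip(frames, shifts)]

midp = np.mean(compensated, axis=0)   # MidP volume: aligned, noise averaged down
naive = np.mean(frames, axis=0)       # naive time average: motion-blurred

# The motion-compensated average matches the static structure more closely.
reference = np.roll(base, mean_shift, axis=0)
err_midp = np.abs(midp - reference).mean()
err_naive = np.abs(naive - reference).mean()
assert err_midp < err_naive
```

A real implementation would replace `np.roll` with resampling through a per-voxel deformation vector field, as in the deformable registration the reference describes.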
However, Wolthaus does not completely teach “estimating… a secondary motion resulting from a secondary physiological motion cycle; and constructing… a 3D volumetric derived image by combining data from the respective one of the plurality of motion compensated 3D volumetric images based on the estimated secondary motion, wherein the secondary physiological motion cycle results in less artifacts in the motion compensated 3D volumetric medical image than the dominant motion.” Brehm teaches “estimating, for each of the plurality of motion compensated 3D volumetric medical images, a secondary motion resulting from a secondary physiological motion cycle” (See fig. 10A, 1012. Examiner interprets “secondary motion” as either the cardiac registration or breathing registration. See also paragraph 2: “For the case in which the target region moves in a periodic motion (e.g., due to breathing and/or cardiac motion), the CT system may be used to determine volumetric images of the target when the target is at different breathing states and/or cardiac states, so that the volumetric images may be played back as respective video streams for breathing motion and cardiac motion. To this end, projection images of a target patient at various breathing and cardiac states are obtained.” See also paragraph 202, where Brehm explains that numbers do not necessarily convey order: “[0202] It should be noted that the terms ‘first’, ‘second’, ‘third’, etc., are used to distinguish different items, and do not necessarily convey order.” See also paragraph 166: “[0166] In one or more embodiments, a self-correction technique may be implemented in order to reduce and/or eliminate residual approximation errors associated with applying the various registrations to projection images from other combined phase bins in order to generate more additional volumetric images for a particular combined phase bin.
In order to perform a self-correction of the motion estimation process of the respective physiological cycle, a synthetic dataset may be created through a simulated measurement process of various projection images.” See also fig. 10A and fig. 4. Brehm) “and constructing, for each of the plurality of motion compensated 3D volumetric medical images, a 3D volumetric derived image by combining data from the respective one of the plurality of motion compensated 3D volumetric images based on the estimated secondary motion” (See fig. 10A, 1016, which shows a determination of a 3D volumetric image based on the combination of respiratory motion and cardiac motion (which includes the secondary motion). Brehm. See also paragraph 4: “[0004] However, when taking both breathing motion and cardiac motion in account, improved approaches are required to appropriately sort the various images into respective combined phase bins that correspond to both breathing motion and cardiac motion, and to accurately reconstruct volumetric images for these combined phase bins”. See also paragraphs 80 and 114-117: “[0115]… In some embodiments, each of the volumetric images in the sequence may be determined (e.g., constructed) using all of the projection images P1-P9 from the different amplitude bins. In other embodiments, one or more of the new volumetric images may be determined using one or more of the projection images, but not all, from each amplitude bin.” See also paragraphs 149-154 and 65-70. Brehm) “wherein the secondary physiological motion cycle results in less artifacts in the motion compensated 3D volumetric medical image than the dominant motion.” (See paragraph 166: the results used from the secondary motion are also preprocessed and reconstructed to eliminate motion artifacts that may have been present before (which utilizes the primary/dominant motion, as seen in fig. 10A).
The combined registration includes the results of the secondary motion, and it eliminates the motion artifacts; therefore it will have fewer artifacts than when only the dominant motion is used. Therefore, it covers the interpretation under a BRI (broadest reasonable interpretation). “[0166] In one or more embodiments, a self-correction technique may be implemented in order to reduce and/or eliminate residual approximation errors associated with applying the various registrations to projection images from other combined phase bins in order to generate more additional volumetric images for a particular combined phase bin. In order to perform a self-correction of the motion estimation process of the respective physiological cycle, a synthetic dataset may be created through a simulated measurement process of various projection images. This synthetic dataset may be preprocessed and reconstructed in order to eliminate any motion artifacts. The resulting volumetric image may be forward projected at the same gantry positions where the original projection images were generated. In some cases, registration(s) R is determined on volumetric image(s) from synthetic dataset, and the registration(s) R is compared to other R in order to determine an intrinsic error. In other embodiments, the resulting volumetric image may be compared against the original volumetric image in order to determine an intrinsic error in the motion estimation process. The determined intrinsic error may then be used in order to perform a self-correction of the various registrations between one combined phase bin to another combined phase bin.” Brehm)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Wolthaus and Brehm to include a secondary motion and also eliminate motion artifacts.
The modification would have been motivated by the desire to construct a better motion compensated volumetric image that improves diagnosis and treatment planning: by taking into account both cardiac and respiratory motions (additional bodily motions), instead of estimating only one motion, the method has a more complete motion estimation of the body; in addition, the motion artifacts are eliminated in order to have corrected registrations, as suggested by Brehm (See paragraph 2: “[0002] Sometimes, for diagnostic purposes and/or for radiation treatment planning, the target region of the patient may be imaged using a CT system. For example, CT imaging may have interventional and surgical applications, e.g., checking if inserted heart valve is working properly. As another example, cone beam CT (CBCT) imaging may be performed to check patient positioning, setup patient, etc. … For the case in which the target region moves in a periodic motion (e.g., due to breathing and/or cardiac motion), the CT system may be used to determine volumetric images of the target when the target is at different breathing states and/or cardiac states, so that the volumetric images may be played back as respective video streams for breathing motion and cardiac motion. To this end, projection images of a target patient at various breathing and cardiac states are obtained.” See also paragraph 166: “In order to perform a self-correction of the motion estimation process of the respective physiological cycle, a synthetic dataset may be created through a simulated measurement process of various projection images.
This synthetic dataset may be preprocessed and reconstructed in order to eliminate any motion artifacts.” Brehm) As per claim 26, Wolthaus in view of Brehm already teaches “the method as claimed in claim 25 wherein estimating the secondary motion comprises”, however Wolthaus in view of Brehm also teaches determining the secondary motion in each of the plurality of motion compensated 3D volumetric medical images (See fig. 10A 1012 Brehm) and each of the plurality of 3D volumetric medical images. (See fig. 2 204. The second motion (any of the physiological cycle) is used by a volumetric image. Brehm) As per claim 27, Wolthaus in view of Brehm already teaches “The method as claimed in claim 25, wherein the reference location is one of;”, however Wolthaus teaches at least one of “the location within the motion cycle in the 3D medical image; (See fig. 2, Wolthaus) an extremum location of the motion cycle; (See fig. 2, Wolthaus) a geometric combination of the location within the motion cycle in the 3D volumetric medical image; (See fig. 2, Wolthaus) or a mid-position location of the motion cycle.” (See figure 2, it teaches the use of a weighted mean position (mid-position of the motion cycle) for the respiratory motion. Wolthaus) As per claim 28, Wolthaus in view of Brehm already teaches “the method as claimed in claim 25, wherein determining the dominant motion”, however Wolthaus teaches “comprises determining an image registration of each of the plurality of original 3D volumetric images to the reference location.” (See sections II C Rigid registration and II D Deformable registration. Rigid registration is an image registration of each image to the reference location (region of interest) “To obtain this scan, first the tumor motion in the 4D CT scan was determined. To that end, a shaped region of interest (ROI) was manually defined in a reference CT frame, roughly encompassing the visual tumor. 
This ROI was subsequently registered to each frame of the 4D scan based on the correlation ratio of all voxels within the ROI to obtain a motion curve.” Wolthaus) As per claim 29, Wolthaus in view of Brehm already teaches “The method as claimed in claim 28, wherein determining an image registration of each of the plurality of 3D volumetric image to the reference location comprises one of;”, however Wolthaus also teaches “computing a direct image registration between the 3D volumetric image and a 3D volumetric image representing the reference location; or” (See in section II Methods and Materials; Deformable registration, Rigid registration and Fig. 10. In II. C Rigid registration the midventilation scan encompasses an image registration between an original frame and the reference location. “The midventilation (MidV) CT scan is a single 3D CT frame of a 4D dataset, with the tumor closest to its mean position. To obtain this scan, first the tumor motion in the 4D CT scan was determined. To that end, a shaped region of interest (ROI) was manually defined in a reference CT frame, roughly encompassing the visual tumor. This ROI was subsequently registered to each frame of the 4D scan based on the correlation ratio of all voxels within the ROI to obtain a motion curve” Wolthaus) and Brehm also teaches “composition of an image registration from the original 3D volumetric image to a second 3D volumetric images with a registration from the second 3D volumetric image to the reference location.” ( See fig 10A; 1010, 1012 and 1016. Brehm) As per claim 31, Wolthaus in view of Brehm already teaches “the method as claimed in claim 25, wherein the secondary motion”, however Wolthaus also teaches “varies spatially within the plurality of 3D volumetric medical images.” (See fig. 2 and abstract. Examiner interprets “varies spatially within the 3D volumetric images” as any location in the volumetric image, which Wolthaus’s invention can already do. 
Abstract: “From the (4D) deformation vector field (DVF) derived, the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position.” Wolthaus)

As per claim 32, Wolthaus in view of Brehm already teaches “the method as claimed in claim 25, wherein the plurality of 3D volumetric medical images are images of an abdominal or a thoracic cavity” (See abstract and Introduction. The MidP CT scan is a 3D image of the abdominal or thoracic cavity. Wolthaus. See also paragraph 2 in Brehm: “For the case in which the target region moves in a periodic motion (e.g., due to breathing and/or cardiac motion), the CT system may be used to determine volumetric images of the target when the target is at different breathing states and/or cardiac states, so that the volumetric images may be played back as respective video streams for breathing motion and cardiac motion” Brehm). In addition, Wolthaus and Brehm also teach “wherein the dominant motion is respiratory motion or cardiac motion,” (See abstract and Introduction. The MidP CT scan is a 3D image of the abdominal or thoracic cavity. Wolthaus. See also in Brehm all of fig. 10A: 1002, 1004, 1008, 1012. Brehm)

As per claim 33, Wolthaus in view of Brehm already teaches “the method as claimed in claim 28”, however Wolthaus and Brehm teach “wherein the image registration is a deformable registration.” (See II. Methods and Materials, D. Deformable registration. Wolthaus. See also Brehm paragraphs 34 and 35.)

As per claim 35, Wolthaus in view of Brehm already teaches “the method as claimed in claim 25, wherein estimating the secondary motion is made according to at least one of:” however Wolthaus also teaches at least one of “an estimate of blur or sharpness in the plurality of 3D volumetric medical images;” (Examiner interprets estimate of blur or sharpness as any indication of blur or sharpness, in this case noise. See III. Confidence measure.
“Some motion constraints ck(x) are unreliable, e.g., these constraints correspond to small or low-intensity (weak) features (or features that exist only in one of the two images) and noise.” Wolthaus. See also II. D. Deformable registration. “First, an image processing operation (Appendix—Quadrature image processing filter section, Fig. 10) was applied to the reference and floating scans, to convert the image into image-phase data (gray-value transitions from bright to dark and vice versa).” Wolthaus) regional image intensity in the plurality of 3D volumetric medical images; (Examiner interprets “regional intensity” as any indication of a contrast in a region. See fig. 3. It shows a contrast. Wolthaus) a measurement of the difference between the plurality of 3D volumetric medical images and a sharpened version of the plurality of 3D volumetric medical images; (Examiner interprets “a measurement” as any measurement. See IV.C. Quantification of tumor size, shape changes, and image fidelity, Fig. 8 “The dashed lines are a guide to the eye to appreciate the difference in apparent tumor size.” One image in fig. 8 is sharper than the other. See also III. Confidence Measure. Wolthaus) In addition Brehm teaches “or an estimate of the location of a 2D slice of the plurality of 3D volumetric medical images within a secondary motion cycle.” (See paragraph 158 in Brehm. Examiner interprets limitation as a 2D-3D registration.) As per claim 36, Wolthaus in view of Brehm already teaches “the method as claimed in claim 35, wherein the estimate of the location of a 2D slice of the plurality of 3D volumetric medical images within the secondary motion cycle comprises one or more of:”, however Wolthaus also teaches “using automatic image segmentation to determine a change in anatomy;” (See fig. 3, fig. 8 and fig. 9. The images have segments. Wolthaus); “using image processing techniques;” (Wolthaus teaches the use of image processing techniques. See abstract, II. D. 
Deformable registration: “First, an image processing operation (Appendix—Quadrature image processing filter section, Fig. 10) was applied to the reference and floating scans, to convert the image into image-phase data (gray-value transitions from bright to dark and vice versa).”); “or using a size of an automatically contoured anatomical region within the 2D slice.” (See figs. 3, 4 and 7. Wolthaus)

Wolthaus does not teach “using a measurement device to measure a physiological signal related to the secondary motion;”; “fitting a periodic signal to time ordered data;” and “using a time stamp of an acquisition of the 2D slice”. Brehm teaches “using a measurement device to measure a physiological signal related to the secondary motion;” (See paragraph 74 and fig. 4 in Brehm.), “fitting a periodic signal to time ordered data;” (See paragraph 74 and fig. 4 in Brehm.), and “using a time stamp of an acquisition of the 2D slice;” (See paragraphs 74, 77 and fig. 4. Brehm.)

As per claim 37, Wolthaus in view of Brehm already teaches “the method as claimed in claim 35, wherein the estimate of the image blur or sharpness is made according to at least one of:”, however Wolthaus also teaches at least one of the following: “image gradient information calculated using differential techniques;” (See II. Optical flow estimation. Wolthaus) “image gradient information calculated using frequency techniques;” (See IV. E. Quantification of noise: “Since median averaging preserves higher frequencies more than normal averaging (Sec. IV E), the SD is a bit higher.” Wolthaus) “image gradient information calculated using mathematical morphology techniques;” (“First, an image processing operation (Appendix—Quadrature image processing filter section, Fig. 10) was applied to the reference and floating scans, to convert the image into image-phase data (gray-value transitions from bright to dark and vice versa).” Wolthaus. See also III. C. Quantification of noise and fig. 6: “The noise of the MidPmedian scan was slightly higher than for the MidPmean scan (8%).” This is a difference from a sharpened version; the image with less noise is interpreted as a sharpened version of the same image. Wolthaus) “or using a machine learning model to estimate the degree of image blur or sharpness.” (See II. Methods and Materials: “Motion estimation is a large field of research and there are several algorithms, often using a similarity measure to drive the registration. In this paper, motion between two frames of the 4D CT scan was determined using a phase-based optical flow motion estimation procedure based on the work of Hemmendorff.” Examiner also notes that “machine learning model” is too broad and not specific. In addition, examiner notes that in the specification of the applicant, a reference is provided for estimation of image sharpness/blur with machine learning. Wolthaus)

As per claim 40, Wolthaus in view of Brehm already teaches “the method as claimed in claim 25, wherein constructing the 3D volumetric derived image comprises;”, however Wolthaus also teaches at least one of “or combining an intensity values of the selected data at each voxel;” (See fig. 10 and II. Optical flow estimation, equation A3. Examiner interprets “intensity” as any brightness/contrast influenced value. Wolthaus) “and setting the intensity values of the plurality of 3D volumetric derived images according to the combined intensity values.” (See fig. 10 and II. Optical flow estimation, equation A3. Examiner interprets “intensity” as any brightness/contrast influenced value. Wolthaus) Wolthaus does not teach “selecting data from at least one of the plurality of motion compensated 3D volumetric medical images based on the secondary motion estimate;”.
Brehm teaches “selecting data from at least one of the plurality of motion compensated 3D volumetric medical images based on the secondary motion estimate;” (See fig. 10A, 1016, which determines a volumetric image based on the combination of respiratory and cardiac motion. Brehm)

Pertinent Prior Art

Hu et al. (US 20200352524 A1) also discloses the elimination of motion artifacts in breathing or cardiac motions (See paragraph 59.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN J MENDEZ MUNIZ, whose telephone number is (703) 756-5672. The examiner can normally be reached M-F, 8 AM - 5 PM ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer, can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DYLAN JOHN MENDEZ MUNIZ/ Examiner, Art Unit 2675
/ANDREW M MOYER/ Supervisory Patent Examiner, Art Unit 2675

Prosecution Timeline

Jan 17, 2023
Application Filed
Apr 17, 2025
Non-Final Rejection — §103, §112
Jul 23, 2025
Response Filed
Oct 07, 2025
Final Rejection — §103, §112
Dec 09, 2025
Response after Non-Final Action
Jan 09, 2026
Request for Continued Examination
Jan 27, 2026
Response after Non-Final Action
Mar 25, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597231
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
2y 5m to grant — Granted Apr 07, 2026
Patent 12573053
Image Shadow Detection Method and System, and Image Segmentation Device and Readable Storage Medium
2y 5m to grant — Granted Mar 10, 2026
Patent 12573040
IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD
2y 5m to grant — Granted Mar 10, 2026
Patent 12567127
MEDICAL USE IMAGE PROCESSING METHOD, MEDICAL USE IMAGE PROCESSING PROGRAM, MEDICAL USE IMAGE PROCESSING DEVICE, AND LEARNING METHOD
2y 5m to grant — Granted Mar 03, 2026
Patent 12555175
METHOD FOR EMBEDDING INFORMATION IN A DECORATIVE LABEL
2y 5m to grant — Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview (+25.0%): 99%
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 18 resolved cases by this examiner. Grant probability derived from career allow rate.
