Prosecution Insights
Last updated: April 19, 2026
Application No. 18/135,482

Image Display Preset System and Method for C-Arm Imaging System

Final Rejection (§102, §103)
Filed: Apr 17, 2023
Examiner: HASKINS, TWYLER LAMB
Art Unit: 2639
Tech Center: 2600 — Communications
Assignee: GE Precision Healthcare LLC
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
To Grant: 2y 0m
With Interview: 42%

Examiner Intelligence

Career Allow Rate: 58% (21 granted / 36 resolved; -3.7% vs TC avg)
Interview Lift: -16.0% (minimal; based on resolved cases with interview)
Avg Prosecution: 2y 0m (fast prosecutor; 8 currently pending)
Total Applications: 44 across all art units (career history)

Statute-Specific Performance

§101: 8.0% (-32.0% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 34.7% (-5.3% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 36 resolved cases

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/06/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Objection to Drawings

The objection to the drawings still stands. The replacement drawing submitted by the applicant on 7/8/2025 appears to be the same Figure 1 originally submitted, with the element numbers still missing, and is still objected to because of the following informalities: Figure 1 has missing element numbers in the figure. In the specification, paragraph [0033] mentions "… a number of wheels 105, and…grasping handles 107"; the figure appears to depict both, but the element numbers are missing. Appropriate correction is required.

Response to Arguments

Applicant's arguments filed 07/08/2025 have been fully considered but they are not persuasive. Applicant argues that Ziehm (DE 202019002619 U1) does not disclose "the determination of the distribution of radiation attenuation values across the selected at least one portion of the 3D volume to determine a first window level and a first window width for a first material type represented in the distribution of radiation attenuation values to form a first window preset and to determine a second window level and a second window width for a second material type represented in the distribution of radiation attenuation values to form a second window preset" as required by claims 2 and 16. Examiner respectfully disagrees.
Regarding the argument above, Ziehm teaches (a) analyzing the voxel/attenuation distribution in a selected portion of a reconstructed 3D volume using HU/percentile-based statistics, (b) using those statistics to compute adapted windowing, and (c) computing different windowing values for different target classes (bone, soft tissue, foreign object components), i.e., multiple window settings/presets. These teachings map directly to "determine the distribution of radiation attenuation values across the selected at least one portion of the 3D volume" and to "determine a first/second window level and window width…to form…window presets." The following identifies where Ziehm discloses the claimed subject matter and explains why the Applicant's argument is not persuasive.

Regarding the claimed "the determination of the distribution of radiation attenuation values across the selected at least one portion of the 3D volume," Ziehm expressly teaches a data-based windowing strategy that computes windowing from percentiles of the recorded dataset: "When using the data-based strategy, at least one or more percentiles P1…Pn of the total recordings can be used to calculate an adapted windowing function f(P1,P2,…,Pn)." (Ziehm, from the IP.com English Translation, see page 5, paragraph 9 – page 7, paragraph 5). Percentiles are derived from the statistical distribution of voxel attenuation values (Hounsfield units or equivalent). This is a direct disclosure of analyzing the attenuation distribution across the volume (and therefore across a selected portion when subset selection is described).

Ziehm further discloses special treatment of sections containing foreign objects and the ability to select slice/section planes and regions: Ziehm states that "a special treatment of the sections in which one or more foreign objects are located can preferably take place," and describes selection of layers/sectional planes and MPR/3D views.
These passages support selection of "the selected at least one portion" and computing distributional statistics on that selected portion.

"Determine a first window level and a first window width for a first material type … to form a first window preset": Ziehm repeatedly discusses computing adapted windowing values and tailoring windowing to specific targets, for example, "the windowing can be adapted specifically to the representation of organs or the anatomy, preferably to the individual foreign object components that are of interest for a particular application." Ziehm also discloses that "based on the segmentation, … a subset of the voxels of the foreign objects, or a subset of the voxels of the individual foreign objects, or a subset of the voxels of the individual foreign object components, is taken into account so that windowing values are calculated, which are particularly suitable for viewing / displaying bone structures, soft tissues and/or one or more foreign objects / individual foreign objects / individual foreign object components." This teaching explicitly contemplates calculation of windowing values directed to particular material/tissue classes (bone, soft tissue, foreign object). The claimed "first window level/width for a first material type" is a routine and expected form of such "windowing values."

"Determine a second window level and a second window width for a second material type … to form a second window preset": Ziehm's disclosure of calculating windowing values for multiple classes (bone, soft tissue, foreign objects) and determining "different windowing settings for layer representations and volume rendering" inherently teaches computing more than one set of windowing parameters. Thus, Ziehm discloses deriving distinct window settings (i.e., multiple presets) corresponding to different material/tissue classes (the claimed "first" and "second" presets).
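The percentile-based derivation of per-material window presets discussed above can be sketched in a few lines. The material labels and percentile pairs below are illustrative assumptions for this sketch, not values taken from the reference:

```python
import numpy as np

def window_presets_from_distribution(hu_values, specs):
    """Derive window presets from the attenuation distribution of a
    selected volume portion, in the spirit of an adapted windowing
    function f(P1, ..., Pn) computed from percentiles.

    `specs` maps a material label to a (low, high) percentile pair.
    """
    presets = {}
    for material, (p_lo, p_hi) in specs.items():
        lo, hi = np.percentile(hu_values, [p_lo, p_hi])
        presets[material] = {
            "window_level": (lo + hi) / 2.0,  # center of the selected range
            "window_width": hi - lo,          # span of the selected range
        }
    return presets

# Synthetic HU distribution and hypothetical percentile choices
# for two material classes:
hu = np.random.default_rng(0).normal(40, 200, size=100_000)
presets = window_presets_from_distribution(
    hu, {"soft_tissue": (30, 70), "bone": (90, 99.5)}
)
```

Because bone occupies the upper tail of the attenuation distribution, its preset ends up with a higher window level than the soft-tissue preset, which is exactly the "different windowing values for different target classes" behavior at issue.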
The Examiner finds the Applicant's reading of the limitation to be narrow in scope and unpersuasive for the following reasons: The Applicant appears to require explicit claim-term language such as "determine window level/window width values WL and WW." However, Ziehm's explicit references to (i) data-based percentile analysis of voxel values, (ii) tailoring windowing to particular materials and objects, and (iii) selecting subsets of voxels (via segmentation or section selection) for window computation are functional equivalents that disclose the same technical subject matter as the claimed steps. Ziehm uses the terms "windowing," "windowing values," and an "adapted windowing function f(P1,P2,…,Pn)," which is the same concept as defining a window level and window width that map voxel attenuation to display grayscale. Thus, Ziehm discloses both the computational analysis of attenuation distributions and the outcome of that analysis: window parameter(s) tuned to specific material classes.

For the foregoing reasons, Ziehm discloses (i) determining the distribution of attenuation values (via percentile/statistical analysis) across a selected portion of a reconstructed 3D volume, (ii) deriving windowing parameters tailored to distinct material/tissue classes, and (iii) computing multiple window settings that may be applied for different renderings and materials. Therefore, Ziehm discloses the technical concepts recited by the Applicant's quoted limitation of "determine the distribution … to determine a first window level and width … and determine a second window level and width … to form … window presets."

Applicant further argues that Ziehm (DE 202019002619 U1) does not disclose "any first thickening algorithm or second thickening algorithm applied with the corresponding first window preset or the second window preset" as required by claim 10. Examiner respectfully disagrees.
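The equivalence drawn above, that a window level/width pair is the standard way to map voxel attenuation to display grayscale, can be made concrete. A minimal sketch of the linear windowing transform (the 8-bit output range and the example WL/WW values are assumptions for illustration):

```python
import numpy as np

def apply_window(hu, level, width, out_max=255):
    """Map attenuation values (HU) to display grayscale using a
    window level (center) and window width (span)."""
    lo = level - width / 2.0
    gray = (np.asarray(hu, dtype=float) - lo) / width * out_max
    return np.clip(gray, 0, out_max)  # values outside the window saturate

# Illustrative bone-style window, WL=300 / WW=1500: values below the
# window floor map to black, the window center maps to mid-gray, and
# values above the window ceiling saturate to white.
gray = apply_window([-2000, 300, 5000], level=300, width=1500)
```

Any "windowing values" in the percentile-based sense can be expressed in this WL/WW form, which is the point of the functional-equivalence argument.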
Ziehm does not use the term "thickening algorithm" explicitly, but describes applying different rendering/display techniques (volume rendering, transparency, adaptive highlighting) per object/class or viewing context, which is the functional equivalent of using different "thickening" or projection algorithms for different presets. Regarding the argument above, Ziehm expressly teaches (i) applying different rendering/representation pipelines, (ii) computing distinct windowing values for different representations and material classes, and (iii) compositing/visibility rules applied over voxel depth, each of which is the functional equivalent of applying different "thickening algorithms" together with corresponding window presets. Ziehm includes:

Data-based windowing derived from attenuation distribution: "When using the data-based strategy, at least one or more percentiles P1…Pn of the total recordings can be used to calculate an adapted windowing function f(P1,P2,…,Pn)." (Ziehm, from the IP.com English Translation, see page 5, paragraph 9 – page 7, paragraph 5). This teaches deriving window parameters (i.e., windowing) from voxel attenuation statistics.

Different windowing per representation: "It is also intended to determine different windowing settings for layer representations and volume rendering." (Ziehm, Description). This teaches pairing distinct window settings with distinct rendering modes (slice/MPR vs. volume compositing).

Multi-voxel compositing and depth-dependent visualization: Ziehm discloses "the option of transparently displaying one or more foreign objects … Transparency can vary from 0% (solid representation) to 100% (fade-out) … adapt this transparent representation in the volume representation of the current layer representation … the distance of all foreign objects … can be calculated in real time and the transparency … chosen the more distant these objects are from the selected layer." (Ziehm, Description).
These passages describe compositing/aggregation rules applied across voxel depth (i.e., thickening/slab/volume compositing behavior) in combination with windowing and object selection. Claim 10 requires (a) a first thickening algorithm applied with a first window preset and (b) a second thickening algorithm applied with a second window preset. Ziehm discloses (1) at least two different rendering/compositing contexts (layer/MPR vs. volume rendering), (2) computation of distinct window settings for those contexts and for different material classes (bone, soft tissue, foreign objects), and (3) depth-dependent compositing/visual rules (transparency, distance weighting). Thus Ziehm discloses pairing distinct compositing/thickening behavior with corresponding window settings, i.e., the claimed first and second thickening algorithms applied with corresponding first and second window presets.

Obviousness alternative

Even if "thickening algorithm" were construed narrowly to require specific named projection methods (e.g., MIP, average slab), it would be a routine design choice to select differing projection/compositing methods for different material targets in view of Ziehm's explicit teaching to (i) compute different windowing for different materials and (ii) apply different display behavior for layer vs. volume rendering. Selecting appropriate thickening/projection methods to emphasize different materials alongside corresponding window settings would have been obvious to one of ordinary skill in the art. For the reasons above, Ziehm discloses, and at minimum renders obvious, the claimed application of distinct thickening/compositing algorithms together with corresponding window presets as recited in claim 10. Accordingly, the Applicant's argument that Ziehm does not disclose these steps is not persuasive.
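For a concrete sense of what the named projection methods do, a slab of slices can be collapsed with different operators. The pairing of MIP with dense material and averaging with soft tissue is an illustrative assumption here, not a mapping taken from the references:

```python
import numpy as np

def thicken_slab(volume, axis=0, mode="mip"):
    """Collapse a slab of slices into one 2D image.

    "mip" takes the maximum intensity projection (tends to emphasize
    dense material such as bone or metal); "avg" averages the slab
    (a smoother rendering often used for soft tissue).
    """
    if mode == "mip":
        return volume.max(axis=axis)
    if mode == "avg":
        return volume.mean(axis=axis)
    raise ValueError(f"unknown thickening mode: {mode}")

# A tiny 2-slice, 2x2 slab of attenuation values:
slab = np.array([[[0.0, 10.0], [20.0, 30.0]],
                 [[40.0, 5.0], [10.0, 50.0]]])
mip = thicken_slab(slab, mode="mip")  # per-pixel maximum across slices
avg = thicken_slab(slab, mode="avg")  # per-pixel mean across slices
```

Each thickened image would then be displayed through its corresponding window preset, which is the pairing claim 10 recites.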
The anticipation rejection (or, alternatively, the obviousness combination) remains supported for the reasons set forth in the Office Action.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 2, 5, 8-17 and 20-21 are rejected under 35 U.S.C. 102 as being anticipated by ZIEHM IMAGING GMBH (DE 202019002619) (hereinafter referred to as ZIEHM).

Regarding claim 2, ZIEHM teaches a method for adjusting a presentation of an image presented on a display of a radiography imaging system (ZIEHM, from the IP.com English Translation, see page 6, paragraph 1, [The solution to the problem is…on a display device using windowing], Abstract [displaying a 3D volume…using windowing]), the method comprising the steps of: providing a radiography imaging system (ZIEHM, see Figs.
1 and 4 [C-arm or CT system with detector, source, computer, display, GUI], C-arm X-ray system 11, and from the IP.com English Translation, see page 8, paragraph 6, [The C-arm X-ray system 11…]) comprising: a radiation source (ZIEHM, see Fig. 1, C-arm X-ray system 11, X-ray generator 13, and from the IP.com English Translation, see page 8, paragraph 8, [The C-arm 18 carries an x-ray generator 13 at one end…]); a detector (ZIEHM, see Fig. 1, detector 12) alignable with the radiation source (ZIEHM, see Fig. 1, x-ray generator 13), the detector having a support (ZIEHM, see Fig. 1, C-arm 18) on or against which a subject to be imaged is adapted to be positioned (ZIEHM, see Fig. 1, C-arm 18 supports patient positioning); iii. a computing device (ZIEHM, see Fig. 1, computer 120, storage unit 121, reconstruction unit 122, control unit 123, image processing unit 124, network interface 125) operably connected to the detector (ZIEHM, see Fig. 1, detector 12) to generate image data in an imaging procedure performed by the imaging system, the computing device including a processor (ZIEHM, see Fig. 1, image processing unit 124) and an interconnected database containing machine-readable instructions for the operation of the processor and for processing the image data from the detector to create one or more 2D images of a subject (ZIEHM, see Fig. 1, and from the IP.com English Translation, see page 8, paragraphs 8-10); iv. a display operably connected to the computing device for presenting the one or more 2D images to a user (ZIEHM, see Fig. 1, display devices 17, and from the IP.com English Translation, see page 8, paragraphs 9-10); and v. a user interface operably connected to the computing device to enable user input to a control processing unit (ZIEHM, see Fig. 1, input unit 19, GUI 124, control unit 123, software and user interaction; and from the IP.com English Translation, see page 8, paragraphs 8-9); b.
positioning the subject between the radiation source and the detector (ZIEHM, see Fig. 1, C-arm 18 with positioning [standard workflow for patient positioning, implicit in Fig. 1]); c. operating the radiation source to generate a plurality of projection images of the subject (ZIEHM, see Fig. 1, C-arm 11 collects projections for 3D reconstruction, from the IP.com English Translation, see page 8, paragraphs 8-9); d. reconstructing a 3D volume from the plurality of projection images (ZIEHM, see Fig. 1, storage unit 121, reconstruction unit 122, Abstract, and from the IP.com English Translation, see page 8, paragraphs 9-10); e. determining a distribution of radiation attenuation values from at least one portion of the 3D volume (ZIEHM, from the IP.com English Translation, see page 5, paragraph 9 – page 7, paragraph 3 [“distribution of gray values”, “windowing…based on percentiles”, “HU value based strategy”]); and f. determining a window preset from the distribution of radiation attenuation values corresponding to each type of material represented in the distribution of radiation attenuation values (ZIEHM, from the IP.com English Translation, see page 7, paragraphs 2-5 [“windowing…adapted specifically to…organs…individual foreign object components”]),
wherein the processor includes a window generating module (ZIEHM, from the IP.com English Translation, see page 7, paragraph 2-4 [“windowing…subset of voxels…for bone, soft tissue, or foreign object”]), and wherein the window generating module is operable to: select the at least one portion of the 3D volume (ZIEHM, from the IP.com English Translation, see page 6, paragraph 10 – page 7, paragraph 5 [It is envisaged that a user will…out on the 3D view…[“windowing…subset of voxels…for bone, soft tissue, or foreign object”],...For better orientation of the user in the 3D volume…); determine the distribution of radiation attenuation values across the selected at least one portion of the 3D volume (ZIEHM, from the IP.com English Translation, see page 6, paragraph 10 – page 7, paragraph 5 [It is envisaged that a user will…out on the 3D view…[“distribution of gray values,” “percentiles…of the total recordings…or subset”],…For better orientation of the user in the 3D volume…); determine a first window level and a first window width for a first material type represented in the distribution of radiation attenuation values to form a first window preset (ZIEHM, from the IP.com English Translation, see page 6, paragraph 10 – page 7, paragraph 5 [It is envisaged that a user will…out on the 3D view…[“windowing values…for viewing bone structures, soft tissues and/or…foreign objects”],…For better orientation of the user in the 3D volume…); and determine a second window level and a second window width for a second material type represented in the distribution of radiation attenuation values to form a second window preset (ZIEHM, from the IP.com English Translation, see page 6, paragraph 10 – page 7, paragraph 5 [It is envisaged that a user will…out on the 3D view…[“windowing values…for…soft tissues and/or…foreign objects/components”],…For better orientation of the user in the 3D volume…). 
Regarding claim 5, ZIEHM teaches the method of claim 2, and teaches further comprising the step of presenting the window preset as a selectable icon on the display (ZIEHM, from the IP.com English Translation, see page 6, paragraphs 5-7, […It is intended to change in a known manner via a user interface (Graphical User Interface, GUI) by a user intervention…]).

Regarding claim 8, ZIEHM teaches the method of claim 2, and teaches further comprising the step of applying the first window preset as a default window preset for the one or more 2D images presented on the display (ZIEHM, from the IP.com English Translation, see page 5, paragraphs 6-7).

Regarding claim 9, ZIEHM teaches the method of claim 8, and teaches further comprising presenting the second window preset as a selectable icon on the display in association with the one or more 2D images (ZIEHM, from the IP.com English Translation, see page 5, paragraphs 6-7).

Regarding claim 10, ZIEHM teaches the method of claim 7, and teaches further comprising the steps of: a. applying a first thickening algorithm for the one or more 2D images with the first window preset (ZIEHM, from the IP.com English Translation, see page 5, paragraph 6 – page 7, paragraph 8); and b. applying a second thickening algorithm for the one or more 2D images with the second window preset (ZIEHM, from the IP.com English Translation, see page 5, paragraph 6 – page 7, paragraph 8).

Regarding claim 11, ZIEHM teaches the method of claim 10, and further teaches wherein the first window preset is a lung window preset and the first thickening algorithm is a maximum intensity projection thickening algorithm (ZIEHM, from the IP.com English Translation, see page 6, paragraphs 5-7, […It is intended to change in a known manner via a user interface (Graphical User Interface, GUI) by a user intervention…]; it is inherent that the sectional image could be a lung.)
Regarding claim 12, ZIEHM teaches the method of claim 6, and further teaches wherein the step of selecting the at least one portion of the 3D volume comprises selecting a central portion of the 3D volume (ZIEHM, from the IP.com English Translation, see page 6, paragraph 10 – page 7, paragraph 5 [It is envisaged that a user will…out on the 3D view…For better orientation of the user in the 3D volume…]).

Regarding claim 13, ZIEHM teaches the method of claim 9, and further teaches wherein the 3D volume includes a number of slices, and wherein the step of selecting the at least one portion of the 3D volume comprises selecting a central slice of the 3D volume (ZIEHM, from the IP.com English Translation, see page 6, paragraph 10 – page 7, paragraph 5 [It is envisaged that a user will…out on the 3D view…For better orientation of the user in the 3D volume…]).

Regarding claim 14, ZIEHM teaches the method of claim 1, and further teaches wherein the radiography imaging system is a C-arm radiography imaging system including a base and a C-arm movably connected to the base, the C-arm including the radiation source and the detector disposed thereon, and wherein the step of operating the radiation source to generate a plurality of projection images of the subject comprises moving the C-arm to position the radiation source and the detector at a number of angular positions relative to the subject (ZIEHM, see Fig. 1, from the IP.com English Translation, see page 8, paragraphs 5-9).

Regarding claim 15, ZIEHM teaches the method of claim 14, and further teaches wherein the C-arm radiography imaging system is a mobile C-arm radiography imaging system (ZIEHM, see Fig. 1, see wheels).

Regarding claim 21, ZIEHM teaches the method of claim 2, and further comprising the steps of presenting a 2D image of a portion of the 3D volume employing one of the first window preset (ZIEHM, see Figs.
1 and 4, from the IP.com English Translation, see page 6, paragraph 1 – page 8, paragraph 9, [The solution to the problem is…on a display device using windowing] [The system generates 2D images (e.g., sectional planes, MPRs, synthetic slices) from a 3D reconstructed volume. The system determines different window presets (window level/width) for different material types or regions within the 3D volume: “when calculating the windowing, a subset of the voxels…windowing values…for viewing/displaying bone structures, soft tissues and/or one or more foreign objects…”]) or the second window preset (ZIEHM, see Figs. 1 and 4, from the IP.com English Translation, see page 6, paragraph 1 – page 8, paragraph 9, [The solution to the problem is…on a display device using windowing] [The system applies the selected window preset to the 2D image of the relevant portion (see Description, “windowing can be adapted specifically to the representation of organs or the anatomy, preferably to the individual foreign object components that are of interest for a particular application”). The processed 2D images are presented on a display ([16], [17], [124]).]).

Claim 16 is rejected for the same reasons as claim 2 above. Claim 17 is rejected for the same reasons as claim 15 above. Claim 20 is rejected for the same reasons as claim 11 above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 3 is rejected under 35 U.S.C. 103 as being unpatentable over ZIEHM IMAGING GMBH (DE 202019002619) (hereinafter referred to as ZIEHM) in view of Lee et al., “Practical Window Optimization for Medical Image Deep Learning”, arXiv (Cornell University), 1 January 2018 (2018-01-01), XP093193635, DOI: 10.48550/arXiv.1812.00571 (hereinafter referred to as Lee).

Regarding claim 3, ZIEHM teaches the method of claim 2; however, ZIEHM does not specifically disclose wherein the window generating module is formed at least partially of an artificial intelligence. Lee teaches a method for practical window setting optimization for medical devices that teaches wherein the window generating module is formed at least partially of an artificial intelligence (Lee, Section 1, Introduction, teaches deep learning (artificial intelligence), and Section 2.2, Window setting optimization modules, teaches how the AI is used in window generation). These arts are analogous since they are both related to medical imaging devices. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention (AIA) to modify the invention of ZIEHM to include deep learning (AI) to perform the window optimization as seen in Lee, to increase diagnostic accuracy, streamline clinical workflows, and improve patient outcomes, as mentioned in the introduction of Lee.
Claim(s) 4, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over ZIEHM IMAGING GMBH (DE 202019002619) (hereinafter referred to as ZIEHM) in view of WESTERHOFF ET AL. (US 2021/0019933 A1) (hereinafter referred to as WESTERHOFF).

Regarding claim 4, ZIEHM teaches the method of claim 2; however, ZIEHM does not explicitly disclose wherein the window generating module is operable to include a first thickening algorithm for the one or more 2D images with the first window preset and a second thickening algorithm for the one or more 2D images with the second window preset. WESTERHOFF teaches a method for rules-based display of sets of images that teaches it is well known in the art to apply different rendering styles, which include wherein the window generating module (WESTERHOFF, paragraphs [0210]-[0237] {Render server and style rules assign rendering style, window/level, and slice thickness per image set or viewport.}) is operable to include a first thickening algorithm for the one or more 2D images with the first window preset (WESTERHOFF, paragraphs [0055], [0212], [0216], [0229], [0234], [0236] {Explicitly teaches assigning a thickening algorithm (e.g., MIP, slab, MPR) to a selected image set or viewport and assigning a window preset (window/level)}) and a second thickening algorithm for the one or more 2D images with the second window preset (WESTERHOFF, paragraphs [0210]-[0237] {Explicitly supports assigning different rendering styles (MIP, slab, MPR, etc.) and window/level settings to different image sets or viewports via rule-based style rules}). These arts are analogous since they are both related to medical imaging devices.
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the windowing and material-specific image presentation techniques of ZIEHM with the thickening algorithm and style rule approaches of WESTERHOFF, in order to further enhance the visualization of specific features or materials in reconstructed 2D and 3D images. The motivation for such a combination is provided by the desire to optimize the display of different anatomical structures or materials, as both references address improving clinical workflow and visualization flexibility. The use of thickening algorithms in conjunction with window presets for different image sets or regions is a well-established technique in the field, as evidenced by WESTERHOFF. Therefore, the combination of ZIEHM and WESTERHOFF renders claim 4 obvious.

Claim 18 is rejected for the same reasons as claim 4 above. Claim 19 is rejected for the same reasons as claim 4 above.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUPERVISORY PATENT EXAMINER TWYLER LAMB HASKINS whose telephone number is (571)272-7406. The SUPERVISORY PATENT EXAMINER can normally be reached Monday-Thursday 6:00 am - 4:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the SUPERVISORY PATENT EXAMINER by telephone are unsuccessful, the examiner’s Director, Gregory Toatley can be reached at 571-272-4650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TWYLER L HASKINS/Supervisory Patent Examiner, Art Unit 2639

Prosecution Timeline

Apr 17, 2023
Application Filed
May 16, 2025
Non-Final Rejection — §102, §103
Jul 08, 2025
Response Filed
Mar 05, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12462520
OUTSIDE ENVIRONMENT RECOGNITION DEVICE AND OUTSIDE ENVIRONMENT RECOGNITION METHOD
2y 5m to grant Granted Nov 04, 2025
Patent 12437550
Method for Counting Passengers of a Public Transportation System, Control Apparatus and Computer Program Product
2y 5m to grant Granted Oct 07, 2025
Patent 12412270
METHOD AND SYSTEM FOR DETERMINING PROGRESSION OF ATRIAL FIBRILLATION BASED ON HEMODYNAMIC METRICS
2y 5m to grant Granted Sep 09, 2025
Patent 12149811
CAMERA MODULE HAVING A SOLDERING PORTION COUPLING A DRIVING DEVICE AND A CIRCUIT BOARD
2y 5m to grant Granted Nov 19, 2024
Patent 10880462
MINIATURE VIDEO RECORDER
2y 5m to grant Granted Dec 29, 2020
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 42% (-16.0%)
Median Time to Grant: 2y 0m
PTA Risk: Moderate
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
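The headline projections above relate to each other by simple arithmetic. A sketch of the apparent derivation, not the tool's actual model:

```python
# Figures taken from the examiner statistics above.
granted, resolved = 21, 36
career_allow_rate = granted / resolved               # ~0.583, shown as "58%"
interview_lift = -0.16                               # the "-16.0%" interview lift
with_interview = career_allow_rate + interview_lift  # ~0.423, shown as "42%"
```

This matches the stated note that grant probability is derived directly from the examiner's career allow rate, with the interview figure obtained by applying the lift.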
