Prosecution Insights
Last updated: April 19, 2026
Application No. 18/098,009

SYSTEM AND METHOD FOR ROBUST TRACKING OF INDUSTRIAL OBJECTS ACROSS ENVIRONMENTS FROM SMALL SAMPLES IN SINGLE ENVIRONMENTS USING CHROMA-KEY AND OCCLUSION AUGMENTATIONS

Final Rejection — §102, §103
Filed: Jan 17, 2023
Examiner: SHERRILLO, DYLAN JOSEPH
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Genesee Valley Innovations LLC
OA Round: 2 (Final)
Grant Probability: 91% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 91% — above average (39 granted / 43 resolved; +28.7% vs TC avg)
Interview Lift: +11.8% on resolved cases with interview (moderate, ~+12% lift)
Avg Prosecution: 2y 11m typical timeline (14 currently pending)
Total Applications: 57 across all art units

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 46.9% (+6.9% vs TC avg)
§102: 42.3% (+2.3% vs TC avg)
§112: 2.5% (-37.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 43 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed relating to claims 1-20 have been fully considered but they are not persuasive. Regarding independent claims 1, 9 and 17-20, the applicant states that “The Chakraborty reference does not disclose all the limitations of independent claim 1. For example, Chakraborty does not disclose the limitations associated with capturing an image and then 'augmenting the image by background substitution,' as set forth in independent claim 1 and the related limitations of independent claims 9 and 17-20.” The examiner respectfully disagrees.

The examiner understands the main argument to revolve around the definition of substitution, as the applicant argues: “However, projecting backgrounds between 2D and 3D space as disclosed in Chakraborty is merely 'adjusting' or 'translating' a background. 'Substitution' involves completely replacing one thing with another… Substitution does not involve merely changing or adjusting a thing. Therefore, Chakraborty's disclosed adjusting or translating a background between 2D and 3D space merely involves amending an existing background. There is no replacement of the existing background in Chakraborty. In other words, the projecting of backgrounds between 2D and 3D space as described in Chakraborty does not involve any substitution of an initial background with a different background.” The examiner interprets “background substitution” as the replacement of the background of an image with something other than its previous image. The replacement of the original background with an altered version of that background would be a form of background substitution under the broadest reasonable interpretation. 
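The dispute above turns on whether "substitution" requires a background from a different source. For contrast, a literal chroma-key background substitution of the kind the application describes could be sketched as follows; this is a minimal illustration, and the function name, key color, and tolerance are assumptions rather than anything taken from either reference:

```python
import numpy as np

def substitute_background(image, new_background, key_rgb=(0, 255, 0), tol=60.0):
    """Replace pixels near a reference chroma key with a new background.

    A hard-mask sketch: pixels within `tol` (Euclidean RGB distance) of
    the key color are treated as background and replaced wholesale,
    i.e. the original background is fully swapped for a different one.
    """
    dist = np.linalg.norm(image.astype(float) - np.array(key_rgb, dtype=float),
                          axis=-1)
    mask = dist < tol            # True where the original background is detected
    out = image.copy()
    out[mask] = new_background[mask]
    return out
```

Under the examiner's broader construction, reprojecting or otherwise altering the existing background would also read on the claim, so the hard swap above is only one point in the construction's range.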
In the applicant’s invention, background substitution is described as an augmentation of the original image in paragraphs 6-8 and 13-16 of the application, revolving around continuous alpha blending, the introduction of random noise, and the image being cropped, translated, and resampled at multiple scales. A reprojection of the original background to a new projection could be interpreted as a new background applied to the image. Under the broadest reasonable interpretation, the claim language “augment the image by background substitution” can include any change applied to the background compared to the original image; it does not require that the background originate from a completely different source. Based on these interpretations of the claim language and what is described in the applicant’s specification, Chakraborty discloses the embodiments of the independent claims.

Status of Claims

Claims 1, 6-9, and 14-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chakraborty (US 10818028 B2). Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Chakraborty (US 10818028 B2) in view of Pettigrew (US 8743139 B2). Claims 3-5 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Chakraborty (US 10818028 B2) in view of Garcia-Peraza-Herrera (NPL: Image Compositing for Segmentation of Surgical Tools Without Manual Annotations).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 6-9 and 14-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chakraborty (US 10818028 B2).

Regarding Claim 1: Chakraborty teaches: A system for creating data sets for robust object and action detectors from a small number of images gathered in one environment, the system comprising (Col 1. Lines 25-36, “Examples are disclosed that relate to classifying objects using geometric context. In one example, a computing system is configured to train an object classifier. Monocular image data and ground-truth data are received for a scene. Geometric context is determined including a three-dimensional camera position relative to a fixed plane. Regions of interest (RoI) and a set of potential occluders are identified within the image data. For each potential occluder, an occlusion zone is projected onto the fixed plane in three-dimensions. A set of occluded RoIs on the fixed plane are generated for each occlusion zone. Each occluded RoI is projected back to the image data in two-dimensions.”): at least one processor (Col 15. Lines 8-11, “The logic machine may include one or more processors configured to execute software instructions. 
Additionally or alternatively, the logic machine may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions.”); and at least one memory having stored therein instructions, the memory and instructions being configured such that execution of the instructions by the processor causes the system to (Col 15. Lines 22-25, “Storage machine 1220 includes one or more physical devices configured to hold instructions executable by the logic machine to implement the methods and processes described herein.”): capture an image having at least one object therein, wherein the image has a background with at least one identifiable property (Col 10. Lines 17-22, “As an example, foreground, background, and occluded regions of interest may be sampled to generate a set of each type of RoI. Minimizing the loss function may then include modeling object position and scale consistency during per region loss computations for each of foreground, background and occluded regions of interest.”); augment the image by background substitution (Col 10. Lines 25-30, “At 755, training the classifier includes minimizing location errors of each region of interest and each occluded region of interest of the set of the fixed plane based on the ground-truth data. Minimizing location errors may include reprojecting each of foreground, background, and occluded regions of interest and ground-truth back onto the 3D space.”); apply a bounding box around the object (Col 10. Lines 62-64, “For example, each region of interest may be designated by drawing a bounding box on the monocular image data for each region of interest.”); and, introduce random foreground occlusions to the object (Col 6. Lines 29-33, “Plot 310 represents anchor placement for occluding objects. 
To determine where to place the anchors, in some examples, once an occluder is identified, an occlusion zone may be generated, and then multiple RoIs may be proposed based on the occlusion zone.”), whereby augmented image data is obtained (Col 5. Lines 47-55, “A sampling step may occur prior to performing the loss function. In some examples, this may include randomized stratified sampling. Subsets of foreground, background, & occluded regions are sampled for the loss function. After the loss function is minimized, the data may be fed back to RPN 235, which may include a class agnostic stage where it is determined whether or not the RoI includes an object, followed by a stage that includes class-specific bounding box regression.”).

Regarding Claim 6: Chakraborty teaches: The system of claim 1 where the foreground occlusions comprise rectangles of various sizes sampled within a ground truth bounding box of the object in ground truth labels (Col. 7 Lines 27-29).

Regarding Claim 7: Chakraborty teaches: The system of claim 1 wherein the foreground occlusions comprise a grid of objects including at least one of circles, squares or lines (Col 6. Lines 19-28 and Figures 3 and 6 show grids of objects).

Regarding Claim 8: Chakraborty teaches: The system as set forth in claim 1 wherein the memory and instructions are further configured such that execution of the instructions by the processor causes the system to train a detecting or tracking system based on the augmented image data (Col 12. Lines 1-5).

Regarding Claim 9: Chakraborty teaches: A method for creating robust object and action detectors from a small number of images gathered in one environment, the method comprising (Col 1. Lines 25-36, “Examples are disclosed that relate to classifying objects using geometric context. In one example, a computing system is configured to train an object classifier. Monocular image data and ground-truth data are received for a scene. 
Geometric context is determined including a three-dimensional camera position relative to a fixed plane. Regions of interest (RoI) and a set of potential occluders are identified within the image data. For each potential occluder, an occlusion zone is projected onto the fixed plane in three-dimensions. A set of occluded RoIs on the fixed plane are generated for each occlusion zone. Each occluded RoI is projected back to the image data in two-dimensions.”): capturing an image having at least one object therein, wherein the image has a background with at least one identifiable property (Col 10. Lines 17-22, “As an example, foreground, background, and occluded regions of interest may be sampled to generate a set of each type of RoI. Minimizing the loss function may then include modeling object position and scale consistency during per region loss computations for each of foreground, background and occluded regions of interest.”); augmenting the image by background substitution (Col 10. Lines 25-30, “At 755, training the classifier includes minimizing location errors of each region of interest and each occluded region of interest of the set of the fixed plane based on the ground-truth data. Minimizing location errors may include reprojecting each of foreground, background, and occluded regions of interest and ground-truth back onto the 3D space.”); applying a bounding box around the object (Col 10. Lines 62-64, “For example, each region of interest may be designated by drawing a bounding box on the monocular image data for each region of interest.”); and, introducing random foreground occlusions to the object (Col 6. Lines 29-33, “Plot 310 represents anchor placement for occluding objects. To determine where to place the anchors, in some examples, once an occluder is identified, an occlusion zone may be generated, and then multiple RoIs may be proposed based on the occlusion zone.”), whereby augmented image data is obtained (Col 5. 
Lines 47-55, “A sampling step may occur prior to performing the loss function. In some examples, this may include randomized stratified sampling. Subsets of foreground, background, & occluded regions are sampled for the loss function. After the loss function is minimized, the data may be fed back to RPN 235, which may include a class agnostic stage where it is determined whether or not the RoI includes an object, followed by a stage that includes class-specific bounding box regression.”).

Regarding Claim 14: Chakraborty teaches: The method of claim 9 wherein the foreground occlusions comprise rectangles of various sizes sampled within ground truth bounding boxes of objects (Col. 7 Lines 27-29).

Regarding Claim 15: Chakraborty teaches: The method of claim 9 wherein the foreground occlusions comprise a grid of objects including at least one of circles, squares or lines (Col 6. Lines 19-28 and Figures 3 and 6 show grids of objects).

Regarding Claim 16: Chakraborty teaches: The method as set forth in claim 9 further comprising training a detecting or tracking system based on the augmented image data (Col 12. Lines 1-5).

Regarding Claim 17: Chakraborty teaches: A system for robust object and action detectors from a small number of images gathered in one environment, the system comprising (Col 1. Lines 25-36, “Examples are disclosed that relate to classifying objects using geometric context. In one example, a computing system is configured to train an object classifier. Monocular image data and ground-truth data are received for a scene. Geometric context is determined including a three-dimensional camera position relative to a fixed plane. Regions of interest (RoI) and a set of potential occluders are identified within the image data. For each potential occluder, an occlusion zone is projected onto the fixed plane in three-dimensions. A set of occluded RoIs on the fixed plane are generated for each occlusion zone. 
Each occluded RoI is projected back to the image data in two-dimensions.”): at least one processor (Col 15. Lines 8-11, “The logic machine may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic machine may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions.”); and at least one memory having stored therein instructions, the memory and instructions being configured such that execution of the instructions by the processor causes the system to (Col 15. Lines 22-25, “Storage machine 1220 includes one or more physical devices configured to hold instructions executable by the logic machine to implement the methods and processes described herein.”): capture the environment (Col 10. Lines 17-22, “As an example, foreground, background, and occluded regions of interest may be sampled to generate a set of each type of RoI. Minimizing the loss function may then include modeling object position and scale consistency during per region loss computations for each of foreground, background and occluded regions of interest.”); and, detect objects in the environment based using a model trained on data augmented by background substitution and foreground occlusions (Col 17. Lines 24-36, “In any of the preceding examples, or any other example, the instructions may additionally or alternatively be executable to minimize the loss function by modeling object position and scale consistency during per region loss computations for each of foreground, background, and occluded regions of interest. In any of the preceding examples, or any other example, minimizing location errors may additionally or alternatively include reprojecting each of foreground, background, and occluded regions of interest and ground-truth back onto the 3D space. 
In any of the preceding examples, or any other example, the classifier may additionally or alternatively comprise one or more proposal-based deep neural networks.”).

Regarding Claim 18: Chakraborty teaches: A method for robust object and action detectors from a small number of images gathered in one environment, the method comprising: capturing the environment (Col 10. Lines 17-22, “As an example, foreground, background, and occluded regions of interest may be sampled to generate a set of each type of RoI. Minimizing the loss function may then include modeling object position and scale consistency during per region loss computations for each of foreground, background and occluded regions of interest.”); and, detecting objects in the environment based using a model trained on data augmented by background substitution and foreground occlusions (Col 17. Lines 24-36, “In any of the preceding examples, or any other example, the instructions may additionally or alternatively be executable to minimize the loss function by modeling object position and scale consistency during per region loss computations for each of foreground, background, and occluded regions of interest. In any of the preceding examples, or any other example, minimizing location errors may additionally or alternatively include reprojecting each of foreground, background, and occluded regions of interest and ground-truth back onto the 3D space. In any of the preceding examples, or any other example, the classifier may additionally or alternatively comprise one or more proposal-based deep neural networks.”).

Regarding Claim 19: Chakraborty teaches: A non-transitory computer readable medium having instructions stored thereon that, when executed, cause an apparatus to perform: loading an image having at least one object therein, wherein the image has a background with at least one identifiable property (Col 10. 
Lines 17-22, “As an example, foreground, background, and occluded regions of interest may be sampled to generate a set of each type of RoI. Minimizing the loss function may then include modeling object position and scale consistency during per region loss computations for each of foreground, background and occluded regions of interest.”); augmenting the image by background substitution (Col 10. Lines 25-30, “At 755, training the classifier includes minimizing location errors of each region of interest and each occluded region of interest of the set of the fixed plane based on the ground-truth data. Minimizing location errors may include reprojecting each of foreground, background, and occluded regions of interest and ground-truth back onto the 3D space.”); applying a bounding box around the object (Col 10. Lines 62-64, “For example, each region of interest may be designated by drawing a bounding box on the monocular image data for each region of interest.”); and, introducing random foreground occlusions to the object (Col 6. Lines 29-33, “Plot 310 represents anchor placement for occluding objects. To determine where to place the anchors, in some examples, once an occluder is identified, an occlusion zone may be generated, and then multiple RoIs may be proposed based on the occlusion zone.”), whereby augmented image training data is obtained, up to multiple times to augment the same image multiple times (Col 5. Lines 47-55, “A sampling step may occur prior to performing the loss function. In some examples, this may include randomized stratified sampling. Subsets of foreground, background, & occluded regions are sampled for the loss function. After the loss function is minimized, the data may be fed back to RPN 235, which may include a class agnostic stage where it is determined whether or not the RoI includes an object, followed by a stage that includes class-specific bounding box regression.”). 
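The "random foreground occlusions" limitation recited in the independent claims (and narrowed in claims 6 and 14 to rectangles of various sizes sampled within a ground-truth bounding box) could be implemented along these lines; the sampling scheme, fill value, and function name are assumptions for illustration, not details from either the application or Chakraborty:

```python
import numpy as np

def occlude_random_rect(image, bbox, rng, fill=0):
    """Paint one random occluding rectangle inside a ground-truth bbox.

    bbox = (x0, y0, x1, y1) in pixel coordinates. The rectangle's corner
    and extent are sampled uniformly within the box, so repeated calls
    yield occlusions of various sizes and positions over the object.
    """
    x0, y0, x1, y1 = bbox
    rx0 = rng.integers(x0, x1)            # top-left corner inside the box
    ry0 = rng.integers(y0, y1)
    rx1 = rng.integers(rx0 + 1, x1 + 1)   # bottom-right corner, still inside
    ry1 = rng.integers(ry0 + 1, y1 + 1)
    out = image.copy()
    out[ry0:ry1, rx0:rx1] = fill          # occlude, leaving the label unchanged
    return out
```

Applied repeatedly to the same source image (as claim 19 recites, "up to multiple times"), each call yields a differently occluded training sample while the ground-truth box label stays fixed.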
Regarding Claim 20: Chakraborty teaches: A non-transitory computer readable medium having instructions stored thereon that, when executed, cause an apparatus to perform: capturing an environment (Col 10. Lines 17-22, “As an example, foreground, background, and occluded regions of interest may be sampled to generate a set of each type of RoI. Minimizing the loss function may then include modeling object position and scale consistency during per region loss computations for each of foreground, background and occluded regions of interest.”); and, detecting objects in the environment based on a model trained on data augmented by background substitution and foreground occlusions (Col 17. Lines 24-36, “In any of the preceding examples, or any other example, the instructions may additionally or alternatively be executable to minimize the loss function by modeling object position and scale consistency during per region loss computations for each of foreground, background, and occluded regions of interest. In any of the preceding examples, or any other example, minimizing location errors may additionally or alternatively include reprojecting each of foreground, background, and occluded regions of interest and ground-truth back onto the 3D space. In any of the preceding examples, or any other example, the classifier may additionally or alternatively comprise one or more proposal-based deep neural networks.”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Chakraborty (US 10818028 B2) in view of Pettigrew (US 8743139 B2).

Regarding Claim 2: Chakraborty teaches the limitations of claim 1 as applied above. Chakraborty does not explicitly teach the following; however, in related art, Pettigrew teaches: wherein the background substitution comprises continuous alpha blending between the image and the substitution based on an angular distance in hue space between pixels in the background of a target image and a reference chroma value (Figures 38-41). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Pettigrew’s system of alpha blending between an image and the substituted background for image correction using a hue space and chroma value with Chakraborty’s system for identifying objects using image background substitution.

Regarding Claim 10: Chakraborty teaches the limitations of claim 9 as applied above. 
Chakraborty does not explicitly teach the following; however, in related art, Pettigrew teaches: wherein the background substitution comprises continuous alpha blending between the image and the substitution based on an angular distance in hue space between pixels in the background of a target image and a reference chroma value (Figures 38-41).

Claims 3-5 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Chakraborty (US 10818028 B2) in view of Garcia-Peraza-Herrera (NPL: Image Compositing for Segmentation of Surgical Tools Without Manual Annotations).

Regarding Claim 3: Chakraborty teaches the limitations of claim 1 as applied above. Chakraborty does not explicitly teach the following; however, in related art, Garcia-Peraza-Herrera teaches: wherein the background substitution comprises random noise at multiple scales (Garcia-Peraza-Herrera, Page 8, Section A. Data Augmentation, Paragraph 2, Uses noise such as multiplicative, Gaussian, and ISO). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Garcia-Peraza-Herrera's image segmentation and labeling system that utilizes random noise generation for training with Chakraborty’s system for identifying objects using background substitution.

Regarding Claim 4: Chakraborty teaches the limitations of claim 1 as applied above. Chakraborty does not explicitly teach the following; however, in related art, Garcia-Peraza-Herrera teaches: wherein the background substitution comprises natural images cropped, translated and resampled at multiple scales (Garcia-Peraza-Herrera, Page 8, Section A. Data Augmentation, Paragraph 2, Tools are used for augmentation where foreground/background, and blended images are randomly zoomed, rotated, flipped, and shifted).

Regarding Claim 5: Chakraborty teaches the limitations of claim 1 as applied above. 
Chakraborty does not explicitly teach the following; however, in related art, Garcia-Peraza-Herrera teaches: The system of claim 1 wherein the foreground occlusions comprise curtain occlusions that partially obscure left, right, top or bottom of objects to random degrees (Garcia-Peraza-Herrera, Page 8, Section A. Data Augmentation, Paragraph 2, Padding and image augmentations are random rotations of 90 degrees).

Regarding Claim 11: Chakraborty teaches the limitations of claim 9 as applied above. Chakraborty does not explicitly teach the following; however, in related art, Garcia-Peraza-Herrera teaches: The method of claim 9 wherein the background substitution comprises random noise at multiple scales (Garcia-Peraza-Herrera, Page 8, Section A. Data Augmentation, Paragraph 2, Uses noise such as multiplicative, Gaussian, and ISO).

Regarding Claim 12: Chakraborty teaches the limitations of claim 9 as applied above. Chakraborty does not explicitly teach the following; however, in related art, Garcia-Peraza-Herrera teaches: The method of claim 9 wherein the background substitution comprises natural images cropped, translated, and resampled at multiple scales (Garcia-Peraza-Herrera, Page 8, Section A. Data Augmentation, Paragraph 2, Tools are used for augmentation where foreground/background, and blended images are randomly zoomed, rotated, flipped, and shifted).

Regarding Claim 13: Chakraborty teaches the limitations of claim 9 as applied above. Chakraborty does not explicitly teach the following; however, in related art, Garcia-Peraza-Herrera teaches: The method of claim 9 wherein the foreground occlusions comprise curtain occlusions that partially obscure left, right, top or bottom of objects (Garcia-Peraza-Herrera, Page 8, Section A. Data Augmentation, Paragraph 2, Padding and image augmentations are random rotations of 90 degrees). 
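Among the §103-mapped limitations above, the continuous alpha blending of claims 2 and 10 — alpha driven by the angular distance in hue space between background pixels and a reference chroma value — can be sketched roughly as follows. The hue units, linear ramp, 60-degree falloff, and function names are illustrative assumptions, not details drawn from Pettigrew or the application:

```python
import numpy as np

def hue_alpha(hue, ref_hue, falloff=60.0):
    """Continuous alpha from the angular distance in hue space.

    `hue` and `ref_hue` are in degrees [0, 360). A pixel exactly at the
    reference chroma gets alpha 0 (fully replaced by the substitute
    background); alpha ramps linearly to 1 at `falloff` degrees away,
    so the blend is continuous rather than a hard key mask.
    """
    # shortest angular distance on the hue circle, in [0, 180]
    d = np.abs((np.asarray(hue) - ref_hue + 180.0) % 360.0 - 180.0)
    return np.clip(d / falloff, 0.0, 1.0)

def blend(image, substitute, alpha):
    """Per-pixel convex combination; alpha=1 keeps the original pixel."""
    a = np.asarray(alpha)[..., None]
    return a * image + (1.0 - a) * substitute
```

The wrap-around in `hue_alpha` matters because hue is circular: a pixel at 350° is only 20° from a reference at 10°, not 340°.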
Relevant Prior Art Directed to State of Art

YANG (GB 2611167 A)
Hinterstoisser (US 11741666 B2)
Chen (US 12190588 B2)

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN J SHERRILLO whose telephone number is (703)756-5605. The examiner can normally be reached 1st Week of Bi-week Monday - Thursday 10am - 7:30pm EST, 2nd Week of Bi-week Monday - Thursday 10am - 7:30pm EST, Friday 10am - 6:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen R Koziol, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.J.S./
Examiner, Art Unit 2665

/Stephen R Koziol/
Supervisory Patent Examiner, Art Unit 2665

Prosecution Timeline

Jan 17, 2023
Application Filed
Jul 25, 2025
Non-Final Rejection — §102, §103
Nov 29, 2025
Response Filed
Mar 04, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591907
SYSTEM AND METHOD TO DETECT A GAZE AT AN OBJECT BY UTILIZING AN IMAGE SENSOR
2y 5m to grant Granted Mar 31, 2026
Patent 12579798
IMAGE PROCESSING METHOD AND APPARATUS
2y 5m to grant Granted Mar 17, 2026
Patent 12567166
DEVICE FOR PROCESSING IMAGE AND OPERATING METHOD THEREOF
2y 5m to grant Granted Mar 03, 2026
Patent 12541825
MODEL TRAINING METHOD, IMAGE PROCESSING METHOD, COMPUTING AND PROCESSING DEVICE AND NON-TRANSIENT COMPUTER-READABLE MEDIUM
2y 5m to grant Granted Feb 03, 2026
Patent 12530826
CORRECTION OF ARTIFACTS OF TOMOGRAPHIC RECONSTRUCTIONS BY NEURON NETWORKS
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
91%
Grant Probability
99%
With Interview (+11.8%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 43 resolved cases by this examiner. Grant probability derived from career allow rate.
