Prosecution Insights
Last updated: April 19, 2026
Application No. 18/208,384

SEGMENTATION BASED VISUAL SIMULTANEOUS LOCALIZATION AND MAPPING (SLAM)

Final Rejection §103
Filed: Jun 12, 2023
Examiner: MUKUNDHAN, ROHAN TEJAS
Art Unit: 2663
Tech Center: 2600 — Communications
Assignee: NEC Corporation of America
OA Round: 2 (Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (9 granted / 9 resolved; +38.0% vs Tech Center average, above average)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Typical Timeline: 3y 2m average prosecution; 25 applications currently pending
Career History: 34 total applications across all art units

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§112: 22.7% (-17.3% vs TC avg)
Tech Center averages are estimates; examiner figures are based on career data from 9 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claim Objections: The claim amendments reflecting corrections to the informalities objected to in the prior office action have been noted.

Rejection under 35 U.S.C. § 101: In response to the amendment to claim 19, the rejection of claim 19 under 35 U.S.C. § 101 is withdrawn.

Rejection under 35 U.S.C. § 102(a)(1): In response to the amendment to independent claims 1, 10, and 19, the rejection of claims 1-2, 8, 10-11, 17, and 19 under 35 U.S.C. § 102(a)(1) is withdrawn. However, a new ground of rejection in light of the amendment is made below.

Rejection under 35 U.S.C. § 103: In response to the amendment to independent claims 1, 10, and 19, the rejection of claims 3 and 12 under 35 U.S.C. § 103 is withdrawn. However, a new ground of rejection in light of the amendment is made below.

Response to Arguments

Applicant’s arguments, filed 07 October 2025 with respect to the prior art rejections of claims 1-19, have all been considered. However, the arguments with respect to the application of the Lee reference are not persuasive. The following are Applicant’s points of contention regarding the Lee reference, and the Examiner’s responses.

Contention: The Lee reference fails to disclose generating pluralities of patches by applying segmentation on images. In addition to Lee col. 12 lines 49-62, cited in the prior office action, the Examiner directs Applicant’s attention to Lee col. 9 line 52 - col. 10 line 23. Here, Lee discloses extracting feature information from patches (pixel regions) of multiple images showing different views. The pixel information from each patch is then passed through a feature extractor in order to perform the downstream matching process using distances between feature vectors representative of the patches.
In order to extract feature information from pixels within a given patch region (of a plurality of generated patch regions), the patch region must necessarily be defined (segmented) from the remainder of the image. Thus, Lee does disclose generating patches through segmentation under the broadest reasonable interpretation of the language of claim 1. The Examiner notes that although Applicant seeks to draw a distinction between the method of the instant application and a semantic or structural segmentation, no such distinction is recited as a claim limitation within the language of any of the independent claims.

Contention: The Lee reference fails to disclose selecting a group of patches from each plurality of patches, according to parameters characterizing each patch. Lee col. 7 line 55 - col. 8 line 19 (and, for further clarity, col. 9 line 52 - col. 10 line 23) discloses, under the broadest reasonable interpretation, the selection of pixel blocks comprising the patch. Lee discloses wherein multiple patch regions are utilized within a stereo matching apparatus, and wherein the multiple patch regions are selected from a plurality of candidate patch regions according to characteristics including but not limited to intensity information, present features (borders, edges, corners), or patch colors. Lee in further view of Chen (the new ground of rejection of the independent claims, as necessitated by the amendment) discloses wherein the parameters comprise at least one of a boundary clarity score, a convexity score, or a size measure.

Contentions: The Lee reference fails to disclose generating sets of patches from different groups by geometric matching, and fails to disclose calculating a distance vector between pivotal points of patches. Regarding both of these limitations, Lee discloses both Euclidean and Manhattan distances as a geometric matching metric, as is required by the broadest reasonable interpretation of the claim.
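The matching flow the Examiner reads onto Lee (segment an image into pixel-block patches, extract a feature vector per patch, then match patches across views by Euclidean distance between feature vectors) can be sketched as follows. This is an illustrative toy, not Lee's disclosed implementation: the `patch_features` and `match_patches` helpers and the hand-rolled mean/std/gradient features are assumptions standing in for a learned feature extractor, with the 8x8 block size borrowed from Lee's pixel-block example.

```python
import numpy as np

def patch_features(image, size=8):
    """Split an image into non-overlapping size x size pixel blocks and
    summarize each block with a small feature vector (mean, std, mean
    gradient magnitude) -- a stand-in for a learned feature extractor."""
    h, w = image.shape
    feats, coords = [], []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            block = image[y:y + size, x:x + size].astype(float)
            gy, gx = np.gradient(block)
            feats.append([block.mean(), block.std(), np.hypot(gx, gy).mean()])
            coords.append((y, x))
    return np.array(feats), coords

def match_patches(feats_a, feats_b):
    """For each patch in A, pick the nearest patch in B by Euclidean
    distance between feature vectors (the geometric matching metric)."""
    d = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1)

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (32, 32))
img_b = np.roll(img_a, 8, axis=1)   # simulated second view: shifted copy
fa, _ = patch_features(img_a)
fb, _ = patch_features(img_b)
idx, dist = match_patches(fa, fb)
print(idx.shape, dist.max())        # every patch finds an exact counterpart
```

Because the second view is an exact block-aligned shift, every patch in the first image has a zero-distance match in the second; with real views the distances are merely small for corresponding patches.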
Furthermore, Lee discloses the use of multiple feature points within each of a plurality of image patches from multiple images being generated and utilized for image matching (col. 12 lines 20-54). Thus, Lee, under the broadest reasonable interpretation of the language of the claim limitation, discloses this feature of the claimed limitation.

Contention: The Lee reference fails to disclose generating an estimate of relative camera angles and distances change by applying statistical analysis on the distance vector. Applicant’s assertion that Lee’s neural network estimation of relative camera angles, using correspondence between two images based on a similarity metric between feature vectors, is not a statistical measure is unpersuasive. Determination of similarity from the explicitly statistical network of Lee (col. 6 lines 43-57) falls within the broadest reasonable interpretation of the claim language of the independent claims. Furthermore, in response to Applicant’s argument that the references fail to show certain features of the invention, it is noted that the features upon which Applicant relies (i.e., Lee’s output being depth maps or disparity-based pose estimation, not statistical correlation analysis of pivotal points) are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Applicant’s arguments directed to the rejections of claims 3 and 12 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Chen et al. (“Fingerprint Image Quality Analysis”, 2004 International Conference on Image Processing (ICIP ’04), Vol. 2, pp. 1253-1256, IEEE; hereafter “Chen”).
Applicant’s arguments directed to the prior art rejections of claims 4-5 and 13-14 in view of Datta have been fully considered. In response to Applicant’s arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Here, specifically, the Examiner agrees with Applicant that Datta’s disclosure is not directed specifically toward a segmentation-derived patch size used as a selection parameter for geometric matching in multi-view SLAM. However, the rationales for determination of obviousness laid down in KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007) include a rationale under which known work in one field of endeavor may prompt variations of it for use either in the same field or a different one, based on design incentives or other market forces, if the variations are predictable to the ordinarily skilled artisan. Although the disclosure of Datta is related to the separate field of quantification of image aesthetic quality through statistical analysis of image objects, there are clear commonalities between Lee and Datta which would render it obvious for the ordinarily skilled artisan to combine these references; specifically, both disclose: methods of image analysis in which image elements are vectorized, quantified, and used as features within a learning framework (col. 10, lines 50-64); extraction of image sub-regions (patches) from images and discerning image objects and feature points of image objects in patches (col. 5 lines 40-53); and region matching between similar/same image patches (col. 7 lines 33-49; image similarity/familiarity score, directed to image retrieval).
As a result, although not in the exact same field of endeavor (image matching for SLAM applications versus image aesthetic quality quantification), both recite similar learning frameworks for vectorizing image features present in patches for learning-based statistical analysis, including region similarity determination. Therefore, the ordinarily skilled artisan would have found it obvious that the disclosure of Datta with respect to the image size and convexity score could be applied to the method and system of Lee as known work in one field of endeavor that would prompt variation within the method of Lee; more specifically, the inclusion of more diverse features, such as patch size and convexity (as disclosed by Datta, and which would be known by one of ordinary skill in the art), would provide more identifying features and improved, less noisy performance of Lee’s method and system of image matching.

Claim Objections

Claims 1, 10, and 19 are objected to because of the following informalities: each of claims 1, 10, and 19 recites “wherein each patch of the first and the second plurality of segmented patches is characterized by segmentation-derived parameters including at least one of clarity score, convexity score, and size measure”. This is grammatically unclear, specifically regarding consideration of the “size measure” parameter.
Taken as written, this limitation can be interpreted in one of two ways:

(a) wherein each patch of the first and the second plurality of segmented patches is characterized by segmentation-derived parameters including at least one of clarity score, convexity score, OR size measure; or
(b) wherein each patch of the first and the second plurality of segmented patches is characterized by segmentation-derived parameters including: size measure; AND at least one of clarity score and convexity score.

Given the overall structure of the claim, the Examiner is interpreting this limitation as a list of alternatives for patch characteristics (interpretation (a)). Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: “A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.”

Claims 1-3, 8, 10-12, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US Pat. No. 11,132,809, hereafter “Lee”) in view of Chen et al.
(“Fingerprint Image Quality Analysis”, 2004 International Conference on Image Processing (ICIP ’04), Vol. 2, pp. 1253-1256, IEEE; hereafter “Chen”).

Regarding claim 1, Lee discloses a method for image matching, comprising:

receiving a plurality of images (col. 5 line 57 - col. 6 line 23, wherein the plurality of images consists of multiple stereo camera images, each comprising left-right image pairs);

generating a first plurality of segmented patches by applying segmentation on a first image from the plurality of images, and a second plurality of segmented patches by applying segmentation on a second image from the plurality of images, wherein each patch of the first and the second plurality of segmented patches is characterized by segmentation-derived parameters (col. 12, lines 49-62 for the application of segmentation, wherein FAST and SIFT are disclosed as methods for feature extraction; and col. 2 lines 12-22 and col. 7 line 28 - col. 8 line 19 for the patch characterization, wherein patch characteristics might be intrinsic features on a pixel or whole-patch scale; note in particular col. 7 lines 55-65 discussing pluralities of patches);

selecting a group of patches from each plurality of patches, according to the parameters characterizing each patch (col. 7 line 55 - col. 8 line 19, wherein the patches may be an 8x8 pixel block and wherein patch information, extracted from the reference pixel at the center of the patch, is used as a parameter within the overall method of Lee);

generating a plurality of sets, each set comprising at least two patches from at least two different groups of patches by applying a geometric matching between the parameters characterizing each patch (col.
8, lines 30-52, wherein the geometric matching between parameters is a Euclidean distance between feature vectors representing a reference pixel and a candidate/comparison pixel);

calculating a distance vector between a pivotal point of each of the patches in each of the plurality of sets (col. 3 lines 31-62, wherein the distances measured are between different pivotal points using a triplet neural network, and wherein the network works to minimize this distance through training to perform matching); and

generating an estimate of relative camera angles and distances change by applying a statistical analysis on the distance vector pertaining to each of the plurality of sets (col. 12 lines 23-34 and col. 12 line 63 - col. 13 line 52, wherein the analysis is the determination of the movement and pose transform parameters associated with the camera, which are calculated from the neural network-based matching process, wherein the neural network inputs are the feature vectors extracted from image patches taken from the images).

Lee does not disclose wherein the segmentation-derived parameters include at least one of a clarity score, convexity score, and size measure. However, Chen discloses examining a subset of an image to characterize it by a clarity score of a present boundary (pgs. 1254-1255, section 2, specifically regarding the quantization of 32x32 image subsets/blocks before classifying clarity based on calculated discernibility between ridges and valleys within fingerprint images; the Examiner notes that the clarity metric of Chen is based on determined gray-level thresholds and could reasonably be applied to general boundary clarity determination within image patches at large). Specifically, Chen discloses a method and system of determining the quality of a fingerprint image, including a metric for determining the clarity of boundaries in order to determine the discernibility of fingerprint ridges and valleys.
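Chen's block-wise clarity idea (examine a small image block and score how cleanly its gray levels separate into two classes) can be loosely sketched as below. This is an assumed analogue for general boundary clarity, not Chen's fingerprint-specific ridge/valley procedure; the `boundary_clarity` name and the mean-threshold split are illustrative choices.

```python
import numpy as np

def boundary_clarity(block):
    """Crude clarity score for an image block: split pixels into dark and
    bright classes at the mean gray level and measure how well separated
    the two class means are relative to the block's gray-level range.
    Returns roughly [0, 1]; higher means a crisper boundary."""
    b = block.astype(float)
    if b.std() == 0:
        return 0.0                      # flat block: no boundary at all
    t = b.mean()
    dark, bright = b[b <= t], b[b > t]
    if dark.size == 0 or bright.size == 0:
        return 0.0
    return float((bright.mean() - dark.mean()) / (np.ptp(b) + 1e-9))

sharp = np.zeros((32, 32)); sharp[:, 16:] = 255   # hard vertical edge
noisy = np.random.default_rng(1).uniform(0, 255, (32, 32))
print(boundary_clarity(sharp))   # close to 1: perfectly bimodal
print(boundary_clarity(noisy))   # lower: gray levels smeared across the range
```

A hard two-level edge scores near 1 because the two class means span the whole gray-level range; uniformly noisy content scores markedly lower, which is the behavior a boundary-clarity selection parameter needs.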
Regarding the rationale for combination, the Supreme Court in KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007) identified a number of rationales to support a conclusion of obviousness which are consistent with the proper “functional approach” to the determination of obviousness as laid down in Graham (see MPEP § 2143). Lee discloses an image matching method including a step for extracting feature information from patches (pixel regions) of multiple images showing different views. The pixel information from each patch is then passed through a feature extractor in order to perform the downstream matching process using distances between feature vectors representative of the patches (see Lee col. 9 line 52 - col. 10 line 23). Lee further discloses wherein feature point information within the patch used for image matching may include (but is not limited to) edge and corner regions, scale-invariant feature transform (SIFT)-obtained features, or features from accelerated segment test (FAST). Thus, Lee explicitly and concretely determines gray-level features which may further be statistically analyzed within a histogram to determine inter-image correspondence. Chen’s disclosure of a clarity score for boundaries as a feature would be obvious to integrate into this as the use of a known method within a known system ready for improvement to yield a predictable result (see MPEP § 2143, section I, subsection D). In pursuit of a more robust edge detection method and system, it would have been obvious to the ordinarily skilled artisan that the boundary clarity score metric disclosed by Chen would allow for the clear determination of an edge, corner, or, more broadly speaking, a gray-level “inflection point” which would serve as a valuable feature for image matching between two different views.

Claim 10 is rejected, mutatis mutandis, for reasons similar to claim 1. Lee further discloses a storage (col.
15 lines 29-41) and at least one processing circuitry (col. 15 lines 8-41).

Claim 19 is rejected, mutatis mutandis, for reasons similar to claim 1. Lee further discloses a software product comprising a non-transitory medium storing thereon computer program instructions for image matching (col. 15 lines 29-41) that are executed by one or more hardware processors of a computing system (col. 15 lines 8-41).

Regarding claims 2 and 11, Lee and Chen disclose all limitations of claims 1 and 10, respectively. Lee further discloses wherein the pivotal point is the centroid of an associated patch from the plurality of patches (col. 7 line 55 - col. 8 line 19, wherein the center of the patch is used to determine the representative features and feature vector of the patch).

Regarding claims 8 and 17, Lee and Chen disclose all limitations of claims 1 and 10, respectively. Lee further discloses wherein the method for image matching further comprises generating at least one set based on a localized feature descriptor (col. 12, lines 49-62 for the application of segmentation, wherein FAST and SIFT are disclosed as methods for feature extraction; and col. 2 lines 12-22 and col. 7 line 28 - col. 8 line 19 for the patch characterization, wherein patch characteristics might be intrinsic features on a pixel or whole-patch scale). Specifically, Lee discloses the use of FAST (features from accelerated segment test), a high-speed feature detection method wherein a strong corner point is determined to be used as a feature through the application of a circle of pixels. This corner would represent a distinct localized feature and would be used to generate a set of correspondence points according to the method of Lee.

Claims 4-5 and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Lee as modified above in view of Chen et al., and further in view of Datta et al. (US Pat. No. 8,755,596, hereafter referred to as “Datta”).
The Examiner directs Applicant to the “Response to Arguments” section of this office action for the step-by-step obviousness rationale and response to Applicant’s contentions.

Regarding claims 4 and 13, Lee and Chen disclose all limitations of claims 1 and 10, respectively. Lee and Chen do not disclose wherein the parameters comprise a size measure of each patch from the first plurality of patches. However, Datta discloses wherein the parameters comprise a size measure of each patch from the first plurality of patches (col. 8 line 55 - col. 9 line 34, wherein the size of each segmented patch is calculated relative to the image’s size to be used as a quantitative feature). Specifically, Datta discloses a quantification method for defining how aesthetic an image is, based on the quantification of key features and their usage within a learning framework to quantify how visually pleasing an image is. Therefore, both Lee modified by Chen and Datta disclose methods of image analysis in which image elements are vectorized, quantified, and used as features within a learning framework. Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to utilize the quantification of the size metric disclosed by Datta within the method and system of Lee modified by Chen as known work in one field of endeavor that would prompt variation within the method of Lee modified by Chen; more specifically, the inclusion of more diverse features, such as patch size and convexity (as disclosed by Datta, and which would be known by one of ordinary skill in the art), would provide more identifying features and improved, less noisy performance of the image matching method and system of Lee modified by Chen.

Regarding claims 5 and 14, Lee and Chen disclose all limitations of claims 1 and 10, respectively.
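The size measure at issue in claims 4 and 13 (each segmented patch's area relative to the whole image, used as a scalar selection feature) reduces to a few lines. The `relative_patch_sizes` helper and the integer label-map representation of a segmentation are assumptions for illustration, not Datta's implementation.

```python
import numpy as np

def relative_patch_sizes(labels):
    """Given an integer segmentation map (one label per patch), return
    each patch's pixel count as a fraction of the image area -- a
    per-patch size feature usable as a patch-selection parameter."""
    ids, counts = np.unique(labels, return_counts=True)
    return {int(i): float(c) / labels.size for i, c in zip(ids, counts)}

seg = np.zeros((4, 4), dtype=int)
seg[:2, :] = 1                       # patch 1 covers the top half
print(relative_patch_sizes(seg))     # {0: 0.5, 1: 0.5}
```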
Lee and Chen do not disclose wherein the parameters comprise a convexity score of each patch from the first plurality of patches. However, Datta discloses wherein the parameters comprise a convexity score of each patch from the first plurality of patches (col. 10, lines 7-49, wherein Datta discloses a metric for quantitative convexity, which can be employed as a feature for computational feature definition and selection). Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to utilize the convexity metric disclosed by Datta within the method and system of Lee modified by Chen according to the rationale of claim 4.

Claims 6, 9, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Chen and in further view of Subramaniam et al. (“NCC-Net: Normalized Cross Correlation Based Deep Matcher with Robustness to Illumination Variations”, IEEE WACV, March 2018, hereafter referred to as “Subramaniam”).

Regarding claims 6 and 15, Lee and Chen disclose all limitations of claims 1 and 10, respectively. Lee and Chen do not disclose wherein the statistical analysis comprises cross correlation of pivotal point locations. However, Subramaniam discloses statistical analysis comprising cross correlation of pivotal points (pgs. 1945-1946, sections 2.1 and 2.2, wherein normalized cross-correlation is used for analysis of the corresponding feature maps). Specifically, Subramaniam discloses normalized cross-correlation as a matching technique to be used with convolutional neural networks as a high-accuracy, robust patch matching algorithm for image correspondence and reconstruction. Therefore, both Lee modified by Chen and Subramaniam disclose image matching methods which rely on vectorization, comparison, and learning-mediated matching of image patches.
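Normalized cross-correlation, the metric the Subramaniam reference builds its matcher around, is a zero-mean, unit-norm dot product between two patches, which cancels gain and offset changes in illumination. A minimal sketch (the `ncc` helper is an illustrative scalar version, not NCC-Net's learned matching layer):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches:
    subtract each patch's mean, then take the unit-norm dot product.
    Result lies in [-1, 1]; 1 means a match up to gain and offset."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(2)
p = rng.uniform(0, 255, (8, 8))
brighter = p * 1.5 + 20              # same content, different illumination
other = rng.uniform(0, 255, (8, 8))
print(ncc(p, brighter))              # ~1.0: illumination change cancels out
print(ncc(p, other))                 # much lower for unrelated content
```

The illumination-invariance shown here is exactly why a cross-correlation score is attractive as a robust patch-matching statistic.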
Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have used the normalized cross-correlation method of patch matching disclosed by Subramaniam within the method of Lee modified by Chen, as the use of Subramaniam’s known technique to improve the similar method and device of Lee modified by Chen by implementing a more robust patch-matching mechanism and metric to mitigate false negative correspondence evaluations.

Regarding claims 9 and 18, Lee and Chen disclose all limitations of claims 1 and 10, respectively. Lee further discloses maintaining a map related to relative overlap (col. 9, lines 25-37, specifically with respect to the disparity map, created wherein differences between corresponding pixels are registered with regard to their overlap with respect to intrinsic camera parameters). Lee and Chen do not disclose computing a probability that the second image and the first image depict a same scene in physical space according to the parameters. However, Subramaniam discloses computing a probability that the second image and the first image depict a same scene in physical space according to the parameters (Abstract: “we propose to improve the two basic architectures, Siamese networks and Central-Surround stream networks, using robust matching layers for learning the similarities of patches, assisted by a simple cross-entropy loss function”). Specifically, Subramaniam discloses a learning-based (CNN), cross-correlation-employing image matching algorithm whose output layer is a softmax function which determines the probability of the images matching or not matching. Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have implemented the disclosed probability calculation method of Subramaniam within the method and system of Lee and Chen according to the rationale of claim 6.
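The match-probability output described for claims 9 and 18 amounts to a two-way softmax over match and non-match scores. A sketch of that final step (the `match_probability` helper and example logit values are illustrative assumptions, not the reference's trained network):

```python
import numpy as np

def match_probability(score_match, score_nonmatch):
    """Two-way softmax over match / non-match logits; returns P(match).
    Subtracting the max first keeps the exponentials numerically stable."""
    z = np.array([score_match, score_nonmatch], dtype=float)
    e = np.exp(z - z.max())
    return float(e[0] / e.sum())

print(match_probability(2.0, -1.0))   # ~0.95: strong evidence of a match
print(match_probability(0.0, 0.0))    # 0.5: no evidence either way
```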
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Lee in view of Chen and in further view of Cheda et al. (“Monocular Egomotion Estimation Based on Image Matching”, International Conference on Pattern Recognition Applications and Methods, Vol. 2, SCITEPRESS, 2012, hereafter referred to as “Cheda”).

Regarding claims 7 and 16, Lee and Chen disclose all limitations of claims 1 and 10, respectively. Lee and Chen do not disclose wherein the plurality of images comprising at least three images and the statistical analysis comprising estimating a movement path by at least one of the plurality of sets. However, Cheda discloses this limitation (pgs. 426-430, wherein the images collected are an image stream from a video camera, the algorithm uses detection and segmentation of distant regions for image detection, and pose estimation and translation are integrated for path estimation). Specifically, Cheda discloses a method of egomotion estimation consisting of matching consecutive images of an image stream to determine pose differences to calculate motion. Therefore, both Lee modified by Chen and Cheda disclose image matching methods with applications in localization and mapping. Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have integrated the path estimation method of Cheda within the disclosure of Lee modified by Chen, as a teaching in the prior art that would have led one of ordinary skill to modify the method and system of Lee modified by Chen to achieve the predictable result of a localization and mapping system.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROHAN TEJAS MUKUNDHAN, whose telephone number is (571) 272-2368. The examiner can normally be reached Monday - Friday, 9 AM - 6 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROHAN TEJAS MUKUNDHAN/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698

Prosecution Timeline

Jun 12, 2023 — Application Filed
Jul 09, 2025 — Non-Final Rejection (§103)
Oct 07, 2025 — Response Filed
Jan 23, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602740: UNSUPERVISED LEARNING-BASED SCALE-INDEPENDENT BLUR KERNEL ESTIMATION FOR SUPER-RESOLUTION (2y 5m to grant; granted Apr 14, 2026)
Patent 12593827: MONITORING SYSTEM FOR INDIVIDUAL GROWTH MONITORING OF LIVESTOCK ANIMALS (2y 5m to grant; granted Apr 07, 2026)
Patent 12586384: Method and Device for Camera-Based Determination of a Distance of a Moving Object in the Surroundings of a Motor Vehicle (2y 5m to grant; granted Mar 24, 2026)
Patent 12585252: METHOD FOR AUTOMATICALLY ADJUSTING MANUFACTURING LIMITS PRESCRIBED ON AN ASSEMBLY LINE (2y 5m to grant; granted Mar 24, 2026)
Patent 12548294: DETERMINING A DEGREE OF REALISM OF AN ARTIFICIALLY GENERATED VISUAL CONTENT (2y 5m to grant; granted Feb 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 100%
With Interview: 99% (+0.0%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
