Prosecution Insights
Last updated: April 19, 2026
Application No. 18/222,276

METHOD FOR GENERATING LAND-COVER MAPS

Status: Non-Final OA (§103)
Filed: Jul 14, 2023
Examiner: MCCULLEY, RYAN D
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Leica Geosystems AG
OA Round: 3 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Est. Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (344 granted / 493 resolved; +7.8% vs TC avg; above average)
Interview Lift: +29.7% (allow rate in resolved cases with an interview vs. without)
Avg Prosecution: 2y 6m (typical timeline); 31 applications currently pending
Total Applications: 524 across all art units (career history)
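
The headline figures above follow from simple arithmetic on the career counts. A quick sketch, assuming the displayed grant probability is just the career allow rate and the interview lift is additive (both are assumptions about how the dashboard computes its numbers):

```python
granted, resolved = 344, 493            # "344 granted / 493 resolved"
allow_rate = granted / resolved         # 0.6978 -> displayed as 70%
interview_lift = 0.297                  # "+29.7% Interview Lift"
tc_avg_allow = allow_rate - 0.078       # "+7.8% vs TC avg" -> TC average ~62%

print(f"Career allow rate: {allow_rate:.1%}")                    # 69.8%
print(f"With interview:    {allow_rate + interview_lift:.1%}")   # ~99.5%, shown as 99%
print(f"Implied TC avg:    {tc_avg_allow:.1%}")                  # ~62.0%
```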

Statute-Specific Performance

§101: 7.2%  (-32.8% vs TC avg)
§103: 51.6% (+11.6% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 493 resolved cases.
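
The per-statute deltas imply a common Tech Center baseline. Assuming each delta is a simple difference from the Tech Center average (an assumption about the dashboard's arithmetic), the implied baseline works out to roughly 40% for every statute, consistent with the single average line on the original chart:

```python
# (examiner rate %, delta vs TC avg %) copied from the table above
rates = {"§101": (7.2, -32.8), "§103": (51.6, +11.6),
         "§102": (15.9, -24.1), "§112": (15.9, -24.1)}

for statute, (examiner_pct, delta_pct) in rates.items():
    tc_avg = examiner_pct - delta_pct   # implied Tech Center average
    print(f"{statute}: examiner {examiner_pct}% vs TC avg ~{tc_avg:.1f}%")
```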

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09 December 2025 has been entered.

Response to Arguments

Applicant's arguments filed 09 December 2025 have been fully considered but they are not persuasive.

Applicant argues “cited paragraph 79 [of Gu] does not concern a semantic segmentation at all … The alleged motivation found in paragraph 92 does therefore not apply to the teachings of paragraph 79” (Remarks, pg. 8). The Examiner respectfully disagrees. In Gu, localization and semantic segmentation are closely linked and carried out together. For example, while para. 79 of Gu describes the visualization of the localization results, it also describes visualization of semantic segmentation results: “colors the grid based on an integrated score, which is defined as the element-wise product of confidence and class probability obtained from two prediction maps of the model” (para. 79). Gu earlier recites “the algorithm will predict two maps … The second map is the distribution of class probability” (paras. 77-78). Semantic segmentation is the separation of an image into different regions based on a classification of the regions. “Class probability” means the likelihood that a certain region belongs to a particular semantic class. Para. 79 of Gu describes the colorization of an image based on a class probability, which means that it is referring to semantic segmentation. Therefore, the “element-wise product of confidence and class probability” described in para. 79 of Gu involves semantic segmentation, and the motivation found in para. 92 of Gu is applicable, notwithstanding the fact that a coarse bounding-box localization facilitates this calculation.

Applicant argues “Gu does not disclose or suggest that its confidence and class probability is applied on pixel-level. Instead, it is suggested that the confidence and class probability is applied on a bounding-box-level … The disclosed method in Gu therefore operates on an entirely different resolution level than the claimed method” (Remarks, pg. 8). The Examiner respectfully disagrees. As a first matter, Gu does disclose pixel-wise semantic segmentation: “The basic idea of semantic segmentation involves … generate a pixel-wise prediction” (para. 80) and “In semantic segmentation, the aim is to understand the image in pixel-wise level and segment the areas for different categories” (para. 75). Therefore, the “class probability” of para. 79 of Gu likely refers to a pixel-wise semantic classification probability.

As a second matter, Applicant's argument does not correspond to the actual scope of the claim. Claim 1 recites “identifying … a set of single-image probability values of one or more of the semantic classes for at least a subset of the image pixels” and “assigning to at least a subset of pixels of the one or more land-cover maps one or more overall probability values.” The size of the claimed “set” of single-image probability values is unspecified, and the size of the “subset” of pixels is unspecified. These claim limitations do not recite or require an individual probability or confidence value to be calculated for each pixel. Instead, they merely require some unspecified number of probability values to be calculated for a subset of pixels of unspecified size, which could include calculating a single probability value for each region of an image. Therefore, even if para. 79 of Gu is considered to recite calculating probability and confidence values for regions or bounding boxes in an image as opposed to pixels, this teaches the scope of the recited claim limitations.

As a third matter, even if the claim is narrowed to specify that a single-image probability value and a confidence value is determined for each pixel of an image, and the product of the two is calculated for each pixel of an image, this would likely be rendered obvious by the combination of references. This is because the primary reference Rong teaches calculating single-image probability values for each pixel of an image, and Gu teaches that a product of a probability value and a confidence value is superior to a probability value alone. When the teaching of Gu (that it is beneficial to use a probability value multiplied by a confidence value instead of using only a probability value) is applied to the per-pixel probability values of Rong, the combination would teach per-pixel products of probability values and confidence values.

Applicant argues “the localization method is seen as a different method and further teaches that it is inferior to the semantic segmentation … No apparent motivation can be found in Gu to apply teachings relating to the localization method to the semantic segmentation” (Remarks, pg. 9). The Examiner respectfully disagrees. As discussed above, the localization and semantic segmentation of Gu are intimately linked together. Applicant's characterization of “apply teachings relating to the localization method to the semantic segmentation” in Gu is not accurate because it is all one method that includes coarse localization followed by finer classification. At the end of these steps, each pixel of Gu is semantically classified, and therefore the entirety of Gu can be considered a semantic segmentation method. It would be more accurate to say that Gu teaches that a probability value multiplied by a confidence value is better than a probability value alone, which can be applied to the probability values of Rong, and this is motivated by para. 92 of Gu, which describes the accuracy of the method.

Any remaining arguments are considered moot based on the foregoing.
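
As an illustration of the limitation in dispute, a minimal numpy sketch of the “element-wise product of confidence and class probability” described in para. 79 of Gu might look as follows; the array names and shapes are assumptions for illustration only, not code from Gu or the application.

```python
import numpy as np

# Hypothetical per-cell prediction maps: H x W grid, C semantic classes.
H, W, C = 4, 4, 3
rng = np.random.default_rng(0)

class_prob = rng.dirichlet(np.ones(C), size=(H, W))  # per-cell class probabilities
confidence = rng.uniform(size=(H, W))                # per-cell confidence map

# "element-wise product of confidence and class probability" (Gu, para. 79):
integrated_score = confidence[..., None] * class_prob  # shape (H, W, C)
labels = integrated_score.argmax(axis=-1)              # most likely class per cell

# The same arithmetic applies whether each grid cell is a single pixel or a
# coarse bounding-box region -- only H and W change -- which is the
# resolution point argued above.
```
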
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 5, 8, 9, 11-14, 16, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Rong et al. (“3D Semantic Labeling of Photogrammetry Meshes Based on Active Learning;” hereinafter “Rong”) in view of Zhu et al. (CN 114494576 A; hereinafter “Zhu”; machine translation provided for citations), and further in view of Gu et al. (US 2022/0281177; hereinafter “Gu”).

Regarding claim 1, Rong discloses A computer-implemented method for generating one or more land-cover maps of an area (“generate large scale city models from massive aerial images,” pg. 3550, sec. I, para. 1) using semantic segmentation (“image segmentation,” pg. 3552, sec. B, para. 1; “semantic classes,” pg. 3555, col. 1, para. 1), the method comprising, in a computer system, receiving a plurality of digital input images, each input image imaging at least a part of the area and comprising a multitude of image pixels, each input image being captured by one of a plurality of cameras from a known position and with a known orientation relative to a common coordinate system (“Taking as input … the captured images with their calibrated intrinsic and extrinsic parameters,” pg. 3552, sec. A, para. 1); performing semantic segmentation in the input images, segmenting each image individually and with a plurality of semantic classes (“2D image segmentation,” pg. 3552, sec. B, para. 1), each semantic class being related to a land-cover class from a set of land-cover classes (“semantic classes on these two datasets: road, vegetable, building,” pg. 3555, col. 1, para. 1); and identifying, in each of the segmented images and based on the semantic segmentation, a set of single-image probability values of one or more of the semantic classes for at least a subset of the image pixels of the respective segmented image (“output the probability that each pixel corresponds to each label, next called probability map,” pg. 3552, sec. B, para. 2), generating a 3D mesh of the area based on the plurality of digital input images (“a 3D mesh model computed from image based 3D reconstruction system,” pg. 3552, sec. A, para. 1) using a structure-from-motion algorithm (“open-source [4]–[7] 3D reconstruction and photogrammetry softwares,” pg. 3550, sec. I, para. 1; reference [4] in the bibliography is titled “Structure-from-motion revisited”); projecting the sets of single-image probability values of each segmented image on vertices of the 3D mesh (“given a set of calibrated cameras, we can easily calculate the correspondence between the pixels of the images and the facets of the mesh model by ray intersection,” pg. 3553, sec. 1, para. 1); weighting the sets of single-image probability values of each segmented image; determining a set of overall probability values of one or more of the semantic classes using the weighted sets of single-image probability values (“the simplest weighted-average method will be utilized to unify the per-pixel class scores,” pg. 3553, sec. C, para. 2); and assigning to at least a subset of pixels of the one or more land-cover maps one or more overall probability values of the set of overall probability values (“After optimization, we can acquire a semantic 3D mesh model in which each facet has a semantic label with its confidence, a value from 0 to 1 representing the reliability of the given label. All these confidences yield a heat model,” pg. 3553, sec. 4, para. 1; Fig. 1, right column, illustrates a segmented land-cover map and associated probability values).

Rong does not specifically disclose weighting values based on an angle between the 3D mesh and the known orientation of the camera by which the respective input image has been captured. In the same art of 3D reconstruction, Zhu discloses mapping data from a plurality of input viewpoints to a mesh (“each input image is mapped to the predicted face surface according to the camera parameters,” para. 20), and combining the values from individual viewpoints by weighting the values based on an angle between the 3D mesh and the known orientation of the camera by which the respective input image has been captured (“the weight of each pixel of each image contributing to the texture is calculated … the value of the visible area is the cosine value of the angle between the normal of the model vertex corresponding to the point and the direction from the point to the center of the camera,” para. 20). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Zhu to Rong. The motivation would have been to increase accuracy by weighting viewpoints having higher quality, frontal views more heavily.

The combination of Rong and Zhu does not disclose wherein the weighting comprises: assigning a confidence value to each set of single-image probability values, and wherein the weighted set of single-image probability values is calculated by multiplying the respective set of single-image probability values and the confidence value. In the same art of semantic segmentation, Gu teaches assigning a confidence value to each set of single-image probability values, and … multiplying the respective set of single-image probability values and the confidence value (“In semantic segmentation, the aim is to understand the image in pixel-wise level and segment the areas for different categories,” para. 75; “colors the grid based on an integrated score, which is defined as the element-wise product of confidence and class probability,” para. 79). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the confidence and probability multiplication teachings of Gu to the weighted set of single-image probability values calculation of Rong and Zhu. The motivation would have been “the semantic segmentation model is able to detect the category correctly and accurately at the pixel level” (Gu, para. 92).
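
Read together, the claim 1 combination the Examiner describes amounts to a cosine-weighted, confidence-scaled average of per-view class probabilities. A hedged sketch for a single mesh vertex follows; the function name, normalization, and array layout are illustrative assumptions, not code from Rong, Zhu, or Gu.

```python
import numpy as np

def fuse_views(probs, normal, cam_dirs, confidences):
    """Illustrative fusion of per-view class probabilities at one mesh vertex.

    probs:       (V, C) single-image class probabilities projected onto the vertex (Rong)
    normal:      (3,)   unit surface normal at the vertex
    cam_dirs:    (V, 3) unit vectors from the vertex toward each camera center
    confidences: (V,)   per-view confidence values (Gu)
    """
    # Zhu: weight each view by the cosine of the angle between the surface
    # normal and the direction to the camera, so frontal views count more.
    cos_w = np.clip(cam_dirs @ normal, 0.0, None)            # (V,)
    # Gu: multiply each view's probabilities by its confidence value.
    weighted = confidences[:, None] * probs                  # (V, C)
    # Rong: simple weighted average to unify the per-view class scores.
    return (cos_w[:, None] * weighted).sum(axis=0) / (cos_w.sum() + 1e-9)
```

Taking the argmax of the fused vector would then give the per-vertex label, in the spirit of Rong's “assign a most likely category lf to each facet.”
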
Regarding claim 2, the combination of Rong, Zhu, and Gu renders obvious assigning a graphical indicator to each land-cover class of at least a subset of the land-cover classes; and displaying the one or more land-cover maps with the assigned graphical indicators on a screen (Rong, Fig. 1).

Regarding claim 4, the combination of Rong, Zhu, and Gu renders obvious wherein the one or more land-cover maps comprise at least a combined land-cover map showing the most probable land-cover class for every pixel of the map; and/or one or more per-class land-cover maps showing the probability of one land-cover class for every pixel of the map (Rong, Fig. 1).

Regarding claim 5, the combination of Rong, Zhu, and Gu renders obvious wherein the one or more land-cover maps comprise at least one 2D land-cover map that is generated based on the 3D mesh (Fig. 1(d) of Rong illustrates a 2D land-cover map generated from the disclosed 3D semantic mesh).

Regarding claim 8, the combination of Rong, Zhu, and Gu renders obvious wherein the one or more land-cover maps comprise at least one 3D model of the area that is generated based on the 3D mesh (“After optimization, we can acquire a semantic 3D mesh model in which each facet has a semantic label with its confidence, a value from 0 to 1 representing the reliability of the given label,” Rong, pg. 3553, sec. 4, para. 1).

Regarding claim 9, the combination of Rong, Zhu, and Gu renders obvious receiving an orthoimage of the area, wherein the pixels of the land-cover map correspond at least to a subset of the pixels of the orthoimage; and/or the plurality of cameras is selected based on the orthoimage (“the orthoimage of the semantic model,” Rong, pg. 3556, sec. C, para. 2).

Regarding claim 11, the combination of Rong, Zhu, and Gu renders obvious wherein the method comprises receiving depth information and using the depth information for generating the 3D mesh (“open-source [4]–[7] 3D reconstruction and photogrammetry softwares,” Rong, pg. 3550, sec. I, para. 1; reference [4] in the bibliography of Rong is titled “Structure-from-motion revisited”; “photogrammetry” and “structure from motion” both generate depth data from feature correspondences of overlapping images).

Regarding claim 12, the combination of Rong, Zhu, and Gu renders obvious wherein the semantic segmentation in the input images is performed using artificial intelligence and a trained neural network (“a semantic segmentation network … active learning,” Rong, pg. 3552, sec. B, para. 1; “CNN-based segmentation,” Rong, pg. 3553, sec. 1, para. 1).

Regarding claim 13, the combination of Rong, Zhu, and Gu renders obvious wherein the weighting comprises weighting probabilities of a set of single-image probability values the higher, the more acute the angle of an image axis of the input image of the respective set of single-image probability values is relative to the 3D mesh at a surface point of the 3D mesh onto which the set of single-image probability values is projected (“the weight of each pixel of each image contributing to the texture is calculated accordingly … the value of the visible area is the cosine value of the angle between the normal of the model vertex corresponding to the point and the direction from the point to the center of the camera,” Zhu, para. 20; see claim 1 for motivation to combine).

Regarding claim 14, the combination of Rong, Zhu, and Gu renders obvious A computer system comprising a processing unit and a data storage unit, wherein the data storage unit is configured to receive and store input data, to store one or more algorithms, and to store and provide output data (these are all implicit features of “computer vision” and “image processing” systems such as those disclosed in Rong and Zhu), the algorithms comprising at least a structure-from-motion algorithm (“open-source [4]–[7] 3D reconstruction and photogrammetry softwares,” Rong, pg. 3550, sec. I, para. 1; reference [4] in the bibliography of Rong is titled “Structure-from-motion revisited”), wherein the processing unit is configured to generate, based on the input data and using the algorithms, at least one land-cover map of an area as output data by performing the method according to claim 1 [and 13] (Rong, Fig. 1).

Regarding claim 16, the combination of Rong, Zhu, and Gu renders obvious A computer program product comprising program code which is stored on a non-transitory machine-readable medium, and having computer-executable instructions for performing the method according to claim 1 [and claim 13] (these are all implicit features of “computer vision” and “image processing” systems such as those disclosed in Rong, Zhu, and Gu).

Regarding claim 21, the combination of Rong, Zhu, and Gu renders obvious wherein the 3D-model shows the most probable land-cover class (“assign a most likely category lf to each facet,” Rong, pg. 3553, sec. C, para. 2).

Regarding claim 22, the combination of Rong, Zhu, and Gu renders obvious wherein at least a subset of the cameras is embodied as a stereo camera (“photogrammetry 3D meshes,” Rong, pg. 3550, sec. I, para. 1; “a set of calibrated cameras,” Rong, pg. 3553, sec. C, para. 2) or as a range-imaging camera and configured to provide the depth information (optional).

Claims 3 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rong, Zhu, and Gu, and further in view of Makovsky (US 2018/0113581).

Regarding claim 3, the combination of Rong, Zhu, and Gu renders obvious wherein: a plurality of land-cover maps are generated for the same area (Rong, Fig. 5). The combination of Rong, Zhu, and Gu does not disclose a user input is received, the user input comprising selecting one of the plurality of land-cover maps to be displayed, and the selected land-cover map is displayed. In the same art of displaying geographical maps, Makovsky teaches a user input is received, the user input comprising selecting one of the plurality of land-cover maps to be displayed, and the selected land-cover map is displayed, particularly wherein indicators of selectable land-cover maps of the plurality of land-cover maps are displayed and the user input comprises selecting one of the selectable land-cover maps (“a geographical map at a first level of geographical abstraction,” para. 93; “a geographical map at a second level of geographical abstraction,” para. 96; “The graphical user interface can include a frame for displaying a presently selected map and user interface elements for allowing a user of the client to toggle between the maps,” para. 97). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Makovsky to the combination of Rong, Zhu, and Gu. The motivation would have been to provide additional control to a user.

Regarding claim 18, the combination of Rong, Zhu, Gu, and Makovsky renders obvious wherein indicators of selectable land-cover maps of the plurality of land-cover maps are displayed and the user input comprises selecting one of the selectable land-cover maps (“The graphical user interface can include a frame for displaying a presently selected map and user interface elements for allowing a user of the client to toggle between the maps,” para. 97; see claim 3 for motivation to combine).

Claims 6, 7, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rong, Zhu, and Gu, and further in view of Wilson et al. (US 2011/01097179; hereinafter “Wilson”).

Regarding claim 6, the combination of Rong, Zhu, and Gu does not disclose wherein for each pixel of the 2D land-cover map, a ray is created that runs in vertical direction from the respective pixel through the 3D mesh, the ray crossing a surface of the 3D mesh at one or more crossing points. In the same art of geographical 3D modeling, Wilson teaches wherein for each pixel of the 2D land-cover map, a ray is created that runs in vertical direction from the respective pixel through the 3D mesh, the ray crossing a surface of the 3D mesh at one or more crossing points (“a three-dimensional model of terrain … the projected image is sampled from nadir view 202 [of Fig. 2]. Each sampled point extends from a parallel ray (such as a ray 208) extending from nadir view 202. For example, point 108 on the projected image appears at point 204 on the orthorectified image … to determine the pixel at point 204 on viewport 202, ray 208 is extended to determine intersection point 108 on terrain 104,” paras. 31-32). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Wilson to the assigning of probability values of the combination of Rong, Zhu, and Gu. The motivation would have been that it “makes it easier to overlay map data” (Wilson, para. 33) and “To avoid … burdensome computation” (Wilson, para. 31).

Regarding claim 7, the combination of Rong, Zhu, Gu, and Wilson renders obvious wherein the area comprises three-dimensional objects comprising at least one of buildings, vehicles and trees (“office buildings, a large construction site, and many patches of green vegetation,” Rong, pg. 3555, col. 1, para. 1; “The model may further include other natural and man made features, such as buildings,” Wilson, para. 55; see claim 6 for motivation to combine), the at least one 2D land-cover map comprising at least a vision-related land-cover map showing land-cover information for those surfaces of the 3D mesh that are visible from an orthographic view; and/or a ground-related land-cover map showing land-cover information for a ground surface of the 3D mesh, wherein: for generating the vision-related land-cover map the overall probability values of a highest crossing point of each ray are assigned to the respective pixel, and for generating the ground-related land-cover map the overall probability values of a lowest crossing point of each ray is assigned to the respective pixel (“the three-dimensional terrain to determine a set of points, such as a point 108 [of Fig. 2], and then sampling those points onto the nadir view,” Wilson, para. 32; see Wilson, Fig. 2, where data from the 3D terrain mesh is orthographically projected onto a 2D top-down image, and when applied to the combination of Rong, Zhu, and Gu this would teach orthographically projecting the probability values from the 3D terrain mesh of the combination of Rong, Zhu, and Gu; see claim 6 for motivation to combine).

Regarding claim 19, the combination of Rong, Zhu, and Gu renders obvious wherein the 2D land-cover map is generated by rasterization of the 3D mesh (e.g. Fig. 1(d) of Rong illustrates a 2D rasterization of the 3D semantic mesh). The combination of Rong, Zhu, and Gu does not specifically recite generating an orthographic view. In the same art of geographical 3D modeling, Wilson teaches generating an orthographic view (“Points are sampled from the projected photographic image at the intersection of the three-dimensional model of terrain and parallel rays extended from a virtual viewport having a nadir perspective. The sampled points are assembled into an orthorectified image,” abstract). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Wilson to the combination of Rong, Zhu, and Gu. The motivation would have been that it “makes it easier to overlay map data” (Wilson, para. 33) and “To avoid … burdensome computation” (Wilson, para. 31).

Regarding claim 20, the combination of Rong, Zhu, Gu, and Wilson teaches the claimed a vision-related land-cover map showing land-cover information for those surfaces of the 3D mesh that are visible from an orthographic view of parent claim 7 (see Wilson, Fig. 2), and therefore the limitation of claim 20 is optional due to the “and/or” language of claim 7. See claim 6 for motivation to combine Wilson with the combination of Rong, Zhu, and Gu.
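
Claims 6 and 7 turn on vertical rays that cross the 3D mesh at one or more points, with the highest crossing feeding a vision-related map and the lowest a ground-related map. A self-contained sketch of that construction follows; the function name and array layout are assumed for illustration and this is not code from Wilson or the application. For a vertical ray, the ray-triangle intersection reduces to a 2D point-in-triangle test.

```python
import numpy as np

def vertical_crossings(tris, x, y):
    """Z-values where a vertical ray at map pixel (x, y) crosses a mesh.

    tris: (N, 3, 3) array of triangles given as xyz vertex coordinates.
    """
    a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
    # 2D barycentric coordinates of (x, y) in each triangle's XY projection.
    v0, v1 = (b - a)[:, :2], (c - a)[:, :2]
    v2 = np.array([x, y]) - a[:, :2]
    den = v0[:, 0] * v1[:, 1] - v0[:, 1] * v1[:, 0]
    ok = np.abs(den) > 1e-12                 # skip degenerate/vertical triangles
    safe = np.where(ok, den, 1.0)
    u = (v2[:, 0] * v1[:, 1] - v2[:, 1] * v1[:, 0]) / safe
    v = (v0[:, 0] * v2[:, 1] - v0[:, 1] * v2[:, 0]) / safe
    hit = ok & (u >= 0) & (v >= 0) & (u + v <= 1)
    # Interpolate the crossing height from the triangle's vertex z-values.
    z = a[:, 2] + u * (b[:, 2] - a[:, 2]) + v * (c[:, 2] - a[:, 2])
    return np.sort(z[hit])

# Per claim 7: highest crossing -> vision-related map pixel,
#              lowest crossing  -> ground-related map pixel.
# zs = vertical_crossings(mesh_triangles, px_x, px_y)
# vision_z, ground_z = (zs[-1], zs[0]) if zs.size else (np.nan, np.nan)
```
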
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Rong, Zhu, and Gu, and further in view of Tang et al. (CN 114119897 A; hereinafter “Tang”).

Regarding claim 10, the combination of Rong, Zhu, and Gu renders obvious wherein the plurality of input images comprise one or more aerial image that are captured by one or more aerial cameras mounted at satellites, airplanes or unmanned aerial vehicles (“aerial images captured by drones,” Rong, pg. 3550, sec. I, para. 1; “the orthoimage,” Rong, pg. 3556, sec. C, para. 2). The combination of Rong, Zhu, and Gu does not disclose a plurality of additional input images that are captured by fixedly installed cameras and/or cameras mounted on ground vehicles. In the same art of geographical 3D modeling, Tang teaches wherein the plurality of input images comprise one or more aerial image that are captured by one or more aerial cameras mounted at satellites, airplanes or unmanned aerial vehicles; and a plurality of additional input images that are captured by fixedly installed cameras and/or cameras mounted on ground vehicles (“collecting aerial survey data using drone oblique photography; obtaining camera data of the construction site through a portable camera and/or a ground collection vehicle; building a three-dimensional real-scene model by combining the aerial survey data and the camera data,” abstract). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Tang to the combination of Rong, Zhu, and Gu. The motivation would have been that it “improves the completeness and precision of the three-dimensional real-scene model” (Tang, abstract).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan McCulley whose telephone number is (571) 270-3754. The examiner can normally be reached Monday through Friday, 8:00am - 4:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached on (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN MCCULLEY/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Jul 14, 2023
Application Filed
Apr 09, 2025
Non-Final Rejection — §103
Jul 11, 2025
Response Filed
Sep 18, 2025
Final Rejection — §103
Dec 09, 2025
Request for Continued Examination
Jan 07, 2026
Response after Non-Final Action
Jan 16, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602859
INFORMATION PROCESSING SYSTEM, RAY TRACE METHOD, AND PROGRAM FOR RADIO WAVE PROPAGATION SIMULATION
2y 5m to grant; granted Apr 14, 2026
Patent 12586290
TEMPORALLY COHERENT VOLUMETRIC VIDEO
2y 5m to grant; granted Mar 24, 2026
Patent 12555335
SYSTEMS AND METHODS FOR ENHANCING AND DEVELOPING ACCIDENT SCENE VISUALIZATIONS
2y 5m to grant; granted Feb 17, 2026
Patent 12548241
HIGH-FIDELITY THREE-DIMENSIONAL ASSET ENCODING
2y 5m to grant; granted Feb 10, 2026
Patent 12541904
ELECTRONIC DEVICE, METHOD FOR PROMPTING FUNCTION SETTING OF ELECTRONIC DEVICE, AND METHOD FOR PLAYING PROMPT FILE
2y 5m to grant; granted Feb 03, 2026
Based on the examiner's 5 most recent grants. Studying what changed in these cases can show what it takes to get past this examiner.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 99% (+29.7%)
Median Time to Grant: 2y 6m
PTA Risk: High

Based on 493 resolved cases by this examiner. Grant probability is derived from the career allow rate.
