Prosecution Insights
Last updated: April 19, 2026
Application No. 18/705,040

METHOD AND SYSTEM FOR DETERMINING A LOCATION OF A VIRTUAL CAMERA IN INDUSTRIAL SIMULATION

Status: Non-Final OA (§103)
Filed: Apr 26, 2024
Examiner: RENZE, GEORGE NICHOLAS
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Siemens Industry Software Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (16 granted / 24 resolved), +4.7% vs TC avg, above average
Interview Lift: +33.3% for resolved cases with interview vs. without (a strong lift)
Avg Prosecution: 2y 7m typical timeline, 33 applications currently pending
Total Applications: 57 across all art units (career history)

Statute-Specific Performance

§101: 2.7% (-37.3% vs TC avg)
§103: 73.3% (+33.3% vs TC avg)
§102: 16.0% (-24.0% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Compared against an estimated Tech Center average • Based on career data from 24 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: FIG. 3, Reference Element 306 is never mentioned within the specifications. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The disclosure is objected to because of the following informalities:

Page 12, Paragraph 59, Line 9, reference element number “206” is used for “fence”, however, this paragraph is describing FIG. 3 and should be reference element number “306” instead, which would also fix the issues with the Drawings (see Drawings above).

Page 12, Paragraph 60, Line 13, “the conveyor” is incorrectly referenced as element number 204, however it should be 205 according to FIG. 2.

Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 19-36 are rejected under 35 U.S.C. 103 as being unpatentable over Richey et al. (Pub. No.: US 2018/0012411 A1), hereinafter Richey, in view of Rowell et al. (Pub. No.: US 2020/0342652 A1), hereinafter Rowell.

Regarding claim 19, Richey discloses a method (FIG. 5 and paragraph 61 teach that referring to FIG. 5, a method of collecting background images and reflection maps according to one embodiment is shown.) for determining, by a data processing system (Paragraph 188 teaches that in one embodiment, processing circuitry 102 is arranged to process data, control data access and storage, issue commands, and control other operations implemented by the computer system 100.
In more specific examples, the processing circuitry 102 is configured to evaluate training images, test images, and camera images for training or generating estimands for augmented content. Processing circuitry 102 may generate training images including photographs and renders described above.), a location of a virtual camera for virtually capturing an image sequence of a virtual scene of an industrial simulation (Paragraph 165 teaches that the process proceeds to an act A146 where the location and orientation of the virtual camera with respect to the object is stored for subsequent executions of the tracking process.), the method comprises the steps of: (a) receiving inputs on data of the virtual scene containing a set of objects wherein at least two of the objects are in relative motion during a given time interval Ti (FIG. 4 and paragraph 59 teach that referring to FIG. 4, one embodiment of a deep neural network which performs both classification of whether a real world object is present and calculation of AR data, such as estimands for position, rotation, lighting type, lighting position/direction and object state based on a GoogLeNet network is shown. The illustrated network outputs the following estimated values: position, rotation, the lighting position, lighting type, the state of the object and whether it is present in the input image. Also, paragraph 139 teaches that if a digital model is not available, and it is not feasible to compute the pose of an object in photographs, then the photographs of the object may be combined using photogrammetry/structure from motion (SfM) to create a digital model. Once a digital model is constructed, the material properties may be described so that the renders can model the physical properties of the object.); (b) receiving inputs on data of at least two of the objects of the set of objects, wherein the at least two objects are in relative motion in the given time interval Ti (Paragraph 205 teaches that with pose-based AR, it may be more efficient to separate the detection and tracking process when analyzing an image sequence. The detection phase may include computing the localization on the entire camera image. Once the object is detected, it may be more efficient to look for the object in a restricted area of the image where it was last found. This assumes the object motion is small between successive video frames. Even when the assumption is broken, the detection phase may rediscover the object if it is still visible. Instead of doing a virtual camera transform to zoom into the image, a region in the camera image may be cropped during detection. If it is not found in the tracking step, then the detection phase restarts by scanning the entire image frame in one embodiment.) and are to be visible in a captured image sequence of the virtual scene in at least two time points, wherein the objects being hereinafter called focus objects (Paragraph 41 teaches that tracking an object is estimating its location in a sequence of images. The network performs a regression estimate of the values of pose, lighting environment, and physical state of an object in one embodiment. Regression maps one set of continuous inputs (x) to another set of continuous outputs (y). A neural network may additionally perform binary classification to estimate if the object is visible in the image so that the other estimates are not acted upon when the object is not present since the network will always output some value for each output. 
For brevity, we collectively refer to the network's estimate of pose, physical state, lighting environment, and presence as the estimands. Depending on the application, the estimands may be all of these outputs or a subset of them. In some embodiments, the network is not trying to classify the pose from a finite set of possible poses, instead it estimates a continuous pose given an image of a real world object in the real world in some embodiments. In some embodiments, training of the network may be accomplished by either providing computer generated images (i.e. renders) or photographs of the object to the neural network. The real world object may be of any size, even as large as a landscape. Also, the real world object may be entirely seen from within the inside where the real world object surrounds the camera in the application.); (c) receiving inputs on data of a set of camera locations candidates for capturing the image sequence (Paragraph 51 teaches that at acts A10 and A12, a plurality of background images and a plurality of reflection maps are accessed by the computer system. For objects that can be seen in multiple locations and potentially multiple environments it is desired in some embodiments that the network learn to ignore the information surrounding the object. One example of a real world object where the surroundings could change would be a tank. The tank could be seen in many types of locations, in a desert, in a city, or within a museum. An example of where an environment might change would be the Statue Of Liberty. The statue is always there but the surrounding sky may appear different, and buildings in the background can change. To train the network to ignore the backgrounds in these situations, a large collection of images (e.g., 25,000 or more) and environment maps (e.g., 10 more or less) may be used in one embodiment. Additional details regarding acts A10 and A12 are discussed below with respect to FIG. 5.); (d) for each camera location candidate, generating a map of pixels indicating a presence of the at least two focus objects and their visibility level in a corresponding capturable image sequence, the map hereinafter being called a visibility map (Paragraph 45 teaches that in one embodiment, the classification and augmented content neural networks each include an input layer, one or more hidden layers, and an output layer of neurons. The input layer maps to the pixels of an input camera image of the real world. If the image is a grayscale image, then the intensities of the pixels are mapped to the input neurons. If the image is a color image, then each color channel may be mapped to a set of input neurons. If the image also contains depth pixels (e.g. RGB-D image) then all four channels may also be mapped to a set of input neurons. The hidden layers may consist of neurons that form various structures and operations that include but are not limited to those mentioned above. Parts of the connections may form cycles in some applications and these networks are referred to as recurrent neural networks. Recurrent neural networks may provide additional assistance in tracking objects since they can remember state from previous video frames. The output layer may describe some combination of augmented reality estimands: the object pose, physical state, environment lighting, the binomial classification of the presence of the object in the image, or even additional estimands that may be desired. 
Additionally, paragraph 150 teaches that in one embodiment, the mapping to remove distortions may be pre-computed for a grid of points covering the image. The points map image pixels to where they should appear after the distortions are removed. This may be efficiently implemented on a GPU with a mesh model where vertices are positions by the grid of points. The UV coordinates of the mesh then map the pixels from the input image to the undistorted image coordinates. This process may be performed on every frame before it is sent to the neural network for processing in one embodiment. Hereafter, we assume the processing will be performed on the undistorted camera image according to some embodiments and it may be referred to as simply the camera image.); and (e) from a generated set of visibility maps, selecting a camera location corresponding to the visibility map (FIG. 10 and paragraphs 163-165 teach that referring again to FIG. 10, the zoomed image, which is a higher resolution image of the object compared with the object in the camera image, is evaluated using a neural network to generate a plurality of estimands for one or more of object pose, lighting pose, object presence and object state which are useable to generate augmented content regarding the object according to one embodiment. The zoomed image is evaluated by the network using a feed forward process through the network to generate the estimands at an act A142. The use of the higher resolution image of the object provides an improved estimate of the estimands compared with use of the camera image. At an act A144, it is determined whether the object has been located within the zoomed image. For example, the uncertainty estimate discussed with respect to act A136 may be utilized determine whether the object is found in one embodiment. If the object has not been found, the process returns to act A130. If the object has been found, the process proceeds to an act A146 where the location and orientation of the virtual camera with respect to the object is stored for subsequent executions of the tracking process.). However, Richey fails to disclose for which a desired visibility level of the at least two focused objects is reached or iteratively proceeding by adjusting at least one of the camera location candidates and by iteratively executing steps (d) – (e). Rowell discloses for which a desired visibility level of the at least two focused objects is reached or iteratively proceeding by adjusting at least one of the camera location candidates and by iteratively executing steps (d) – (e) (FIG. 5 and paragraph 140 teach that as shown in FIG. 5, in one embodiment, a camera setting file 500 includes camera position data 510 describing the location of a virtual camera within a scene, camera intrinsic 520 parameters describing internal properties of a camera device, camera calibration metadata 530 including one or more parameters for calibrating one or more virtual camera modules, camera capture settings 540 for modifying a view perspective captured in a synthetic image, and image augmentation settings 550 for augmenting one or more synthetic images. Paragraph 142 teaches that camera intrinsic 520 parameters describe the internal properties of a camera device. ... To rapidly iterate between simulating capture performance of different actual camera devices, the camera intrinsic parameters 520 included in a camera settings file 500 may be changed in real time. FIG. 6 and paragraph 147 teach that FIG. 
6 describes camera calibration metadata 530 included in a camera file 500 for a virtual camera device. ... In scenes including two or more virtual cameras arranged in a stereoscopic arrangement, camera calibration metadata 530 describes a calibration position for each virtual camera plus a calibration position for a stereoscopic dual camera device including two or more virtual cameras. Lastly, paragraph 132 teaches that the training data assembly service 103 compiles a smaller and/or distinct training dataset including synthetic images, augmented images, additional image data files, scene metadata, and/or image data metadata having different characteristics. The machine learning service 105 then provides to new dataset in real time to a machine learning system and receives feedback on training using the new dataset. If the feedback is negative, the machine learning service 105 will fetch a new dataset having different characteristics from the training data assembly service 103 until providing an effective dataset iteration to the machine learning system.). Since Richey teaches the initial method steps for receiving inputs of data involving multiple objects that are in the view of a certain camera location, is able to map the objects to certain image pixel points and can store different camera viewpoints for usage in potentially adjusting the camera location/viewpoint to a desired location based off of the mapped pixel points and Rowell teaches method steps for generating image data of multiple objects, with the capabilities to make camera location adjustments that can be updated through multiple repeated iterations until the training dataset matches the desired object characteristics/viewpoint, it would have been obvious to a person having ordinary skill in the art to combine the functions together, so that the camera location could be updated consistently, over numerous iterations, until the mapped pixels of the focused objects matched the desired camera location. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Richey to incorporate the functions of Rowell so that multiple iterations of camera location updates could be utilized to help improve the overall focus and clarity of the focused objects and provide the user with an ideal viewpoint of those objects. Regarding claim 20, Richey in view of Rowell disclose everything claimed as applied above (see claim 19), in addition, Richey in view of Rowell disclose wherein the visibility map is generated by superimposing at least two images captured at at least two time points in the given time interval Ti and by indicating in each map pixel if a portion of a focus object is present and, if yes, if a present focus object portion is occluded (Paragraph 56 of Richey teaches that in one embodiment, an image of the training or test set is generated by compositing one of the foreground images with a random one of the background images where the object of interest is superimposed upon one of the background images. Additionally, paragraphs 172-173 of Richey teach that there may be different detection and tracking strategies depending on the goals of the application. In one application, only recognizing/detecting and tracking of a single object is used. Other applications may track multiple objects one at a time (e.g., in a sequence) or track multiple objects simultaneously in the same images.
For example, if one of a plurality of objects is detected and tracked at a time in a sequence, the computer system may run a classifier network to identify the objects present in the camera image.). Regarding claim 21, Richey in view of Rowell disclose everything claimed as applied above (see claim 19), in addition, Richey in view of Rowell disclose wherein the visibility level of the visibility map is computable via a set of visibility rating parameters computable from the visibility map (Paragraphs 72-73 of Richey teach that at act A34, it is determined if the object would be visible in an image as result of the selections of acts A30 and A32. If not, the process returns to act A30. If the object would be visible, the process proceeds to an act A36 where one of a plurality of states in which the object to be depicted is selected.); the visibility rating parameters are selected from the group consisting of: parameters for rating an occlusion amount of the at least two focus objects (Paragraph 84 of Rowell teaches that additional image data channels generated by the AIDRU 106 include optical flow information, noise data, and occlusion maps. Additionally, paragraph 87 teaches that noise data and occlusion maps may also be generated by the AIDRU 106. Noise data includes noise output including occluded areas, distorted objects, and image data dropout zones having one or more image degradation effects; optical flow data including a directional output and/or speed output; and image segmentation data including object segmentation maps, noise vs. signal segmentation maps, lens effect segmentation maps, and noise effect segmentation maps.); parameters for rating a distance between at least two of the focus objects (Paragraph 68 of Richey teaches that before training of the network is started, the user sets the viewing and environmental parameters for which the network is expected to work. These parameters can be positional values like how close or far the object can be from the camera and orientation values of the object, i.e. the range of roll, pitch, and yaw an object can experience.); parameters for rating a relative size of the at least two focus objects (Paragraph 158 of Richey teaches that following the location of object, the camera can be effectively zoomed into the region of interest that contains the object. The object may be cropped from the larger image by determining the size and center of the object as it appears in the image in one embodiment.); and parameters for rating 2D motion direction of the at least two focus objects (Paragraph 84 of Rowell teaches that additional image data channels generated by the AIDRU 106 include optical flow information, noise data, and occlusion maps. ... In some embodiments, the AIDRU 106 generates object velocity data by determining displacement of an object and relating displacement to time. An object's displacement may be determined by calculating the location of a point included in an object in a first image and comparing the location of the same point in a second image or video frame captured before or after the first image. The AIDRU 106 may then determine the difference in capture time of the two sequential images or frames and calculate velocity by obtaining the quotient of the point's displacement and the difference in image capture time.). 
Regarding claim 22, Richey in view of Rowell disclose everything claimed as applied above (see claim 21), in addition, Richey in view of Rowell disclose wherein the visibility map is selected via a multiple criteria decision making algorithm on the set of visibility rating parameters computed for the set of visibility maps (Paragraph 69 of Richey teaches that since camera orientation is relative to an object's frame of reference, some of these values are correlated to the viewing parameters. If training images are being created by rendering for example as discussed below, values within these given ranges may be selected. In some embodiments, the values are randomly selected to prevent unwanted biases in the training set which could occur from sampling values on a grid. Additionally, paragraph 113 teaches that for each training image, a random camera or object pose, reflection map, lighting environment, physical state of the object and background image are selected and then used to render the object as an image while recording the corresponding estimands for the image.). Regarding claim 23, Richey in view of Rowell disclose everything claimed as applied above (see claim 19), in addition, Richey in view of Rowell disclose wherein the visibility map is selected by applying a selector module previously trained with a machine learning algorithm (Paragraph 60 of Richey teaches that another embodiment of a neural network designed to assist in finding the pose of an object is a network that was previously trained to find keypoints on an object. Using a neural network, the location of the keypoints on an object can be found in image space as discussed in Pavlakos, Georgios, Xiaowei Zhou, Aaron Chan, Konstantinos G. Derpanis, and Kostas Daniilidis, “6-DoF Object Pose from Semantic Keypoints,” 2017, and http://arxiv.org/abs/1703.04670, the teachings of which are incorporated herein by reference. Additionally, paragraph 114 of Richey teaches that with an unlimited number of possible training images, it is feasible to train an entire deep neural network from scratch. It is also possible to retrain an existing network for different objects, for example, using transfer learning. It may be the case that a network has been trained on one object, then a new network is retained for another object with fewer training images. Also, paragraph 121 of Rowell teaches that the training data pipeline 120 organizes synthetic images and additional image data channels provided by the synthetic image generation system 100, complies specific selections of synthetic images and additional image data into training datasets generated for specific CV applications, and provides the training data to machine learning systems for further processing.). Regarding claim 24, Richey in view of Rowell disclose everything claimed as applied above (see claim 19), in addition, Richey in view of Rowell disclose wherein any of the inputs received at item (a), (b), or (c) is: automatically determined (Paragraph 120 of Rowell teaches that indices generated by the image indexing module 108 are used by the training data assembly service 103 during image dataset compilation. 
Indexing and/or search functions performed by the image indexing module 108 may be performed automatically on a periodic, recurring, and/or scheduled basis as chronological jobs.); manually inputted by a user (Paragraph 51 of Rowell teaches that other embodiments of the user interface 109 include a digital display screen with physical buttons for inputting control commands.); automatically extracted from manufacturing process data of the industrial simulation (Paragraph 132 of Rowell teaches that in other embodiments, the machine learning service 105 indicates a training dataset having particular characteristics failed to produce accurate machine learning models within a specific time period and/or a machine learning system is unable to process the training dataset in a reasonable time using available computational resources. In response, the training data assembly service 103 compiles a smaller and/or distinct training dataset including synthetic images, augmented images, additional image data files, scene metadata, and/or image data metadata having different characteristics. The machine learning service 105 then provides to new dataset in real time to a machine learning system and receives feedback on training using the new dataset.); and a combination of above (Paragraph 50 of Rowell teaches that the user interface 109 provides a platform for communicating with multiple components of the synthetic image generation system 101 to control the generation and use of synthetic image and additional image data files. In some examples the user interface 109 is a digital interface comprising digital buttons for selecting and inputting control commands to synthetic image generation system components. The user interface 109 may display current settings for a selected component or show a preview of a synthetic image or additional image data channel that will be generated using the current settings. The user interface 109 may support real time changes to one or more synthetic image generation system settings.). Regarding claim 25, the system steps correspond to and are rejected similarly to the method steps of claim 19 (see claim 19 above). In addition, Richey in view of Rowell disclose a data processing system (FIG. 13 and paragraph 187 of Richey teaches that referring to FIG. 13, one example embodiment of a computer system 100 is shown. The display device 10 and/or server device 100 may be implemented using the hardware of the illustrated computer system 100 in example embodiments.), comprising: a processor (Paragraph 187 of Richey teaches that the depicted computer system 100 includes processing circuitry 102, storage circuitry 104, a display 106 and communication circuitry 108.); and an accessible memory (Paragraph 191 of Richey teaches that some more specific examples of computer-readable storage media include, but are not limited to, a portable magnetic computer diskette, such as a floppy diskette, a zip disk, a hard drive, random access memory, read only memory, flash memory, cache memory, and/or other configurations capable of storing programming, data, or other digital information.). Regarding claim 26, the system steps correspond to and are rejected similarly to the method steps of claim 20 (see claim 20 above). Regarding claim 27, the system steps correspond to and are rejected similarly to the method steps of claim 21 (see claim 21 above). Regarding claim 28, the system steps correspond to and are rejected similarly to the method steps of claim 22 (see claim 22 above). 
Regarding claim 29, the system steps correspond to and are rejected similarly to the method steps of claim 23 (see claim 23 above). Regarding claim 30, the system steps correspond to and are rejected similarly to the method steps of claim 24 (see claim 24 above). Regarding claim 31, the non-transitory computer-readable medium steps correspond to and are rejected similarly to the method steps of claim 19 (see claim 19 above). In addition, Richey in view of Rowell disclose a non-transitory computer-readable medium comprising executable instructions (Paragraphs 190-191 of Richey teach that at least some embodiments or aspects described herein may be implemented using programming stored within one or more computer-readable storage medium of storage circuitry 104 and configured to control appropriate processing circuitry 102. ... For example, exemplary computer-readable storage media may be non-transitory and include any one of physical media such as electronic, magnetic, optical, electromagnetic, infrared or semiconductor media.). Regarding claim 32, the non-transitory computer-readable medium steps correspond to and are rejected similarly to the method steps of claim 20 (see claim 20 above). Regarding claim 33, the non-transitory computer-readable medium steps correspond to and are rejected similarly to the method steps of claim 21 (see claim 21 above). Regarding claim 34, the non-transitory computer-readable medium steps correspond to and are rejected similarly to the method steps of claim 22 (see claim 22 above). Regarding claim 35, the non-transitory computer-readable medium steps correspond to and are rejected similarly to the method steps of claim 23 (see claim 23 above). Regarding claim 36, the non-transitory computer-readable medium steps correspond to and are rejected similarly to the method steps of claim 24 (see claim 24 above).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ziegler et al. (Pub. No.: US 2021/0082185 A1) teaches an apparatus, method and computer program for rendering a visual scene comprised of multiple objects and target viewpoints/positions. Brodsky et al. (Pub. No.: US 2020/0111255 A1) teaches a method and system for rendering specific virtual content in any location. Frommhold et al. (U.S. Patent: #10,872,463 B2) teaches a method for comprising depth information for a three-dimensional scene.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to George Renze whose telephone number is (703)756-5811. The examiner can normally be reached Monday-Friday 9:00am - 6:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/G.R./
Examiner, Art Unit 2613

/XIAO M WU/
Supervisory Patent Examiner, Art Unit 2613
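For readers parsing the §103 mapping, the sketch below illustrates in Python the concept recited in claims 19-22 as summarized in this Office Action: build a per-pixel visibility map for each candidate camera location by superimposing frames from the interval Ti, rate each map, and either select a candidate that reaches a desired visibility level or signal that the candidates should be adjusted and the steps repeated. This is a minimal illustration of the claim language only; the names (VisibilityMap, rate_visibility, choose_camera) and the 0.8 threshold are hypothetical, and nothing here reflects the applicant's actual implementation or the Richey/Rowell disclosures.

```python
# Illustrative sketch of the claim 19-22 concept -- hypothetical names and values,
# not the applicant's implementation and not the cited references' teachings.
from dataclasses import dataclass
import numpy as np


@dataclass
class VisibilityMap:
    present: np.ndarray   # bool HxW: a focus object appears at this pixel in >= 1 frame
    occluded: np.ndarray  # bool HxW: that focus-object portion is occluded in >= 1 frame


def build_visibility_map(focus_masks, occluder_masks):
    """Superimpose frames captured at several time points in Ti.

    focus_masks / occluder_masks: lists of bool HxW arrays, one per frame,
    marking pixels covered by focus objects and by geometry that hides them.
    """
    present = np.zeros_like(focus_masks[0], dtype=bool)
    occluded = np.zeros_like(focus_masks[0], dtype=bool)
    for fg, oc in zip(focus_masks, occluder_masks):
        present |= fg
        occluded |= fg & oc  # focus-object portion hidden in this frame
    return VisibilityMap(present=present, occluded=occluded)


def rate_visibility(vmap):
    """One possible rating parameter: fraction of focus-object pixels that
    remain unoccluded across the superimposed frames."""
    total = vmap.present.sum()
    if total == 0:
        return 0.0
    return float((vmap.present & ~vmap.occluded).sum()) / float(total)


def choose_camera(candidate_maps, desired_level=0.8):
    """Return the best-rated candidate if it reaches the desired visibility
    level; otherwise return None, which a caller could treat as the cue to
    adjust candidate locations and repeat steps (d)-(e)."""
    ratings = {name: rate_visibility(v) for name, v in candidate_maps.items()}
    best = max(ratings, key=ratings.get)
    return (best, ratings) if ratings[best] >= desired_level else (None, ratings)
```

A fuller treatment, per the claim 21-22 language, would combine several rating parameters (occlusion amount, distance between focus objects, relative size, 2D motion direction) through a multiple-criteria decision-making step rather than the single unoccluded-pixel ratio used here.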

Prosecution Timeline

Apr 26, 2024
Application Filed
Jan 10, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602407
SYSTEMS AND METHODS FOR GENERATING A UNIQUE IDENTITY FOR A GEOSPATIAL OBJECT CODE BY PROCESSING GEOSPATIAL DATA
2y 5m to grant • Granted Apr 14, 2026
Patent 12573147
LANDMARK DATA COLLECTION METHOD AND LANDMARK BUILDING MODELING METHOD
2y 5m to grant • Granted Mar 10, 2026
Patent 12555315
HEURISTIC-BASED VARIABLE RATE SHADING FOR MOBILE GAMES
2y 5m to grant • Granted Feb 17, 2026
Patent 12530759
System and Method for Point Cloud Generation
2y 5m to grant • Granted Jan 20, 2026
Patent 12505508
DIGITAL IMAGE RADIAL PATTERN DECODING SYSTEM
2y 5m to grant • Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 99% (+33.3%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
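The headline figures are consistent with a simple reading of the examiner's career statistics; the arithmetic below is an assumed reconstruction, not the tool's documented methodology (in particular, the additive treatment of the interview lift and the 99% display cap are guesses).

```python
# Assumed reconstruction of the projection figures shown above -- not a documented method.
granted, resolved = 16, 24                   # from the Examiner Intelligence panel
allow_rate = granted / resolved              # ~0.667, displayed as 67%
interview_lift = 0.333                       # +33.3 points, per the interview-lift stat
with_interview = min(allow_rate + interview_lift, 0.99)   # displayed as 99% (cap is a guess)
print(f"base {allow_rate:.0%}, with interview {with_interview:.0%}")
```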
