Prosecution Insights
Last updated: April 19, 2026
Application No. 18/790,413

FILTERING THREE-DIMENSIONAL SHAPE DATA FOR TRAINING TEXT TO 3D GENERATIVE AI SYSTEMS AND APPLICATIONS

Status: Non-Final OA (§103)
Filed: Jul 31, 2024
Examiner: HA, ALICIA
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Typical Timeline: 2y 9m average prosecution
Career History: 12 total applications across all art units; 12 currently pending

Statute-Specific Performance

§101: 3.6% (-36.4% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§103: 67.9% (+27.9% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Based on career data from 0 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because of the following informalities:
In paragraph [0004], line 5, "and/or their components),." should be "and/or their components).".
In paragraph [0063], line 7, "a subset shapes" should be "a subset of shapes".
Paragraphs [00100] through [00254] should be numbered as [0100] through [0254].
Appropriate correction is required.

Claim Objections

Claim 1 is objected to because of the following informalities:
In line 2, "a set shapes" should be "a set of shapes".
In line 4, "the viewpoints" does not directly map to "one or more viewpoints" in line 3 and may be indefinite. Therefore, if "the viewpoints" in line 4 directly corresponds to "one or more viewpoints" in line 3, "the viewpoints" should be changed to "the one or more viewpoints".
Appropriate correction is required.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6, 9-15, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Xiong et al. (US 2023/0196832, hereinafter Xiong), in view of Collins (US 2020/0005083), and further in view of Ramanana Rahary et al. (US 2026/0010970, hereinafter Ramanana).
Regarding claim 1, Xiong teaches a method comprising: ([0010] "The computer-implemented method") generating embeddings of the set of shapes as depicted in the set of images; ([0009] "determining, by at least one image group classifier, a first set of embeddings quantifying a plurality of features in a set of image data objects; processing, by the at least one image group classifier, the first set of embeddings for each image data object in the set of image data objects through a set of machine learning parameters") determining, based at least on one or more machine learning models processing the embeddings, scores corresponding to one or more quality metrics associated with the set of images depicting the set of shapes; ([0010] "determining, based on the first set of embeddings, at least one image quality metric; determining, based on the set of machine learning parameters and the first set of embeddings for that image data object") determining a subset of images from the set of images based at least on the scores; ([0073] "In some embodiments, the full group set of images may be filtered to exclude outlier images that may not meet an image quality (size, resolution, light/color range/average, etc.) threshold… for being ranked", where "ranking engine may be further configured to determine the ranked list of the subset of image data objects based on, for each image data object in the subset of image data objects: the at least one image quality metric" [0073], and "Group selector 556 may be configured to select a set of image data objects for a selected group" [0073]) and outputting second data representative of the subset of images ([0007] "return a ranked list of the subset of image data objects, where the image manager is further configured to display, based on the ranked list, at least one image data object from the subset of image data objects on a graphical user interface").
Xiong fails to teach rendering, based at least on first data representative of a set shapes in a 3D scene, a set of images depicting the set of shapes from one or more viewpoints in the 3D scene. However, it is known in the art as taught by Collins. Collins teaches a method comprising: ([0007] “a computer-implemented method for generating a dataset having a multiplicity of corresponding images for machine vision learning”) rendering, based at least on first data representative of a set shapes in a 3D scene, ([0029] “A seed model 110 is selected for generating a corresponding multiplicity of 2D images corresponding to an object using a 3D rendering engine 120”, where “The seed model 110 may be any convenient data representation of the object to be recognized” [0044] and “the machine vision learning system may be provided with similarly created datasets for different objects” [0078]) a set of images depicting the set of shapes from one or more viewpoints in the 3D scene; ([0055] “The 3D rendering engine 120 renders a first view 160 of the seed model 110 with the first lighting arrangement 150—this first scene is depicted in FIG. 3a. At least one 2D image 130 is stored of this scene”, where “This is repeated for a multiplicity of scenes by storing 2D images of a plurality of views 160 with a plurality of lighting arrangements 150. This generates a dataset 100 comprising a multiplicity of 2D images 130 corresponding to the seed model 110.” [0058]). Collins is analogous to the claimed invention, as both relate to generating training datasets for machine learning models. Collins further teaches an invention “to provide an improved method for generating a dataset of corresponding images which is suitable for machine vision learning” [0006]. 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Collins into Xiong in order to generate datasets suitable for machine learning models.

The combination of Xiong and Collins fails to teach generating embeddings associated with the viewpoints of the set of shapes as depicted in the set of images. However, this is known in the art as taught by Ramanana. Ramanana teaches generating embeddings associated with the viewpoints of the set of shapes as depicted in the set of images; ([0099] "The generating method first generates a batch of renderings of a given scene from different viewpoints. The generating method then computes the semantic latent embeddings η of the generated images using τ", where "For each camera viewpoint, the generating method comprises computing a barycentric combination of the semantic latent embeddings η that is weighted by the similarity scores" [0099]) Ramanana is analogous to the claimed invention, as both relate to generating images using a machine learning model. Ramanana further teaches that their invention provides "an improved solution for generating a plurality 2D images of a 3D scene" [0006] by addressing the problem where "current solutions do not allow generating a plurality of such realistic 2D images of a given 3D scene that are visually and functionally consistent across different viewpoints" [0005]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ramanana into the combination of Xiong and Collins to generate more realistic images for machine learning models.
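The claim-1 pipeline that the rejection maps across the references (render shapes from viewpoints, embed the renders, score the embeddings with a model, keep a high-scoring subset) can be sketched as follows. This is a hypothetical illustration only: `render_view`, `embed`, `quality_score`, and the threshold are invented stand-ins, not code from the application or the cited references.

```python
# Hypothetical sketch of the claimed render -> embed -> score -> filter pipeline.
# Every component here is a stand-in invented for illustration.
from dataclasses import dataclass

@dataclass
class Shape:
    name: str
    complexity: float  # toy stand-in for real 3D geometry

def render_view(shape: Shape, viewpoint: int) -> list[float]:
    # Stand-in "render": a tiny feature vector per (shape, viewpoint) pair.
    return [shape.complexity, float(viewpoint), shape.complexity * viewpoint]

def embed(image: list[float]) -> list[float]:
    # Stand-in embedding: L1-normalize the feature vector.
    norm = sum(abs(v) for v in image) or 1.0
    return [v / norm for v in image]

def quality_score(embedding: list[float]) -> float:
    # Stand-in for the "one or more machine learning models": a fixed linear scorer.
    weights = [0.5, 0.2, 0.3]
    return sum(w * v for w, v in zip(weights, embedding))

def filter_renders(shapes, viewpoints, threshold):
    # Render every shape from every viewpoint, score each render's embedding,
    # and keep only the renders whose score satisfies the threshold.
    images = [(s, vp, render_view(s, vp)) for s in shapes for vp in viewpoints]
    scored = [(s, vp, quality_score(embed(img))) for s, vp, img in images]
    return [(s.name, vp, score) for s, vp, score in scored if score >= threshold]

shapes = [Shape("chair", 0.9), Shape("lamp", 0.2)]
subset = filter_renders(shapes, viewpoints=[1, 2], threshold=0.25)
```

With these stand-ins, the low-complexity "lamp" at the farther viewpoint falls below the threshold and is filtered out, mirroring the claimed step of determining a subset of images based on the scores.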
Regarding claim 2, the combination of Xiong, Collins, and Ramanana teaches the method of claim 1, wherein the rendering the set of shapes comprises rendering, based at least on the first data, the set of images depicting the set of shapes using at least: first viewpoints that represent the set of shapes from one or more first angles at one or more first positions in the 3D scene with respect to the set of shapes; (Ramanana; [0007] "The method comprises generating a plurality of first 2D images of the 3D scene each having a respective viewpoint by, for each first 2D image, applying the model to the respective viewpoint of the first 2D image", where "The determining of each viewpoint may comprise setting parameters of a camera from which the first 2D image is generated. These parameters may include a camera position, a field of view and/or a pitch." [0045]) and second viewpoints that represent the set of shapes from one or more second angles at one or more second positions in the 3D scene with respect to the set of shapes (Ramanana; [0007] "The method comprises generating a plurality of second 2D images of the 3D scene by applying, for each given viewpoint of the first 2D images, the model to the given viewpoint"). Similarly to claim 1, Ramanana teaches an improved solution for generating images of a 3D scene that account for and are consistent from various angles and positions ([0005] "current solutions do not allow generating a plurality of such realistic 2D images of a given 3D scene that are visually and functionally consistent across different viewpoints"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further incorporate the teachings of Ramanana into the combination of Xiong, Collins, and Ramanana to render images in 3D scenes with consistent viewpoints between them.
Regarding claim 3, the combination of Xiong, Collins, and Ramanana teaches the method of claim 2, wherein: the generating the embeddings associated with the viewpoints of the set of shapes comprises: generating first embeddings associated with the first viewpoints; generating second embeddings associated with the second viewpoints; (Ramanana; [0099] “For each camera viewpoint, the generating method comprises computing a barycentric combination of the semantic latent embeddings η that is weighted by the similarity scores between the considered viewpoint and the other ones.”) and generating third embeddings based at least on the first embeddings and the second embeddings; (Ramanana; [0099] “a new batch of renderings is generated for the different viewpoints, the generation of each viewpoint being conditioned on its respective barycentric latent embedding”, where “For each camera viewpoint, the generating method comprises computing a barycentric combination of the semantic latent embeddings η that is weighted by the similarity scores between the considered viewpoint and the other ones” [0099], and “this may be applied several times” [0100]. Note: the semantic embeddings created from the new renders based on the previous barycentric embeddings are the third embeddings.) 
and the determining the scores corresponding to the one or more quality metrics associated with the images depicting the set of shapes is based at least on the one or more machine learning models processing the third embeddings (Ramanana; [0100] "the similarity between barycentric embeddings of a subsequent generation stage and semantic embeddings of the current generated images for the different viewpoints hits a threshold." Note: The score corresponding to one or more quality metrics is mapped to the similarity score, and the third embeddings are mapped to the "semantic embeddings of the current generated images" [0100], as the first and second embeddings correspond to "barycentric embeddings of a subsequent generation stage" [0100], where embeddings are created for each camera viewpoint [0099]). Ramanana is analogous to the claimed invention, as both relate to generating images using a machine learning model. Ramanana further teaches that the "Key advantages of the machine-learning method and the generating method include… Generate consistent views at low cost: the generating method has the crucial advantage of being efficient in terms of computation and implementation." [0101-0102]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further incorporate the teachings of Ramanana into the combination of Xiong, Collins, and Ramanana to generate multiple viewpoints with low cost and efficient computation.
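The barycentric combination of per-viewpoint embeddings that the claim-3 rejection relies on from Ramanana is, in essence, a similarity-weighted average. A minimal sketch follows; the embeddings and similarity weights are invented for illustration and do not come from Ramanana's actual implementation.

```python
# Hypothetical sketch: combine per-viewpoint embeddings into a single
# barycentric (similarity-weighted average) embedding. Inputs are invented.
def barycentric_combination(embeddings: list[list[float]],
                            similarities: list[float]) -> list[float]:
    # Weighted average across viewpoints; weights are the similarity scores
    # between the considered viewpoint and the other viewpoints.
    total = sum(similarities)
    dim = len(embeddings[0])
    return [sum(s * e[i] for s, e in zip(similarities, embeddings)) / total
            for i in range(dim)]

# Three toy viewpoint embeddings; the first viewpoint is weighted most heavily.
views = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights = [2.0, 1.0, 1.0]
combined = barycentric_combination(views, weights)
```

The combined vector leans toward the most-similar viewpoint's embedding, which is the property the reference uses to condition each viewpoint's next render on its neighbors.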
Regarding claim 4, the combination of Xiong, Collins, and Ramanana teaches the method of claim 1, wherein the determining the subset of images comprises: determining, from the set of images depicting the set of shapes, one or more images depicting one or more shapes that are associated with one or more scores from the scores that satisfy a threshold score; (Xiong; [0074] "group filter 560.1 may use a set of image and object quality filtering criteria similar to those that could be employed by group selector 556", where "the full group set of images may be filtered to exclude outlier images that may not meet an image quality (size, resolution, light/color range/average, etc.) threshold" [0073]. Note: this is interpreted where if the set of images meets the image quality threshold, then it would not be filtered, therefore, determining a subset of images that meet a threshold) and determining the subset of images to include at least the one or more images (Xiong; [0074] "In some embodiments, the group set of images may be further filtered using group filter 560.1 to select a best subset of the group images.").

Regarding claim 5, the combination of Xiong, Collins, and Ramanana teaches the method of claim 4, further comprising: determining a scoring distribution associated with the scores; (Xiong; [0055] "In some embodiments, image ranking may be performed based on a limited number of metrics, such as the image grouping confidence score, or graph-based relationship ranking… to the embeddings generated") and determining the threshold score based at least on the scoring distribution (Xiong; [0049] "One or more group parameters may be used to rank the groups, such that higher values (e.g., counts or relationship ranking values) may place the initial groups in an order and a number of groups may be selected (e.g., top 3 groups, top 10 groups, top 5 groups in different categories, etc.)").
Regarding claim 6, the combination of Xiong, Collins, and Ramanana teaches the method of claim 1, wherein the determining the subset of images comprises: determining, based at least on the scores, an initial subset of images by removing one or more first images from the set of images; (Xiong; [0044] "Data processing may include various filtering steps… to reduce the number of images to be processed at each stage and improve image quality and/or other image characteristics to improve performance of the various machine learning algorithms and resulting models", where "the full group set of images may be filtered to exclude outlier images that may not meet an image quality (size, resolution, light/color range/average, etc.) threshold" [0073]) and determining, based at least on one or more rules indicating one or more second quality metrics, the subset of images by removing one or more second images from the initial subset of images ([0074] "In some embodiments, the group set of images may be further filtered using group filter 560.1 to select a best subset of the group images. For example, group filter 560.1 may use a set of image and object quality filtering criteria similar to those that could be employed by group selector 556, but may include different criteria and thresholds, such as selecting for only full-frontal views with no occlusions").

Regarding claim 9, the combination of Xiong, Collins, and Ramanana teaches the method of claim 1, further comprising: selecting, based at least on the scores, one or more images from the set of images; (Xiong; [0056] "the set of images assigned to the group may be filtered to determine the best representatives of the group using image metrics.") determining one or more updated scores associated with the one or more images; (Xiong; [0056] "In some configurations, multiple ranking methodologies or ranking factors may be related through a combined ranking algorithm, such as a weighted sum or average of metric-based factors… image quality factors, and relationship rankings.") and training the one or more machine learning models using at least the one or more images and the one or more updated scores ([0056] "At block 276, the embedding model may be trained using the good example subset of images determined at block 274", where "A combined ranking score may be used to order the images in the group from highest rank to lowest rank" [0056]).

Regarding claim 10, claim 10 has substantially similar limitations to claim 1, but in a system form. The combination of Xiong, Collins, and Ramanana further teaches a system comprising: one or more processors to: (Xiong; [0007] "One general aspect includes a system that includes: at least one processor").

Regarding claim 11, claim 11 has substantially similar limitations to claim 2 and will therefore be rejected under the same rationale as claim 2.

Regarding claim 12, claim 12 has substantially similar limitations to claim 1 and will therefore be rejected under the same rationale as claim 1.
Regarding claim 13, claim 13 has substantially similar limitations to claim 3, where "first portion of the one or more viewpoints" corresponds to "first viewpoints", and "second portion of the one or more viewpoints" corresponds to "second viewpoints". Therefore, claim 13 will be rejected under the same rationale as claim 3.

Regarding claim 14, claim 14 has substantially similar limitations to claim 4 and will therefore be rejected under the same rationale as claim 4.

Regarding claim 15, claim 15 has substantially similar limitations to claim 6 and will therefore be rejected under the same rationale as claim 6.

Regarding claim 17, claim 17 has substantially similar limitations to claim 9 and will therefore be rejected under the same rationale as claim 9.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Xiong (US 2023/0196832), in view of Collins (US 2020/0005083) and Ramanana et al. (US 2026/0010970), and further in view of Globose Technology Solution ("Optimizing Image Data for Advanced Machine Learning Algorithms", hereinafter Globose).

The combination of Xiong, Collins, and Ramanana teaches the method of claim 6 but fails to teach wherein the one or more rules indicating the one or more second quality metrics include at least one of: a first rule to remove corrupted images; a second rule to remove images associated with large scenes; a third rule to remove images associated with scenes that depict multiple shapes; a fourth rule to remove images that depict ground planes; or a fifth rule to remove images that depict backdrops. However, this is known in the art as taught by Globose. Globose teaches wherein the one or more rules indicating the one or more second quality metrics include at least one of: a first rule to remove corrupted images ([par. 3, lines 1-5] "Once collected, the image data must be preprocessed and cleaned… Cleaning also includes removing corrupt or irrelevant images and dealing with missing or incomplete data.").
Globose is analogous to the claimed invention, as both relate to creating datasets for machine learning models. Globose further teaches that removing corrupted images "is vital for removing any factors that could potentially mislead the training of the machine learning model" [par. 3, lines 7-8]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Globose into the combination of Xiong, Collins, and Ramanana in order to create a cleaner image dataset to train a machine learning model without any misleading factors that would impact its effectiveness.

Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Xiong (US 2023/0196832), in view of Collins (US 2020/0005083) and Ramanana et al. (US 2026/0010970), and further in view of Vu et al. (US 2025/0232129, hereinafter Vu) and Liu et al. (US 2024/0296936, hereinafter Liu).

Regarding claim 8, the combination of Xiong, Collins, and Ramanana teaches the method of claim 1, but fails to teach further comprising: obtaining training data representative of a training set of images and ground truth data representative of ground truth scores associated with the training set of images; generating second embeddings associated with the training set of images; determining, based at least on the one or more machine learning models processing the second embeddings, second scores associated with the training set of images; and updating one or more parameters associated with the one or more machine learning models based at least on the second scores and the ground truth scores. However, this is known in the art, as taught by Vu.
Vu teaches a method ([0022] “Embodiments of the present invention disclose a second computer-implemented method for tuning large language models.”) further comprising: obtaining training data representative of a training set and ground truth data representative of ground truth scores; ([0022] “The second computer-implemented method includes receiving pairs of textual prompts and ground truth labels”, where “to select pairs of input prompts and ground truth labels; the reward functions capture application-specific measurements (e.g., cosine similarity in a transformed space, semantic similarity, domain knowledge based similarity, etc.)” [0027]) generating second embeddings associated with the training set; determining, based at least on the one or more machine learning models processing the second embeddings, second scores associated with the training set; ([0023] “transforming a respective one of the textual prompts to generate an embedding vector of the respective one of the textual prompts; transforming a respective one of the ground truth labels to generate an embedding vector of the respective one of the ground truth labels; and computing a similarity score between the respective one of the textual prompts and the respective one of the ground truth labels”, where “Embodiments of the present invention disclose a second computer-implemented method for tuning large language models” [0022]) and updating based at least on the second scores and the ground truth scores ([0022] “tuning the large language model using the training dataset and reinforcement learning with the one or more predefined reward functions or the one or more reward functions”, where “creating one or more reward functions measuring the similarity between the textual outputs and the ground truth labels” [0022] and “the computer system or server returns the similarity scores as scalar reward values. 
The scalar reward values are used by a reinforcement learning agent in tuning the large language model."). Vu is analogous to the claimed invention, as both relate to creating datasets in order to train machine learning models. Vu further teaches that "In existing techniques of tuning a large language model, a training dataset is manually created and human preferences are specified… Embodiments of the present invention disclose a system and method of selecting pairs of input prompts and ground truth labels for training or tuning a large language model." [0026]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Vu into the combination of Xiong, Collins, and Ramanana to use ground truth in order to tune large language models without the need for human input.

Vu does teach a training set, but not specifically a training set of images. However, this is known in the art as taught by Liu. Liu teaches obtaining training data representative of a training set of images and ground truth data representative of ground truth scores associated with the training set of images; ([0005] "The machine learning based model is trained by receiving one or more training medical images and ground truth labels identifying one or more anatomical objects in the one or more training medical images"). Liu is analogous to the claimed invention, as both relate to machine learning models using labeled datasets to train. Liu further teaches that their invention aims to improve on "conventional approaches [that] have slower inference speeds and higher storage requirements" [0004].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Liu into the combination of Xiong, Collins, Ramanana, and Vu to use a training set of images with its ground truth in order to better train machine learning models such that they are able to infer faster with less storage.

The combination of Xiong, Collins, Ramanana, and Vu fails to explicitly teach updating one or more parameters associated with the one or more machine learning models; however, this is known in the art, as taught by Liu. Liu teaches that it is known in the art of training machine learning models to update their parameters in order to train them ([0100] "In general, parameters of a machine learning model can be adapted by means of training. In particular… unsupervised training, reinforcement learning"). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Liu into the combination of Xiong, Collins, Ramanana, and Vu, as it is known in the art of machine learning models to update their parameters for training purposes.

Regarding claim 16, claim 16 has substantially similar limitations to claim 8 and will therefore be rejected under the same rationale as claim 8.

Claims 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Xiong (US 2023/0196832), in view of Collins (US 2020/0005083) and Ramanana et al. (US 2026/0010970), and further in view of Vu et al. (US 2025/0232129).
Regarding claim 18, the combination of Xiong, Collins, Ramanana, and Vu teaches the system of claim 10, wherein the system is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; (Xiong; [0082] “Group selection engine 586 may include group selection logic, such as an automated rules engine, for processing initial groups 520.17 and returning a selected subset of groups (e.g., selected groups 520.19) to use for generating group classifiers 550.”) a perception system for an autonomous or semi-autonomous machine; (Xiong; [0049] “In some embodiments, group analysis and selection may be executed automatically based on a set of group selection rules, with or without manual intervention or validation by a user.”) a system for performing one or more simulation operations; (Ramanana; [0058] “The real-life room design process allows creating richer, more pleasant environments (for animation, advertising and/or for generating virtual environments, e.g., for simulation)”) a system for performing one or more digital twin operations; (Collins; [0045] “The seed model 110 is loaded into a 3D rendering engine 120, such as POV-Ray or Blender, and rendered under different conditions—preferably, these are selected based upon the real-world recognition conditions.”) a system for performing light transport simulation; (Ramanana; [0037] “the model is able to generate 2D images that take into account the perspective of the 3D scene and its lighting, as well as occlusions between objects… lightning from a window”) a system for performing collaborative content creation for 3D assets; (Ramanana; [0058] “The real-life room design process allows creating richer, more pleasant environments (for animation, advertising and/or for generating virtual environments, e.g., for simulation)”, where “The generating method may be included in a real-life room design (i.e., effective arrangement) process” and “the real-life room design process may comprise populating 
the 3D scene (which may be initially, e.g., partially, empty) representing a room with one or more new objects by modifying the layout of the 3D scene" [0058]) a system for performing one or more deep learning operations; (Ramanana; [0096] "The machine-learning method and the generating method solve this technical problem using a Deep Learning based approach.") a system for performing one or more generative AI operations; (Ramanana; [0034] "The model comprises a scene encoder and a generative image model.") a system for performing operations using one or more large language models (LLMs); (Vu; [Abstract] "a computer system for tuning large language models") a system for generating synthetic data; (Ramanana; [0004] "One task of these applications is the generating of realistic 2D images of the 3D scenes", where "In this context, applications for 3D scene creation are being developed." [0004]) a system incorporating one or more virtual machines (VMs); (Vu; [0057] "In FIG. 5, computing environment 500 contains… virtual machine set 543") or a system implemented at least partially using cloud computing resources. (Ramanana; [0193] "The computer program may alternatively be stored and executed on a server of a cloud computing environment", where "The computer program may comprise instructions executable by a computer" [0193]). As Xiong, Collins, Ramanana, and Vu all teach datasets to train machine learning models, whether by creating training sets or by using the training set to train machine learning models, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement their systems in these types of use cases.

Regarding claim 20, claim 20 has substantially similar limitations to claim 18 and will therefore be rejected under the same rationale as claim 18.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALICIA HA, whose telephone number is (571) 272-3601.
The examiner can normally be reached Mon-Thurs 9:00 AM - 6:00 PM, and Fri 9:00 AM - 1:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611
/ALICIA HA/
Examiner, Art Unit 2611

Prosecution Timeline

Jul 31, 2024
Application Filed
Feb 24, 2026
Non-Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
