Prosecution Insights
Last updated: April 19, 2026
Application No. 18/746,911

CONTEXT-AWARE SYNTHESIS AND PLACEMENT OF OBJECT INSTANCES

Non-Final OA: §102, §103, §DP
Filed
Jun 18, 2024
Examiner
WANG, YUEHAN
Art Unit
2617
Tech Center
2600 — Communications
Assignee
Nvidia Corporation
OA Round
1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 83%, above average (404 granted / 485 resolved; +21.3% vs TC avg)
Interview Lift: +12.9% (moderate), measured over resolved cases with interview
Typical Timeline: 2y 7m avg prosecution; 47 applications currently pending
Career History: 532 total applications across all art units

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 69.6% (+29.6% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Based on career data from 485 resolved cases; percentage deltas are relative to the Tech Center average estimate.

Office Action

Rejections: §102, §103, §DP (nonstatutory double patenting)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claim(s) 1, 5, 8 and 15 is/are rejected on the ground of nonstatutory double patenting as being unpatentable over claim(s) 1, 2, 11, 19 and 25 of U.S. Patent No. 12462453 B2 (reference patent). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims recite essentially the same structure and perform essentially the same function, and are therefore unpatentable for obviousness-type double patenting. The following table illustrates the conflicting claim pairs:

Instant Appl.      1    5       8    15
Reference patent   1    2 & 11  19   25

Claims of the instant application are compared to claims of the reference patent in the following tables.

Instant application, claim 1: A processor, comprising: one or more circuits to use one or more neural networks to add one or more first objects to an image with a pose based, at least in part, on one or more second objects in the image.

Reference patent, claim 1: One or more processors, comprising: circuitry to use one or more neural networks to: identify a set of locations in an image into which to insert an object based, at least in part, on a segmentation map and a location of one or more other objects within the image; identify a shape of the object based, at least in part, on the set of locations; and insert the object into at least one of the set of locations based, at least in part, on the identified shape.
Instant application, claim 5: The processor of claim 1, wherein the one or more neural networks are to generate one or more affine transformations representing one or more first bounding boxes for the one or more first objects.

Reference patent, claim 2: The one or more processors of claim 1, wherein the circuitry is further to: use the one or more neural networks to generate a transformation corresponding to a bounding box associated with at least one of the set of locations; and insert the object into the image based, at least in part, on the transformation.

Reference patent, claim 11: The system of claim 7, wherein the one or more processors are further to identify the at least one of the set of locations through one or more affine transformations, and wherein the one or more affine transformations include at least one of: a translation, scaling, and rotation.

Claims 8 and 15 are rejected on the ground of nonstatutory double patenting for the same reason as claim 1.

Claim(s) 1, 2, 8, 9, 15 and 16 is/are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim(s) 1, 2, 7, 8, 13 and 14 of copending Application No. 16922214 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims recite essentially the same structure and perform essentially the same function, and are therefore unpatentable for obviousness-type double patenting. The following table illustrates the conflicting claim pairs:

Instant Appl.          1    2    8    9    15    16
Reference application  1    2    7    8    13    14

Claims of the instant application are compared to claims of the reference application in the following tables.

Instant application, claim 1: A processor, comprising: one or more circuits to use one or more neural networks to add one or more first objects to an image with a pose based, at least in part, on one or more second objects in the image.

Reference application, claim 1:
A processor, comprising: one or more circuits to use one or more neural networks to add one or more first objects to an image including one or more second objects, wherein one or more poses of the one or more first objects in the image is determined with respect to the one or more second objects.

Instant application, claim 2: The processor of claim 1, wherein the one or more neural networks include one or more variational autoencoders (VAEs) to determine vectors for the second objects and encode the vectors to a latent space to act as a constraint in adding the one or more first objects to the image.

Reference application, claim 2: The processor of claim 1, wherein the one or more neural networks include one or more variational autoencoders (VAEs) to determine features for the first objects and the second objects and encode those features to a latent space to act as a constraint in adding the one or more first objects to the image.

Claims 8, 9, 15 and 16 are rejected on the ground of nonstatutory double patenting for the same reason as claims 1 and 2. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim(s) 1, 3-6, 8, 14, 15, 17 and 18 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Cohen et al. (US 20190196698 A1), referred herein as Cohen. Regarding Claim 1, Cohen teaches a processor, comprising (Cohen [0005] A directed user conversation is processed, e.g., with a natural language processor): one or more circuits to use one or more neural networks to add one or more first objects to an image with a pose based, at least in part on, one or more second objects in the image. (Cohen [0021] The image to be edited is processed by a vision module that can be specific to an object, such as an object to be replaced. For instance, a sky vision module including a neural network trained to identify skies is used to ascertain pixels of a sky in an image when an object to be replaced in the image is identified as a sky, such as for the replace request “Replace the boring sky with a cloudy sky”; [0043] object data 130 (e.g., data representing objects in images, such as pixels of objects to be removed or replaced, masks of objects to be removed or replaced, locations of objects in image to be edited 106; [0047] a move request (to move an object within an image), a duplicate request (to duplicate an object in an image), an add request (to add an object in an image); [0183] processors may be comprised of semiconductor(s) and transistors (e.g., electronic integrated circuits (ICs))). 
Regarding Claim 3, Cohen teaches the processor of claim 1, and further teaches wherein the one or more neural networks are to further add the one or more first objects to the image based, at least in part, on labels associated with pixels of the image (Cohen [0061] vision module 146 has semantic understanding of an image to be edited, and is therefore able to distinguish between objects in the image based on a description of the object in an editing query). Regarding Claim 4, Cohen teaches the processor of claim 1, and further teaches wherein the one or more neural networks are to add the one or more first objects with one or more shapes based, at least in part, on locations in the image identified by the one or more neural networks to which the one or more first objects are to be inserted (Cohen [0116] A mask can describe a shape of an object without including content from the image to be edited. Vision module 146 can generate an object mask for an object, and from the object mask generate a refined mask by dilating the object mask to create a region bounded by a boundary of the object mask and separating a background from a foreground for pixels in the region). Regarding Claim 5, Cohen teaches the processor of claim 1, and further teaches wherein the one or more neural networks are to generate one or more affine transformations representing one or more first bounding boxes for the one or more first objects (Cohen [0047] including a move request (to move an object within an image; [0153] ascertaining the pixels of the image can include generating an object mask for the object, dilating the object mask to create a region bounded by a boundary of the object mask, and generating a refined mask representing the pixels of the image corresponding to the object). 
Regarding Claim 6, Cohen teaches the processor of claim 1, and further teaches wherein the pose of the one or more first objects is identified based, at least in part, on one or more affine transformation matrices applied to one or more second bounding boxes in the image, the one or more second bounding boxes generated by the one or more neural networks (Cohen [0129] Compositing module 152 can process fill material or replacement material in any way to composite it with an image to be edited. For instance, compositing module 152 can extract fill material or replacement material from an image obtained by image search module 150, filter the material (e.g., adjust color, brightness, contrast, apply a filter, and the like), re-size the material (e.g., interpolate between pixels of the material, decimate pixels of the material, or both, to stretch or squash the material), rotate the material, crop the material, composite the material with itself or other fill or replacement material, and the like) Regarding Claim 8, Cohen teaches a system (Cohen Abst: Systems and techniques are described herein for directing a user conversation to obtain an editing query, and removing and replacing objects in an image based on the editing query; [0005] A directed user conversation is processed, e.g., with a natural language processor). The metes and bounds of the rest of the limitations substantially correspond to the claim as set forth in Claim 1; thus they are rejected on similar grounds and rationale as their corresponding limitations. 
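Claims 5, 6, 10 and 14 above turn on affine transformations (translation, scaling, rotation, per reference claim 11) standing in for bounding boxes. As an illustrative sketch only (the function names are invented here; nothing in this code is drawn from the application or the cited art), a 2x3 affine matrix can map a unit box onto a predicted bounding box:

```python
import math

def affine_matrix(tx, ty, sx, sy, theta):
    """Build a 2x3 affine matrix as rotation * scale, plus translation.

    The claimed operations (translation, scaling, rotation) map onto the
    standard decomposition A = [R*S | t]; parameter names are illustrative.
    """
    c, s = math.cos(theta), math.sin(theta)
    return [[sx * c, -sy * s, tx],
            [sx * s,  sy * c, ty]]

def apply_affine(A, pts):
    """Apply the 2x3 affine transform A to a list of (x, y) points."""
    return [(A[0][0] * x + A[0][1] * y + A[0][2],
             A[1][0] * x + A[1][1] * y + A[1][2]) for x, y in pts]

# The unit box stands in for a canonical object footprint; the transform
# "represents" the target bounding box in the sense the claims describe.
unit_box = [(0, 0), (1, 0), (1, 1), (0, 1)]
A = affine_matrix(tx=10, ty=20, sx=4, sy=2, theta=0.0)
print(apply_affine(A, unit_box))
# with theta=0, the unit box becomes a 4x2 box anchored at (10, 20)
```

In other words, a single small parameter vector (tx, ty, sx, sy, theta) fully determines where and at what scale an object lands, which is why the claims can speak of affine transformations "representing" bounding boxes.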
Regarding Claim 14, Cohen teaches the system of claim 8, and further teaches wherein the one or more neural networks are to generate the one or more first objects with a shape based, at least in part, on a location and scale indicated by an affine transformation applied to a region of the image (Cohen [0116] A mask can describe a shape of an object without including content from the image to be edited; [0195] Platform 818 may also serve to abstract scaling of resources to provide a corresponding level of scale to encountered demand for resources 820 that are implemented via platform 818). Regarding Claim 15, Cohen teaches a method (Cohen Abst: Systems and techniques are described herein for directing a user conversation to obtain an editing query, and removing and replacing objects in an image based on the editing query; [0005] A directed user conversation is processed, e.g., with a natural language processor). The metes and bounds of the rest of the limitations substantially correspond to the claim as set forth in Claim 1; thus they are rejected on similar grounds and rationale as their corresponding limitations. Regarding Claim 17, Cohen teaches the method of claim 15, and further teaches further comprising adding the one or more first objects to the image based, at least in part, on labels of a semantic representation of the image (Cohen [0061] vision module 146 has semantic understanding of an image to be edited, and is therefore able to distinguish between objects in the image based on a description of the object in an editing query). 
Regarding Claim 18, Cohen teaches the method of claim 15, and further teaches further comprising identifying the pose of the one or more first objects using one or more affine transformations applied to one or more bounding boxes in the image, the one or more bounding boxes generated by the one or more neural networks, and generating the one or more first objects based, at least in part, on the one or more affine transformations (Cohen [0116] A mask can describe a shape of an object without including content from the image to be edited. Vision module 146 can generate an object mask for an object, and from the object mask generate a refined mask by dilating the object mask to create a region bounded by a boundary of the object mask and separating a background from a foreground for pixels in the region; [0089] indicators of regions in which harmonization is done, and the like, used by or calculated by harmonizing module 154 are stored in storage 126, such as in image data 138, and made available to modules of image enhancement application 120; [0129] Compositing module 152 can process fill material or replacement material in any way to composite it with an image to be edited. For instance, compositing module 152 can extract fill material or replacement material from an image obtained by image search module 150, filter the material (e.g., adjust color, brightness, contrast, apply a filter, and the like), re-size the material (e.g., interpolate between pixels of the material, decimate pixels of the material, or both, to stretch or squash the material), rotate the material, crop the material, composite the material with itself or other fill or replacement material, and the like).

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 2, 9, 12, 13 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cohen et al. (US 20190196698 A1), referred herein as Cohen in view of Madani et al. (US 20190197358 A1), referred herein as Madani and Muller et al. (US 20200035016 A1), referred herein as Muller. Regarding Claim 2, Cohen teaches the processor of claim 1, but does not teach all the claimed limitation therein. However, Cohen in view of Madani and Muller teaches wherein the one or more neural networks include one or more variational autoencoders (VAEs) to determine vectors for the second objects and encode the vectors to a latent space to act as a constraint in adding the one or more first objects to the image (Madani [0027] VAEs attempt to find the variational lower bound of the probability density function with a loss function that consists of a reconstruction error and regularizer; [0029] FIG. 1 is an example block diagram of a generative adversarial network (GAN). As shown in FIG. 
1, the generator, G, takes a vector z, sampled from random Gaussian noise or conditioned with structured input; Cohen [0069] Hence, similarity scores determined from word vectors can be used to determine an object to be replaced or removed from an image; Muller [0022] perform a mapping of multi-dimensional input vector 130/230 to a latent space as an invertible compound function of the form: ĥ = h_L ∘ … ∘ h_2 ∘ h_1, where each h_i applies a piecewise-polynomial bijective transformation or piecewise-polynomial warp. It is noted that for the purposes of the present inventive principles, h is stable invertible with computationally tractable Jacobians. That constraint enables exact and fast inference of latent variables and consequently exact and fast probability density estimation). Madani discloses a machine learning training model that trains an image generator of a generative adversarial network (GAN) to generate medical images approximating actual medical images, which is analogous to the present patent application. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to have modified Cohen to incorporate the teachings of Madani, and to apply the discriminator and VAE of the machine learning system to the methods, systems, and techniques for directing a user conversation to obtain an editing query, and removing and replacing objects in an image based on the editing query. Doing so would provide an architecture that can be trained using both labeled and unlabeled image data that are equally applicable regardless of the particular type of image data being operated on in the method and system for joint synthesis and placement of objects in scenes.
Muller discloses a system memory storing a software code including multiple artificial neural networks (ANNs), which is analogous to the present patent application. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to have modified Cohen to incorporate the teachings of Muller, and to apply the piecewise-polynomial transformation for vector data, as taught by Muller, to the methods, systems, and techniques for classifying an object-of-interest using an artificial neural network. Doing so would improve the performance of rendering applications using warp-predicting neural networks having piecewise-polynomial coupling layers in the method and system of a context-aware synthesis and placement of object instances.

Regarding Claim 9, Cohen teaches the system of claim 8. The metes and bounds of the rest of the limitations substantially correspond to the claim as set forth in Claim 2; thus they are rejected on similar grounds and rationale as their corresponding limitations.

Regarding Claim 12, Cohen teaches the system of claim 8, but does not teach all the claimed limitations therein. However, Cohen in view of Madani and Muller teaches wherein the one or more neural networks includes one or more first variational autoencoders (VAEs) to encode vectors of the second objects to a latent space and one or more second VAEs to generate the one or more first objects based, at least in part, on the vectors (Madani [0027] VAEs attempt to find the variational lower bound of the probability density function with a loss function that consists of a reconstruction error and regularizer; [0029] FIG. 1 is an example block diagram of a generative adversarial network (GAN). As shown in FIG.
1, the generator, G, takes a vector z, sampled from random Gaussian noise or conditioned with structured input; Muller [0022] perform a mapping of multi-dimensional input vector 130/230 to a latent space as an invertible compound function of the form: ĥ = h_L ∘ … ∘ h_2 ∘ h_1, where each h_i applies a piecewise-polynomial bijective transformation or piecewise-polynomial warp. It is noted that for the purposes of the present inventive principles, h is stable invertible with computationally tractable Jacobians. That constraint enables exact and fast inference of latent variables and consequently exact and fast probability density estimation). The same motivation as claim 2 applies here. Regarding Claim 13, Cohen teaches the system of claim 8, but does not teach all the claimed limitations therein. However, Cohen in view of Madani teaches wherein the one or more neural networks includes one or more variational autoencoders (VAEs) to receive a semantic representation of the image and a random input and to generate a latent vector using the semantic representation and the random input (Madani [0027] VAEs attempt to find the variational lower bound of the probability density function with a loss function that consists of a reconstruction error and regularizer; [0029] FIG. 1 is an example block diagram of a generative adversarial network (GAN). As shown in FIG. 1, the generator, G, takes a vector z, sampled from random Gaussian noise or conditioned with structured input; Cohen [0061] vision module 146 has semantic understanding of an image to be edited, and is therefore able to distinguish between objects in the image based on a description of the object in an editing query; Muller [0022] perform a mapping of multi-dimensional input vector 130/230 to a latent space as an invertible compound function of the form: ĥ = h_L ∘ … ∘ h_2 ∘ h_1, where each h_i applies a piecewise-polynomial bijective transformation or piecewise-polynomial warp.
It is noted that for the purposes of the present inventive principles, h is stable invertible with computationally tractable Jacobians. That constraint enables exact and fast inference of latent variables and consequently exact and fast probability density estimation). The same motivation as claim 2 applies here. Regarding Claim 16, Cohen teaches the method of claim 15. The metes and bounds of the rest of the limitations substantially correspond to the claim as set forth in Claim 2; thus they are rejected on similar grounds and rationale as their corresponding limitations. Claim(s) 7, 10, 11 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cohen et al. (US 20190196698 A1), referred herein as Cohen in view of Madani et al. (US 20190197358 A1), referred herein as Madani. Regarding Claim 7, Cohen teaches the processor of claim 1, but does not teach all the claimed limitation therein. However, Cohen in view of Madani teaches wherein the one or more neural networks includes one or more variational autoencoders (VAEs) comprising one or more decoder portions to generate one or more shapes of the one or more first objects that fit into one or more first bounding boxes represented by affine transformations (Madani [0027] VAEs attempt to find the variational lower bound of the probability density function with a loss function that consists of a reconstruction error and regularizer; [0029] FIG. 1 is an example block diagram of a generative adversarial network (GAN). As shown in FIG. 1, the generator, G, takes a vector z, sampled from random Gaussian noise or conditioned with structured input; [0032] standard augmentation methods that produce new examples of data merely involve varying lighting, field of view, and spatial rigid transformations; Cohen [0116] A mask can describe a shape of an object without including content from the image to be edited. 
Vision module 146 can generate an object mask for an object, and from the object mask generate a refined mask by dilating the object mask to create a region bounded by a boundary of the object mask and separating a background from a foreground for pixels in the region; [0089] indicators of regions in which harmonization is done, and the like, used by or calculated by harmonizing module 154 are stored in storage 126, such as in image data 138, and made available to modules of image enhancement application 120). The same motivation as claim 2 applies here. Regarding Claim 10, Cohen teaches the system of claim 8, but does not teach all the claimed limitation therein. However, Cohen in view of Madani teaches wherein the one or more neural networks include one or more spatial transformer networks (STNs) to generate one or more affine transformations representing one or more bounding boxes for the one or more second objects (Madani [0032] standard augmentation methods that produce new examples of data merely involve varying lighting, field of view, and spatial rigid transformations; Cohen [0047] including a move request (to move an object within an image; [0153] ascertaining the pixels of the image can include generating an object mask for the object, dilating the object mask to create a region bounded by a boundary of the object mask, and generating a refined mask representing the pixels of the image corresponding to the object). The same motivation as claim 2 applies here. Regarding Claim 11, Cohen teaches the system of claim 8, but does not teach all the claimed limitation therein. 
However, Cohen in view of Madani teaches wherein the one or more neural networks includes one or more spatial transformer networks (STNs) to convert one or more vectors generated by one or more variational autoencoders (VAEs) of the one or more neural networks into one or more affine transformations (Madani [0027] VAEs attempt to find the variational lower bound of the probability density function with a loss function that consists of a reconstruction error and regularizer; [0029] FIG. 1 is an example block diagram of a generative adversarial network (GAN). As shown in FIG. 1, the generator, G, takes a vector z, sampled from random Gaussian noise or conditioned with structured input; Cohen [0047] including a move request (to move an object within an image; [0116] A mask can describe a shape of an object without including content from the image to be edited. Vision module 146 can generate an object mask for an object, and from the object mask generate a refined mask by dilating the object mask to create a region bounded by a boundary of the object mask and separating a background from a foreground for pixels in the region). The same motivation as claim 2 applies here. Regarding Claim 19, Cohen teaches the method of claim 15, but does not teach all the claimed limitation therein. However, Cohen in view of Madani teaches further comprising using one or more spatial transformer networks (STNs) of the one or more neural networks to convert one or more vectors generated by one or more variational autoencoders (VAEs) of the one or more neural networks into one or more affine transformations (Madani [0027] VAEs attempt to find the variational lower bound of the probability density function with a loss function that consists of a reconstruction error and regularizer; [0029] FIG. 1 is an example block diagram of a generative adversarial network (GAN). As shown in FIG. 
1, the generator, G, takes a vector z, sampled from random Gaussian noise or conditioned with structured input; Cohen [0047] including a move request (to move an object within an image; [0116] A mask can describe a shape of an object without including content from the image to be edited. Vision module 146 can generate an object mask for an object, and from the object mask generate a refined mask by dilating the object mask to create a region bounded by a boundary of the object mask and separating a background from a foreground for pixels in the region). The same motivation as claim 2 applies here. Claim(s) 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cohen et al. (US 20190196698 A1), referred herein as Cohen in view of Muller et al. (US 20200035016 A1), referred herein as Muller. Regarding Claim 20, Cohen teaches the method of claim 15, but does not teach all the claimed limitation therein. However, Cohen in view of Muller teaches further comprising using the one or more neural networks to add the one or more first objects to the image based, at least in part, on one or more bounding boxes generated by the one or more neural networks at the image and a shape of the one or more first objects that is identified by transforming one or more vectors in latent space (Cohen [0047] including a move request (to move an object within an image; [0153] ascertaining the pixels of the image can include generating an object mask for the object, dilating the object mask to create a region bounded by a boundary of the object mask, and generating a refined mask representing the pixels of the image corresponding to the object; [0069] Hence, similarity scores determined from word vectors can be used to determine an object to be replaced or removed from an image; [0116] A mask can describe a shape of an object without including content from the image to be edited; [0195] Platform 818 may also serve to abstract scaling of resources to provide a corresponding level 
of scale to encountered demand for resources 820 that are implemented via platform 818; Muller [0022] perform a mapping of multi-dimensional input vector 130/230 to a latent space as an invertible compound function of the form: ĥ = h_L ∘ … ∘ h_2 ∘ h_1, where each h_i applies a piecewise-polynomial bijective transformation or piecewise-polynomial warp. It is noted that for the purposes of the present inventive principles, h is stable invertible with computationally tractable Jacobians. That constraint enables exact and fast inference of latent variables and consequently exact and fast probability density estimation). The same motivation as claim 2 applies here.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang, whose telephone number is (571) 270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Samantha (YUEHAN) WANG/
Primary Examiner, Art Unit 2617

Prosecution Timeline

Jun 18, 2024
Application Filed
Jan 19, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597178: VECTOR OBJECT PATH SEGMENT EDITING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12597506: ENDOSCOPIC EXAMINATION SUPPORT APPARATUS, ENDOSCOPIC EXAMINATION SUPPORT METHOD, AND RECORDING MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586286: DIFFERENTIABLE REAL-TIME RADIANCE FIELD RENDERING FOR LARGE SCALE VIEW SYNTHESIS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586261: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12567182: USING AUGMENTED REALITY TO VISUALIZE OPTIMAL WATER SENSOR PLACEMENT (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 96% (+12.9%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 485 resolved cases by this examiner. Grant probability derived from career allow rate.
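The projection figures above are simple arithmetic on the examiner's record. The sketch below assumes, as the displayed numbers suggest, that the interview lift is additive in percentage points (an assumption about the dashboard's methodology, not a stated fact):

```python
# Examiner's career record, as reported on this page.
granted, resolved = 404, 485
allow_rate = 100 * granted / resolved        # ~83.3%, shown as 83%

# Interview lift in percentage points, per the dashboard.
interview_lift = 12.9
with_interview = allow_rate + interview_lift  # ~96.2%, shown as 96%

print(round(allow_rate), round(with_interview))
```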
