Prosecution Insights
Last updated: April 19, 2026
Application No. 18/549,617

Machine-Learned Models for Implicit Object Representation

Final Rejection §103
Filed: Sep 08, 2023
Examiner: TUNG, KEE M
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 8% (At Risk)
OA Rounds: 3-4
To Grant: 3y 0m
With Interview: 18%

Examiner Intelligence

Grants only 8% of cases
Career Allow Rate: 8% (15 granted / 189 resolved; -54.1% vs TC avg)
Interview Lift: +10.6% (moderate) among resolved cases with an interview
Typical Timeline: 3y 0m average prosecution; 12 applications currently pending
Career History: 201 total applications across all art units

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Tech Center average is an estimate • Based on career data from 189 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of Claims

Claims 1-22 are currently pending in this application. Claims 21-22 have been added.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on January 26, 2026 is hereby acknowledged. All references have been considered by the examiner. Initialed copies of the PTO-1449 are included in this correspondence.

Response to Amendments

The applicant amended independent claims 1, 12 and 16 to include features similar to "wherein the first implicit segment representation comprises a plurality of signed distance functions".

Response to Arguments

Applicant's arguments filed on November 10, 2025 have been fully considered but they are not persuasive.

R1. The applicant argued on p. 11 para. 2 that "The paragraph cited in Hilliges simply explains a definition of a signed distance function. Nowhere in Hilliges does it teach or suggest processing a latent code and a plurality of spatial query points to obtain an implicit segment representation, where the implicit segment representation includes a plurality of signed distance functions. Nor is the deficiency cured by any of the other cited references." The examiner respectfully disagrees. Bhatnagar teaches that "The key distinction between IP-Net and previous implicit approaches is that it classifies points as belonging to 3 different regions: 0-inside the body, 1-between body and clothing and 2-outside. This allows us to recover two surfaces (inner Sin and outer So), see Fig. 2 and 3." (Bhatnagar: sec. 3.2 para. 5 L.1-4). Therefore, Bhatnagar teaches an implicit object representation obtained by classifying points into three regions: 0 - inside the body, 1 - between body and clothing, and 2 - outside the clothing. Bhatnagar does not explicitly teach signed distance functions. However, Hilliges teaches that "Each voxel may store a numerical value of a truncated signed distance function which may be zero at a surface represented by the model, positive outside objects represented by the model and negative inside objects represented by the model, where the magnitude of the numerical value is related to depth from the closest surface represented by the model." (Hilliges: [0053] L.11-17). Therefore, region 0 of Bhatnagar corresponds to a signed distance that is negative inside objects in Hilliges; region 1 of Bhatnagar corresponds to zero at a surface represented by the model in Hilliges; and region 2 of Bhatnagar corresponds to a signed distance that is positive outside objects represented by the model in Hilliges. Hence, the different regions of Bhatnagar correspond to the signed distance function that Hilliges teaches, and the combined teaching of Bhatnagar and Hilliges teaches the concerned features.

R2. The features of new claim 21 are disclosed by the combined teaching of Bhatnagar and Hilliges. For details, please see the rejection of the claim below.

R3. The features of new claim 22 are disclosed with the additional reference of Li (2020/0175757; IDS).
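To make the region-to-sign correspondence discussed in R1 above concrete, here is a minimal illustrative sketch in Python; the function name, the eps band used to stand in for "at the surface", and the sample values are hypothetical and are not taken from Bhatnagar, Hilliges, or the application.

```python
import numpy as np

def tsdf_to_region(tsdf: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Map truncated signed-distance values (Hilliges convention: negative inside,
    zero at the surface, positive outside) onto Bhatnagar-style region labels
    (0 = inside the body, 1 = between body and clothing, 2 = outside).
    The eps band standing in for "at the surface" is an assumption for illustration.
    """
    labels = np.full(tsdf.shape, 2, dtype=np.int64)   # default: positive distance, outside
    labels[np.abs(tsdf) <= eps] = 1                    # near the zero level set / surface
    labels[tsdf < -eps] = 0                            # negative distance: inside the object
    return labels

# One point inside, one on the surface, one outside.
print(tsdf_to_region(np.array([-0.12, 0.0, 0.35])))    # -> [0 1 2]
```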
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9-12 and 14-21 are rejected under 35 U.S.C. 103 as being unpatentable over Bhatnagar et al. ("Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction", Computer Vision - ECCV 2020, 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II, p. 311-329; IDS) in view of Hilliges et al. (2014/0184749).

Regarding claim 1, Bhatnagar teaches a computer-implemented method for training a machine-learned model for implicit representation of an object (e.g., "In this work, we present methodology that combines detail-rich implicit functions and parametric representations in order to reconstruct 3D models of people that remain controllable and accurate even in the presence of clothing." Bhatnagar: Abstract L.6-9. "We introduce IP-Net, a network to generate detailed 3D reconstruction from an unordered sparse point cloud. IP-Net can additionally infer body shape under clothing and the body parts of the SMPL model. Training IP-Net requires supervision on three fronts, i) an outer dressed surface occupancy - directly derived from 3D scans, ii) an inner body surface - we supervise with an optimization based body shape under clothing registration approach, and iii) correspondences to the SMPL model - obtained by registering SMPL to scans using custom optimization." Bhatnagar: sec. 3 Method para. 1. "Learning implicit functions to model humans has been shown to be powerful but the resulting representations are not amenable to control or reposing which are essential for both animation and inference in computer vision. We have presented methodology to combine expressive implicit function representations and parametric body modelling in order to produce 3D reconstructions of humans that remain controllable even in the presence of clothing." Bhatnagar: sec. 5 Conclusions para. 1), comprising:
obtaining, by a computing system comprising one or more computing devices, a latent code descriptive of a shape of the object comprising a first object segment (e.g., Bhatnagar: sec. 3.2, IP-Net: Overview paras. 1 and 2, IP-Net: Feature Encoding, and Fig. 3 [reproduced as figures in the original action]. It is obvious that the encoder output is a latent variable of the input shape, and it is referred to as a feature vector or code. It can be seen from Fig. 3 that the body includes the head, arms and legs. The first object segment can be any portion of the body, for example the torso);

determining, by the computing system, a plurality of spatial query points within a three-dimensional space that includes the object (e.g., Bhatnagar: sec. 3.2, IP-Net: Overview para. 1 [reproduced in the original action]);

processing, by the computing system, the latent code and each of the plurality of spatial query points with a first segment representation portion of a machine-learned implicit object representation model to obtain a first implicit segment representation for the first object segment (e.g., Bhatnagar: sec. 3.2, IP-Net: Overview paras. 1-4 and Fig. 3 [reproduced in the original action]), wherein the first implicit segment representation comprises a plurality of signed distance functions (e.g., "The key distinction between IP-Net and previous implicit approaches is that it classifies points as belonging to 3 different regions: 0-inside the body, 1-between body and clothing and 2-outside. This allows us to recover two surfaces (inner Sin and outer So), see Fig. 2 and 3." Bhatnagar: sec. 3.2 para. 5 L.1-4. See 1_1 below);

determining, by the computing system based at least in part on the first implicit segment representation (e.g., Bhatnagar: sec. 3.2, IP-Net: Overview para. 3, IP-Net: Part classification, and Fig. 3 [reproduced in the original action]), an implicit object representation of the object and semantic data indicative of one or more surfaces of the object (e.g., "Given sparse 3D point clouds sampled on the surface of a dressed person, we use an Implicit Part Network (IP-Net) to jointly predict the outer 3D surface of the dressed person, the inner body surface, and the semantic correspondences to a parametric body model." Bhatnagar: Abstract L.10-13; Bhatnagar: sec. 5 Conclusions para. 2 [reproduced in the original action]);

evaluating, by the computing system, a loss function that evaluates a difference between the implicit object representation and ground truth data associated with the object and a difference between the semantic data and the ground truth data associated with the object (e.g., Bhatnagar: sec. 3.2, IP-Net: Overview para. 6, IP-Net: Losses; and Bhatnagar: sec. 4.1 Dataset paras. 1 and 2 [reproduced in the original action]); and
adjusting, by the computing system, one or more parameters of the machine-learned implicit object representation model based at least in part on the loss function (e.g., "In this work, we present methodology that combines detail-rich implicit functions and parametric representations in order to reconstruct 3D models of people that remain controllable and accurate even in the presence of clothing." Bhatnagar: Abstract L.6-9. "Our key insight is that since the inner surface (body) can be well approximated by a parametric body model (SMPL), and the predicted body parts constrain the space of possible correspondences, fitting SMPL to the predicted inner surface is very robust. Starting from SMPL fitted to the inner surface, we register it to the outer surface (under an additional displacement model, SMPL+D [25,6]), which in turn allows us to re-pose and re-shape the implicitly reconstructed outer surface." Bhatnagar: sec. 1 Introduction para. 3 L.12-18. "Parametric models allow control over the surface and never miss body parts, but feed-forward prediction is hard, and reconstructions lack detail. Learning the implicit functions representing the surface directly is powerful because the output is continuous, details can be preserved better, and complex topologies can be represented. However, the output is not controllable, and can not guarantee that all body parts are reconstructed. Naive fitting of a body model to a reconstructed implicit surface often gets trapped into local minima when the poses are difficult or clothing occludes the body (see Fig. 4). These observations motivate the design of our hybrid method, which retains the benefits of both representations: i) control, ii) detail, iii) alignment with the input point clouds." Bhatnagar: sec. 2.3 Summary: Implicit vs Parametric Modelling and Fig. 4. "We propose the first approach to combine implicit reconstruction with parametric modelling which lifts the details from the implicit reconstruction onto the SMPL+D model [6,25] to obtain an editable surface. We describe our registration using IP-Net predictions next. We use SMPL to denote the parametric model constrained to undressed shapes, and SMPL+D (SMPL plus displacements) to represent details like clothing and hair." Bhatnagar: sec. 3.3 Registering SMPL to IP-Net Predictions para. 1 L.3-8).

While Bhatnagar does not explicitly teach, Hilliges teaches:

(1_1) wherein the first implicit segment representation comprises a plurality of signed distance functions (e.g., "Each voxel may store a numerical value of a truncated signed distance function which may be zero at a surface represented by the model, positive outside objects represented by the model and negative inside objects represented by the model, where the magnitude of the numerical value is related to depth from the closest surface represented by the model." Hilliges: [0053] L.11-17. Therefore, region 0 of Bhatnagar corresponds to a signed distance that is negative inside objects in Hilliges; region 1 of Bhatnagar corresponds to zero at a surface represented by the model in Hilliges; and region 2 of Bhatnagar corresponds to a signed distance that is positive outside objects represented by the model in Hilliges. Hence, the different regions of Bhatnagar correspond to the signed distance function that Hilliges teaches).

It would have been obvious to a person of ordinary skill in the art at the time of filing the claimed invention to combine the teaching of Hilliges into the teaching of Bhatnagar so that the (truncated) signed distance function can be used to determine the position of a voxel on the inside, outside or at the surface of an object.
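For orientation only, the claim elements mapped above (a latent code and spatial query points processed by a per-segment portion of a model, a loss evaluated against ground truth, and parameters adjusted from that loss) can be sketched as below. Every module name, dimension, and tensor is a hypothetical stand-in, and PyTorch is used purely for illustration; this is not the application's or the references' implementation.

```python
import torch
import torch.nn as nn

class SegmentSDFHead(nn.Module):
    """Hypothetical 'segment representation portion': (latent code, query point) -> signed distance."""

    def __init__(self, latent_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                          # one signed-distance value per query point
        )

    def forward(self, latent: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # latent: (latent_dim,) shape code; points: (N, 3) spatial query points in the volume.
        z = latent.unsqueeze(0).expand(points.shape[0], -1)
        return self.mlp(torch.cat([z, points], dim=-1)).squeeze(-1)   # (N,) signed distances

# One head per object segment (e.g., torso and arm); all values below are illustrative.
heads = nn.ModuleDict({"torso": SegmentSDFHead(), "arm": SegmentSDFHead()})
optimizer = torch.optim.Adam(heads.parameters(), lr=1e-4)

latent = torch.randn(128)                   # latent code descriptive of the object's shape
queries = torch.rand(1024, 3) * 2 - 1       # spatial query points inside a [-1, 1]^3 volume
gt_sdf = torch.randn(1024)                  # stand-in ground-truth signed distances

# Evaluate a loss against the ground truth and adjust the model parameters based on it.
pred = {name: head(latent, queries) for name, head in heads.items()}
loss = sum(torch.nn.functional.l1_loss(p, gt_sdf) for p in pred.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```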
Regarding claim 2, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein the method further comprises: extracting, by the computing system from the implicit object representation, a three-dimensional mesh representation of the object comprising a plurality of polygons (e.g., "IP-Net: Surface generation. We use marching cubes [28] on our predicted occupancies to generate a triangulated mesh surface." Bhatnagar: sec. 3.2 para. 7, IP-Net: Surface generation).

Regarding claim 3, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 2, wherein the method further comprises shading, by the computing system, the plurality of polygons based at least in part on the semantic data (e.g., "Given sparse 3D point clouds sampled on the surface of a dressed person, we use an Implicit Part Network (IP-Net) to jointly predict the outer 3D surface of the dressed person, the inner body surface, and the semantic correspondences to a parametric body model." Bhatnagar: Abstract L.10-13).
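As an aside on the marching-cubes step relied on for claim 2, below is a generic sketch of extracting a triangulated mesh from a predicted occupancy grid. The synthetic sphere grid and the use of scikit-image are illustrative assumptions; this is not the IP-Net pipeline.

```python
import numpy as np
from skimage import measure   # pip install scikit-image

# Synthetic stand-in for a predicted occupancy grid: 1 inside a sphere, 0 outside.
res = 64
grid = np.linspace(-1, 1, res)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
occupancy = (x**2 + y**2 + z**2 < 0.5**2).astype(np.float32)

# Marching cubes at the 0.5 iso-level yields vertices and triangular faces.
verts, faces, normals, values = measure.marching_cubes(occupancy, level=0.5)
print(verts.shape, faces.shape)   # (N, 3) vertex coordinates and (M, 3) triangle indices
```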
Regarding claim 4, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein the latent code comprises a plurality of shape parameters indicative of a shape of the object and a plurality of pose parameters indicative of a pose of the object (e.g., Bhatnagar: sec. 3.3 Registering SMPL to IP-Net Predictions [reproduced in the original action]).

Regarding claim 5, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein: the object comprises a human body (e.g., the human body in Fig. 3 of Bhatnagar); and the first object segment comprises one or more arm segments; a head segment; a body segment comprising a portion of the human body; a full-body segment comprising a human body; a torso segment; a face segment; or one or more leg segments (e.g., "For SMPL part correspondences, we manually define 14 parts (left/right forearm, left/right mid-arm, left/right upper-arm, left/right upper leg, left/right mid leg, left/right foot, torso and head) on SMPL mesh and use the fact that our body mesh B, is a template with SMPL-topology registered to the scan; this automatically annotates B with the part labels." Bhatnagar: sec. 4.1 para. 1 L.9-13 and Fig. 3).

Regarding claim 6, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein determining the implicit object representation of the object comprises: processing, by the computing system, at least the first implicit segment representation with a fusing portion of the machine-learned implicit object representation model to obtain the implicit object representation and the semantic data indicative of the one or more surfaces of the object (e.g., "In this work, we present methodology that combines detail-rich implicit functions and parametric representations in order to reconstruct 3D models of people that remain controllable and accurate even in the presence of clothing. Given sparse 3D point clouds sampled on the surface of a dressed person, we use an Implicit Part Network (IP-Net) to jointly predict the outer 3D surface of the dressed person, the inner body surface, and the semantic correspondences to a parametric body model. We subsequently use correspondences to fit the body model to our inner surface and then non-rigidly deform it (under a parametric body + displacement model) to the outer surface in order to capture garment, face and hair detail." Bhatnagar: Abstract L.6-16).

Regarding claim 7, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein: prior to processing the latent code and each of the plurality of spatial query points, the method comprises respectively determining, by the computing system based at least in part on the plurality of spatial query points, one or more localized point sets for the first object segment, wherein each of the one or more localized point sets comprises a plurality of localized query points (e.g., [excerpt reproduced in the original action]); and wherein processing the latent code and each of the plurality of spatial query points with the one or more segment representation portions comprises, for the first object segment, processing, by the computing system, the latent code and a respective localized point set with a respective segment representation portion of the machine-learned implicit object representation model to obtain an implicit segment representation for a respective object segment (e.g., Bhatnagar: sec. 4.1 Dataset [reproduced in the original action]).

Regarding claim 9, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein the ground truth data comprises at least one of: point cloud scanning data of the object (e.g., "We train IP-Net on a dataset of 700 scans [2,3] and test on held out 50 scans [1]. We normalize our scans to a bounding box of size 1.6 m. To train IP-Net we need paired data of sparse point clouds (input) and the corresponding outer surface, inner surface and correspondence to SMPL model (output). We generate the sparse point clouds by randomly sampling 5k points on our scans, which we voxelize into a grid of size 128×128×128 for our input. We use the normalized scans directly as our ground truth dressed meshes and use our method for body shape registration under scan to get the corresponding body mesh B." Bhatnagar: sec. 4.1 Dataset para. 1 L.1-8); or a three-dimensional mesh representation of the object (e.g., "We use the normalized scans directly as our ground truth dressed meshes and use our method for body shape registration under scan to get the corresponding body mesh B." Bhatnagar: sec. 4.1 Dataset para. 1 L.6-8).
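The dataset preparation quoted for claim 9 (sampling 5k points on a scan and voxelizing them into a 128x128x128 grid) can be pictured with a small stand-alone sketch; the random point cloud and the assumption that coordinates are normalized to the unit cube are hypothetical.

```python
import numpy as np

def voxelize(points: np.ndarray, res: int = 128) -> np.ndarray:
    """Scatter a sparse point cloud (assumed normalized to [0, 1]^3) into a binary occupancy grid."""
    grid = np.zeros((res, res, res), dtype=np.float32)
    idx = np.clip((points * res).astype(int), 0, res - 1)   # map coordinates to voxel indices
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

cloud = np.random.rand(5000, 3)          # stand-in for 5k points sampled on a scan
occupancy_input = voxelize(cloud)        # 128 x 128 x 128 grid fed to an encoder
print(occupancy_input.shape, int(occupancy_input.sum()))
```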
Regarding claim 10, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein the implicit object representation comprises one or more signed distance functions (e.g., "The key distinction between IP-Net and previous implicit approaches is that it classifies points as belonging to 3 different regions: 0-inside the body, 1-between body and clothing and 2-outside. This allows us to recover two surfaces (inner Sin and outer So), see Fig. 2 and 3." Bhatnagar: sec. 3.2 para. 5 L.1-4. "Each voxel may store a numerical value of a truncated signed distance function which may be zero at a surface represented by the model, positive outside objects represented by the model and negative inside objects represented by the model, where the magnitude of the numerical value is related to depth from the closest surface represented by the model." Hilliges: [0053] L.11-17. Therefore, the signed distance can be used to determine if a voxel is inside, outside or at the surface of an object).

Regarding claim 11, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein the semantic data comprises a plurality of semantic surface coordinates respectively associated with the plurality of spatial query points, wherein each of the plurality of semantic surface coordinates is indicative of a surface of a three-dimensional mesh representation of the object nearest to a respective spatial query point (e.g., Bhatnagar: sec. 3.3 Registering SMPL to IP-Net Predictions [reproduced in the original action]).
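One generic way to picture the "nearest surface" semantics recited in claim 11 is a nearest-neighbor lookup of query points against labelled mesh vertices, sketched below with a KD-tree. The synthetic vertices, labels, and points are made up, and this is not the registration procedure of Bhatnagar sec. 3.3.

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic labelled surface: 500 mesh vertices, each tagged with a part label (0..13).
surface_verts = np.random.rand(500, 3)
part_labels = np.random.randint(0, 14, size=500)

tree = cKDTree(surface_verts)
query_points = np.random.rand(1024, 3)            # spatial query points in the volume

dist, idx = tree.query(query_points)              # index of the nearest surface vertex
query_part = part_labels[idx]                     # semantic label of the closest surface
query_nearest = surface_verts[idx]                # coordinates on the nearest surface
print(query_part.shape, query_nearest.shape)      # (1024,), (1024, 3)
```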
Regarding claims 12 and 14-15, the claims are system claims corresponding to method claims 1, 9 and 5 respectively. The claims are similar in scope to claims 1, 9 and 5 respectively and they are rejected under a similar rationale. Bhatnagar teaches that "To regress SMPL+D parameters (Option c), we implement a feed forward network that uses a similar encoder as IP-Net, but instead of predicting occupancy and part labels, produces SMPL+D parameters." (Bhatnagar: sec. 4.3 Comparison to Baselines para. 1 L.9-11). The feed forward network is implemented with a computer system.

Regarding claim 16, the claim is a non-transitory computer readable media claim corresponding to the combination of method claims 1 and 2. The scope of claim 16 is similar to the combination of claims 1 and 2 and it is rejected under a similar rationale. Bhatnagar teaches that "To regress SMPL+D parameters (Option c), we implement a feed forward network that uses a similar encoder as IP-Net, but instead of predicting occupancy and part labels, produces SMPL+D parameters." (Bhatnagar: sec. 4.3 Comparison to Baselines para. 1 L.9-11). The feed forward network is implemented with a computer system which includes memory storage storing the program instructions executed with a processing unit to perform the functions of the network.

Regarding claims 17-20, the claims are non-transitory computer readable media claims corresponding to method claims 3-6 respectively. The claims are similar in scope to claims 3-6 respectively and they are rejected under a similar rationale.

Regarding claim 21, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein the object includes a second object segment (from Fig. 3, it can be seen that the body includes the head, arms and legs; the first object segment can be any portion of the body, for example the torso, and the second object segment can be the arms or legs), and wherein the method further comprises: processing, by the computing system, the latent code and the plurality of spatial query points with a second segment representation portion of the machine-learned implicit object representation model to obtain a second implicit segment representation for the second object segment (e.g., Bhatnagar: sec. 3.2, IP-Net: Overview paras. 1-4 and Fig. 3 [reproduced in the original action]), and wherein the implicit object representation of the object (e.g., "The key distinction between IP-Net and previous implicit approaches is that it classifies points as belonging to 3 different regions: 0-inside the body, 1-between body and clothing and 2-outside. This allows us to recover two surfaces (inner Sin and outer So), see Fig. 2 and 3." Bhatnagar: sec. 3.2 para. 5 L.1-4) is further determined based on the second implicit segment representation (e.g., the point classification into the 3 different regions for the arm or leg segments, which are different from the torso segment).

Claims 8 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Bhatnagar in view of Hilliges as applied to claims 1 and 12 above, and further in view of Atzmon et al. ("SAL: Sign Agnostic Learning of Shapes from Raw Data", Conference on Computer Vision and Pattern Recognition, Virtual Only, June 14-19, 2020, p. 2565-2574; IDS).

Regarding claim 8, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein the machine-learned implicit object representation model comprises one or more multi-layer perceptrons (e.g., "To regress SMPL+D parameters (Option c), we implement a feed forward network that uses a similar encoder as IP-Net, but instead of predicting occupancy and part labels, produces SMPL+D parameters." Bhatnagar: sec. 4.3 Comparison to Baselines para. 1 L.9-11. See 8_1 below). While the combined teaching of Bhatnagar and Hilliges does not explicitly teach, Atzmon teaches:

(8_1) the machine-learned implicit object representation model comprises one or more multi-layer perceptrons (e.g., Atzmon: sec. 1 Introduction para. 1 [reproduced in the original action]. Therefore, the feed forward network of Bhatnagar is implemented with a multi-layer perceptron, taking the benefit of using neural networks as implicit representations of surfaces due to their flexibility and approximation power as well as their efficient optimization and generalization properties).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Atzmon into the combined teaching of Bhatnagar and Hilliges due to the flexibility and approximation power of using a multi-layer perceptron neural network as an implicit representation of surfaces.

Regarding claim 13, the claim is a system claim corresponding to method claim 8. The claim is similar in scope to claim 8 and it is rejected under a similar rationale.

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Bhatnagar in view of Hilliges as applied to claim 1 above, and further in view of Li et al. (2020/0175757; IDS).

Regarding claim 22, the combined teaching of Bhatnagar and Hilliges teaches the computer-implemented method of claim 1, wherein the loss function includes a binary cross-entropy error (BCE) loss (e.g., Bhatnagar: sec. 4.3 para. 1 [reproduced in the original action]. See 22_1 below). While the combined teaching of Bhatnagar and Hilliges does not explicitly teach, Li teaches:

(22_1) wherein the loss function includes a binary cross-entropy error (BCE) loss (e.g., "An example loss function to train the network weights includes reconstruction errors for an occupancy field and a flow field, as well as Kullback-Leibler (KL) divergence loss. Binary Cross-Entropy (BCE) loss may be used for the reconstruction of occupancy fields. The standard BCE loss is [reproduced as an equation image in the original action; in its standard form, L_BCE = -(1/|ν|) Σ_{νi ∈ ν} (Oi log Ôi + (1 - Oi) log(1 - Ôi))] where ν denotes the uniformly sampled grids, |ν| may be the total number of grids, Oi ∈ {0,1} may be the ground-truth occupancy field value at a voxel νi, and Ôi may be the value predicted by the network and may be in the range of [0, 1]." Li: [0054] L.1-11. Therefore, the error in registration can similarly be given by a binary cross-entropy (BCE) loss function, and the various reconstruction methods are compared with the error given by the BCE loss).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Li into the combined teaching of Bhatnagar and Hilliges so that the reconstruction errors of the various methods are compared with a loss function of binary cross-entropy (BCE) loss.
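To make the quoted loss concrete, a small stand-alone numeric sketch of the standard binary cross-entropy, averaged over sampled grid points as Li describes, is given below; the five sample occupancy values are made up.

```python
import numpy as np

def bce_loss(o_true: np.ndarray, o_pred: np.ndarray, eps: float = 1e-7) -> float:
    """Standard binary cross-entropy averaged over the sampled grid points."""
    o_pred = np.clip(o_pred, eps, 1 - eps)   # avoid log(0)
    return float(-np.mean(o_true * np.log(o_pred) + (1 - o_true) * np.log(1 - o_pred)))

o_true = np.array([1, 0, 1, 1, 0], dtype=np.float64)    # ground-truth occupancy at 5 voxels
o_pred = np.array([0.9, 0.2, 0.7, 0.6, 0.1])            # network predictions in [0, 1]
print(round(bce_loss(o_true, o_pred), 4))                # ~0.2603
```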
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SING-WAI WU whose telephone number is (571)270-5850. The examiner can normally be reached 9:00am - 5:30pm (Central Time). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SING-WAI WU/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Sep 08, 2023
Application Filed
Aug 07, 2025
Non-Final Rejection — §103
Nov 03, 2025
Applicant Interview (Telephonic)
Nov 03, 2025
Examiner Interview Summary
Nov 10, 2025
Response Filed
Jan 30, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597174
METHOD AND APPARATUS FOR DELIVERING 5G AR/MR COGNITIVE EXPERIENCE TO 5G DEVICES
2y 5m to grant • Granted Apr 07, 2026
Patent 12591304
SYSTEMS AND METHODS FOR CONTEXTUALIZED INTERACTIONS WITH AN ENVIRONMENT
2y 5m to grant • Granted Mar 31, 2026
Patent 12586311
APPARATUS AND METHOD FOR RECONSTRUCTING 3D HUMAN OBJECT BASED ON MONOCULAR IMAGE WITH DEPTH IMAGE-BASED IMPLICIT FUNCTION LEARNING
2y 5m to grant • Granted Mar 24, 2026
Patent 12537877
MANAGING CONTENT PLACEMENT IN EXTENDED REALITY ENVIRONMENTS
2y 5m to grant • Granted Jan 27, 2026
Patent 12530797
PERSONALIZED SCENE IMAGE PROCESSING METHOD, APPARATUS AND STORAGE MEDIUM
2y 5m to grant • Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 8%
With Interview: 18% (+10.6%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 189 resolved cases by this examiner. Grant probability derived from career allow rate.
