Prosecution Insights
Last updated: April 19, 2026
Application No. 18/751,043

CONTROLLABLE 3D STYLE TRANSFER FOR RADIANCE FIELDS

Non-Final OA §DP
Filed
Jun 21, 2024
Examiner
HE, YINGCHUN
Art Unit
2613
Tech Center
2600 — Communications
Assignee
ETH ZÜRICH
OA Round
1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 82% (above average); 529 granted / 644 resolved; +20.1% vs TC avg
Interview Lift: +14.4% for resolved cases with interview (moderate)
Avg Prosecution: 2y 5m typical; 27 currently pending
Total Applications: 671 across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 5.4% (-34.6% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Percentages compared against the Tech Center average estimate • Based on career data from 644 resolved cases

Office Action

§DP (nonstatutory double patenting)
DETAILED ACTION

Note in the following document:
1. Texts in italic bold format are limitations quoted either directly or conceptually from claims/descriptions disclosed in the instant application.
2. Texts in regular italic format are quoted directly from a cited reference or Applicant's arguments.
3. Texts with underlining are added by the Examiner for emphasis.
4. Texts with
5. Acronym "PHOSITA" stands for "Person Having Ordinary Skill In The Art".

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the reference application or patent either is shown to be commonly owned with this application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet Web site contains terminal disclaimer forms which may be used; please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-9 and 11-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of copending Application No. 18/751,038. Claim 10 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 7 of copending Application No. 18/751,038 in view of Yin et al. (US 2023/0074420 A1) and Zhang et al. (ARF: Artistic Radiance Fields, Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXI, pages 717-733, https://doi.org/10.1007/978-3-031-19821-2_41). Although the claims at issue are not identical, they are not patentably distinct from each other because each claim of the instant application is either anticipated by, or an obvious variation of, a claim of the copending application, as shown in the table below.

Instant Application, claim 1:
A computer-implemented method for performing artistic style transfer, the method comprising: converting a first style sample into a first set of features; determining one or more two-dimensional (2D) style masks associated with the style sample [1]; determining a set of content samples corresponding to a plurality of views of a 3D scene; for each content sample included in the set of content samples: converting the content sample into an additional set of features; determining one or more two-dimensional content masks associated with the content sample [1]; and determining a set of matches between (i) one or more subsets of the additional set of features corresponding to the one or more 2D content masks [2] and (ii) one or more subsets of the first set of features corresponding to the one or more 2D style masks [2]; and generating a style transfer result that includes a representation of the 3D scene based on one or more losses associated with the sets of matches determined for the set of content samples, wherein the style transfer result comprises one or more structural elements of the 3D scene and one or more stylistic elements of the first style sample at one or more locations corresponding to the one or more 2D content masks.

Co-pending Application 18/751,038, claim 1:
A computer-implemented method for performing style transfer, the method comprising: converting a style sample into a first set of semantic features and a first set of visual features; determining a set of content samples corresponding to a plurality of views of a three-dimensional (3D) scene; for each content sample included in the set of content samples: converting the content sample into an additional set of semantic features and an additional set of visual features; and determining a set of matches between (i) the additional set of semantic features and the additional set of visual features and (ii) the first set of semantic features and the first set of visual features; and generating a style transfer result that includes a representation of the 3D scene based on one or more losses associated with the sets of matches determined for the set of content samples, wherein the style transfer result comprises one or more structural elements of the 3D scene and one or more stylistic elements of the style sample.

Co-pending claim 7: The computer-implemented method of claim 1, wherein determining the set of matches comprises: determining a set of two-dimensional (2D) masks associated with the set of content samples and an additional 2D mask associated with the style sample [1]; and determining the sets of matches between (i) a subset of the additional set of semantic features and the additional set of visual features associated with the set of 2D masks and (ii) a subset of the first set of semantic features and the first set of visual features associated with the 2D mask [2].

Instant claim 2 (co-pending claim 8): The computer-implemented method of claim 1, wherein each of the one or more 2D content masks is associated with a set of pixels included in each content sample and a label.

Instant claim 3 (co-pending claim 8): The computer-implemented method of claim 2, wherein the label is associated with an artistic style to be transferred from the style sample to one or more pixels included in the content sample.

Instant claim 4:
The computer-implemented method of claim 1, wherein the representation of the 3D scene comprises a radiance field function. (Co-pending claim 10.)

Instant claim 5 (co-pending claim 6): The computer-implemented method of claim 4, wherein generating the style transfer result comprises iteratively modifying one or more parameters included in the radiance field function based on the one or more losses.

Instant claim 6 (co-pending claims 2, 3): The computer-implemented method of claim 1, wherein the set of matches is determined based on a distance that is computed using (i) a first set of semantic features included in the one or more subsets of the additional set of features, (ii) a first set of visual features included in the one or more subsets of the additional set of features, (iii) a second set of semantic features included in the first set of features, and (iv) a second set of visual features included in the first set of features.

Instant claim 7 (co-pending claim 2): The computer-implemented method of claim 1, wherein the one or more 2D content masks are determined based on one or more of visual features included in the additional set of features, semantic features included in the additional set of features, or user annotations associated with the content sample.

Instant claim 8 (co-pending claim 7): The computer-implemented method of claim 1, wherein determining the set of matches comprises: matching a first 2D content mask included in the one or more 2D content masks to the first style sample; and determining a first subset of the set of matches between a first subset of the additional set of features corresponding to the first 2D content mask and the first set of features.

Instant claim 9 (co-pending claims 7, 8): The computer-implemented method of claim 8, wherein determining the set of matches further comprises: matching a second 2D content mask included in the one or more 2D content masks to a second style sample; and determining a second subset of the set of matches between a second subset of the additional set of features corresponding to the second 2D content mask and a second set of features associated with the second style sample.

Instant claim 10:
The computer-implemented method of claim 1, wherein the one or more losses comprise at least one of an L2 loss or a cosine distance.

Correspondence for the remaining claims (instant claim → co-pending claim(s)): 11 → 11, 15; 12 → 10; 13 → 14; 14 → 12, 13; 15 → 12; 16 → 15; 17 → 15, 16; 18 → 18, 19; 19 → 7; 20 → 16, 17.

This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Regarding claim 10, copending Application No. 18/751,038 fails to disclose wherein the one or more losses comprise at least one of an L2 loss or a cosine distance. However, Zhang discloses computing a cosine distance when determining losses (p. 722, Section 4.1: Style Transfer Losses). Zhang and the copending application are in the same field of artistic style transfer using NeRF. Therefore it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Zhang and implement the calculation of the NNFM loss as taught by Zhang.

Allowable Subject Matter

Claims 1-20 would be allowable if the double patenting rejection above is overcome by filing a terminal disclaimer that is approved.
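The NNFM (nearest-neighbor feature matching) loss that the Examiner cites from Zhang pairs each content feature with its nearest style feature under cosine distance and averages those distances. The sketch below is a minimal illustration of that idea, not Zhang's actual implementation (which operates on VGG feature maps extracted from rendered views); all names here are invented for illustration.

```python
import math

def cosine_distance(u, v):
    # 1 minus the cosine similarity of two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def nnfm_loss(content_features, style_features):
    # For each content feature, take the cosine distance to its nearest
    # neighbor among the style features, then average over all content features.
    total = 0.0
    for f in content_features:
        total += min(cosine_distance(f, g) for g in style_features)
    return total / len(content_features)
```

A content feature identical (up to scale) to some style feature contributes zero to this loss, while one orthogonal to every style feature contributes 1, so minimizing it pulls rendered-view features toward the style exemplar's feature distribution.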
The following is a statement of reasons for the indication of allowable subject matter. The prior art, either individually or in combination, fails to disclose or render obvious the limitations of converting a first style sample into a first set of features; determining one or more two-dimensional (2D) style masks associated with the style sample; determining a set of content samples corresponding to a plurality of views of a 3D scene; for each content sample included in the set of content samples: converting the content sample into an additional set of features; determining one or more two-dimensional content masks associated with the content sample; and determining a set of matches between (i) one or more subsets of the additional set of features corresponding to the one or more 2D content masks and (ii) one or more subsets of the first set of features corresponding to the one or more 2D style masks; and generating a style transfer result that includes a representation of the 3D scene based on one or more losses associated with the sets of matches determined for the set of content samples, wherein the style transfer result comprises one or more structural elements of the 3D scene and one or more stylistic elements of the first style sample at one or more locations corresponding to the one or more 2D content masks, as claimed in independent claims 1/11/18.

The closest prior art, Zhang et al. (ARF: Artistic Radiance Fields, Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXI, pages 717-733, https://doi.org/10.1007/978-3-031-19821-2_41), discloses Artistic Radiance Fields (ARF), a new approach to transferring artistic features from a single 2D image to a full, real-world 3D scene by utilizing a nearest neighbor-based loss that is highly effective at capturing style details while maintaining multi-view consistency.
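The mask-restricted matching that distinguishes the independent claims can be pictured with a toy sketch: features whose indices fall outside the 2D content mask or the 2D style mask simply never participate in the nearest-neighbor search. This is a hypothetical simplification with invented names, flat index sets standing in for per-pixel 2D masks, and raw vectors standing in for deep features:

```python
import math

def cosine_distance(u, v):
    # 1 minus the cosine similarity of two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return 1.0 - dot / (math.sqrt(sum(a * a for a in u)) *
                        math.sqrt(sum(b * b for b in v)))

def masked_matches(content_features, content_mask, style_features, style_mask):
    # Only features selected by the masks are considered: each masked
    # content feature is paired with its nearest masked style feature.
    matches = []
    for ci in sorted(content_mask):
        best = min(sorted(style_mask),
                   key=lambda sj: cosine_distance(content_features[ci],
                                                  style_features[sj]))
        matches.append((ci, best))
    return matches
```

A loss computed over only these matches (e.g., averaging the matched cosine distances) would transfer style solely into the masked region, which is the controllability the claims recite.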
However, Zhang fails to disclose applying one or more 2D content masks and one or more 2D style masks. Zhi et al. ("In-Place Scene Labelling and Understanding with Implicit Scene Representation", Aug. 21, 2021) discloses extending neural radiance fields (NeRF) to jointly encode semantics with appearance and geometry. However, no prior art discloses generating 2D masks and using the masks for matching during style transfer.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Yin et al. (US 2023/0074420 A1) discloses that one or more style transfer networks may be used for part-aware style transformation of both geometric features and textural components of a source asset to a target asset; the source asset may be segmented into particular parts and then ellipsoid approximations may be warped according to correspondence of the particular parts to the target assets; moreover, a texture associated with the target asset may be used to warp or adjust a source texture, where the new texture can be applied to the warped parts (Abstract). Hao et al. (US 2022/0180602 A1) discloses apparatuses, systems, and techniques presented to generate images; in at least one embodiment, one or more neural networks are used to generate one or more images based, at least in part, upon one or more semantic features projected from a three-dimensional environment (Abstract).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YINGCHUN HE, whose telephone number is (571) 270-7218. The examiner can normally be reached M-F 8:00-5:00 MT. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao M Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/YINGCHUN HE/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Jun 21, 2024: Application Filed
Mar 05, 2026: Non-Final Rejection, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602886: LOW LATENCY HAND-TRACKING IN AUGMENTED REALITY SYSTEMS
Granted Apr 14, 2026; 2y 5m to grant

Patent 12588711: METHOD AND APPARATUS FOR OUTPUTTING IMAGE FOR VIRTUAL REALITY OR AUGMENTED REALITY
Granted Mar 31, 2026; 2y 5m to grant

Patent 12586247: IMAGE DISTORTION CALIBRATION DEVICE, DISPLAY DEVICE AND DISTORTION CALIBRATION METHOD
Granted Mar 24, 2026; 2y 5m to grant

Patent 12586491: Display Device and Method for Driving the Same
Granted Mar 24, 2026; 2y 5m to grant

Patent 12579949: IMAGE PROCESSING APPARATUS
Granted Mar 17, 2026; 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 96% (+14.4%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 644 resolved cases by this examiner. Grant probability derived from career allow rate.
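The "With Interview" figure appears to be the base grant probability plus the reported interview lift, rounded; a guess at the arithmetic, assuming the lift combines additively:

```python
base_rate = 0.82        # examiner's career allow rate
interview_lift = 0.144  # reported lift for resolved cases with interview
with_interview = base_rate + interview_lift
print(round(with_interview * 100))  # prints 96
```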
