Prosecution Insights
Last updated: April 19, 2026
Application No. 18/809,719

APPARATUS AND METHOD FOR TRANSFERRING STYLE OF BUILDING MODEL TEXTURE

Status: Non-Final OA (§103)
Filed: Aug 20, 2024
Examiner: CHEN, BIAO
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (27 granted / 32 resolved; +22.4% vs TC avg — above average)
Interview Lift: +26.3% across resolved cases with interview (strong)
Typical Timeline: 2y 5m average prosecution; 25 applications currently pending
Career History: 57 total applications across all art units
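
For readers who want to check the arithmetic, the sketch below reproduces the allow-rate figures from the resolved-case counts. This is a minimal illustration; the 99% with-interview figure is the tool's reported value and is not derived here.

```python
# Minimal sketch of the arithmetic behind the headline figures above.
# Assumption: the displayed allow rate is simply granted / resolved.
granted, resolved = 27, 32
allow_rate = granted / resolved      # 0.84375 -> displayed as "84%"

# "+22.4% vs TC avg" implies a Tech Center baseline of roughly 62%.
implied_tc_avg = allow_rate - 0.224
print(f"allow rate {allow_rate:.1%}, implied TC average {implied_tc_avg:.1%}")
```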

Statute-Specific Performance

§101: 4.7% of rejections (-35.3% vs TC avg)
§103: 69.1% of rejections (+29.1% vs TC avg)
§102: 9.8% of rejections (-30.2% vs TC avg)
§112: 15.7% of rejections (-24.3% vs TC avg)

TC averages are estimates. Based on career data from 32 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The priority document for foreign priority KR10-2024-0071541 is available. However, the priority document for foreign priority KR10-2023-0167197 is not available. An attempt by the Office to electronically retrieve, under the priority document exchange program, the foreign application 10-2023-0167197 to which priority is claimed FAILED on 04/27/2025. Useful information is provided at the Electronic Priority Document Exchange (PDX) Program Website (https://www.uspto.gov/patents/basics/international-protection/electronic-priority-document-exchange-pdx), including practice tips for priority document exchange (https://www.uspto.gov/patents/basics/international-protection/electronic-priority-document-exchange-pdx#Practice2620tips).

Drawings

The drawings are objected to because: (1) there are no labels 10 and 20 in FIG. 4, although a polygon layout image 10 and a facade image 20 are mentioned in paras. [0094]-[0097] of the specification of this application; and (2) the numbers or markers are not clear or overlap each other. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as "amended." If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over Niu et al. (Style Transfer of Image Target Region Based on Deep Learning, 2021 6th International Symposium on Computer and Information Processing Technology (ISCIPT), pp. 486-489, hereinafter "Niu") in view of Zhang et al. (Size-Adaptive Texture Atlas Generation and Remapping for 3D Urban Building Models, ISPRS Int. J. Geo-Inf. 2021, 10, 798, https://doi.org/10.3390/ijgi10120798, MDPI, hereinafter "Zhang"), Liu et al. (DeepFacade: A Deep Learning Approach to Facade Parsing, Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), pp. 2301-2307, hereinafter "Liu"), and Baran (How to create 3D Scan Based Atlas Textures for surface scattering, YouTube, https://youtu.be/5rB7pqPXxSE?si=TDmUbVQ6ZpffjJIH, hereinafter "Baran").

Regarding claim 1, Niu discloses: An apparatus for transferring a building model texture style, comprising: one or more processors; and memory for storing at least one program executed by the one or more processors, wherein the at least one program (page 488, col. left, para. 8, "Hardware environment: The processor is Intel(R) Core(TM) i5-4210H CPU @2.90GHz, 2 cores, 4 logical processors. Software environment: Collaboratory's cloud GPU, Tesla P100 with 16g video memory, Tensorflow learning framework."). Note that the hardware and software form the computing apparatus, in which the memory stores software to be executed on the CPU and GPU.

Niu further discloses: performs style transfer of the building model image by applying a predefined user-style image to areas corresponding to the mask image and the floor grid area, and ... the building model image, a style of which is transferred ... (Figure 1: "Content image", "Style image", and "Stylized image" for a building style transfer; Figure 2: "target area style transfer network model", "Conditional Transfer Network", with the "Content image" as the building model image, the "Style image" as a predefined user-style image, and the "Mask matrix" as the mask image and the floor grid area; page 486, col. left, Abstract, "this paper proposes a deep learning-based style transfer framework for image target regions. Firstly, the content map is the main input, and the mask map generated by the image mask technology is used as the specific condition input. Then, the transfer network style transfer area of the specific condition input is only the target area, and the non-target area is not in the style transfer area. Finally, the transfer network style transfer input via specific conditions realizes the image style transfer of the target area."). Note that: (1) the deep-learning-based computing system or apparatus takes a content image (the image to be style-transferred), a mask matrix (a mask image and target area, i.e., the floor grid area), and a style image as input, and outputs a style-transferred image in which only the target area is transferred; and (2) after the style transfer has been done, the building model image produced as output can be regarded as the building model image, a style of which is transferred or updated.
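
For orientation, the target-region transfer Niu is cited for reduces to compositing a stylized image back into the content image under a binary mask. The sketch below is our own minimal NumPy illustration of that compositing step with dummy data, not code from Niu.

```python
import numpy as np

def masked_style_transfer(content: np.ndarray, stylized: np.ndarray,
                          mask: np.ndarray) -> np.ndarray:
    """Composite a stylized image into the target region only: pixels where
    mask == 1 take the stylized result; all other pixels keep the content."""
    mask3 = mask[..., None].astype(np.float32)          # broadcast over RGB
    out = mask3 * stylized + (1.0 - mask3) * content
    return out.astype(content.dtype)

# Dummy data standing in for the content image, a fully stylized version,
# and a facade mask (the hypothetical target area).
content = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
stylized = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
mask = np.zeros((256, 256), dtype=np.uint8)
mask[64:192, 64:192] = 1
result = masked_style_transfer(content, stylized, mask)
```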
However, Niu fails to disclose, but in the same art of computer graphics, Zhang discloses: converts 3D building model geometry data and 3D building model texture information into a building model image after receiving the 3D building model geometry data and the 3D building model texture information, ... 3D building model ... (Zhang, page 13, para. 2, "it is worth generating 3D building models with adaptive texture atlases for improving the rendering efficiency"; page 1, Abstract, "The traditional texture atlas methods compress all the textures of a model into one atlas, which needs extra blank space, and the size of the atlas is uncontrollable. This paper introduces a size-adaptive texture atlas method that can pack all the textures of a model without losing accuracy and increasing extra storage space."; page 9, para. 2, "In order to efficiently load and render the massive high-precision building models, the files were processed and organized appropriately. The processing workflow is shown in Figure 8, the specific steps were as follows: (1) Extract the urban scene from the Max file; (2) Split the scene into many individual building models and convert them into OBJ format; (3) Pack the multi-textures into the texture atlases and remap them onto the geometric mesh for each building model; (4) Convert the models into GLB format and organize them by the 3DTiles format; (5) Load and render the urban scene in Cesium."; Figure 9: "(b) texture atlas"; Figure 10: "(a) original mesh and (b) clipped mesh"; Figure 12: "Textured model rendering in Cesium"). Note that: (1) the 3D building model in the corresponding format, as 3D building model geometry data, is received or loaded; (2) the multi-textures are packed into the texture atlas as 3D building model texture information; and (3) the 3D model and the corresponding texture atlas are rendered or converted into the urban scene building image (a building model image) in Figure 12. When rendering the building front face from a viewpoint at the front of the building, a façade image can be rendered that reflects the patterns of windows and walls.

Niu and Zhang are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply rendering a building model image as a façade image, as taught by Zhang, to Niu. The motivation would have been that "our method can significantly improve building model rendering efficiency for large-scale 3D city visualization" (Zhang, page 1, Abstract). Doing so would improve building model rendering efficiency. Therefore, it would have been obvious to combine Niu with Zhang.
However, Niu in view of Zhang fails to disclose, but in the same art of computer graphics, Liu discloses: performs preprocessing for setting an area to which style transfer is to be applied by generating a mask image through segmentation into a window and a wall of the building model image and generating a floor grid area based on the segmentation into the window and the wall (Liu, page 2301, Abstract, "we propose a deep learning based method for segmenting a facade into semantic categories ... We also propose a method to refine the segmentation results using bounding boxes generated by the Region Proposal Network"; page 2305, col. right, "The ECP dataset consists of 104 images of building facades. The dataset contains the following classes: {window, wall, balcony, door, shop, sky, chimney, roof}. All the images in the ECP dataset contains rectified and cropped facades of Haussmannian style buildings in Paris"; Figure 5: "Qualitative examples on the ECP and eTRIMS dataset", "(b) bounding boxes" indicating windows, "(e) RPN refinement" (windows in red, walls in yellow, doors in orange, etc.), and "(k) Our Result" (windows in blue and walls in red)). Note that: (1) using a deep-learning method, windows and walls are segmented out from a building façade image (the building model image); and (2) it is obvious to one having ordinary skill in the art that: (a) the segmentation or parsing results of windows and walls form a mask image; and (b) the combination of the rows of windows in red for different floors and the corresponding walls in yellow in Figure 5's (h) can be regarded as a floor grid area based on the segmentation into the window and the wall and their relative positions.

Niu in view of Zhang, and Liu, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply detecting or segmenting windows, walls, and other categorized objects from a façade image, as taught by Liu, to Niu in view of Zhang. The motivation would have been that "we propose a deep learning based method for segmenting a facade into semantic categories" (Liu, page 2301, Abstract). Doing so would allow segmenting a façade into semantic categories including windows and walls. Therefore, it would have been obvious to combine Niu, Zhang, and Liu.
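
To picture the claimed mask image concretely: a per-pixel façade segmentation of the kind Liu produces can be collapsed into a binary window-and-wall mask. The sketch below is our illustration with hypothetical class ids, not DeepFacade's code.

```python
import numpy as np

# Hypothetical class ids for a facade parser in the spirit of DeepFacade;
# real ids depend on the dataset (e.g. ECP) and the trained model.
WINDOW, WALL = 1, 2

def window_wall_mask(labels: np.ndarray) -> np.ndarray:
    """Collapse a per-pixel segmentation map into a binary mask image
    marking window and wall pixels (the claimed 'mask image')."""
    return np.isin(labels, (WINDOW, WALL)).astype(np.uint8)

labels = np.random.randint(0, 5, (256, 256))   # dummy segmentation map
mask = window_wall_mask(labels)
```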
However, the combination of Niu, Zhang, and Liu fails to disclose, but in the same art of computer graphics, Baran discloses: converts the building model image, a style of which is transferred, into 3D building model texture information (Baran, page 1, lines 7-8, "In the specific video during 0:40-1:42, Baran shows how to create atlas textures using a taken image."; page 1 / line 13 – page 2 / line 2, "In details I show how I captured 2 earth clumps, turned them into a high quality atlas texture set which I used to generate totally new PBR environment material of plouhged soil"). Note that: (1) an image taken with a camera can be manipulated, processed, or converted into atlas textures or an atlas image as 3D object model texture information; and (2) the camera-taken image can be substituted by the style-transferred building model image above, while the 3D apples in the cited video segment can be substituted by the 3D building model above, to convert the building model image, a style of which is transferred, into 3D building model texture information.

The combination of Niu, Zhang, and Liu, and Baran, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply creating or converting an object image into an atlas image for the 3D object, as taught by Baran, to the combination of Niu, Zhang, and Liu. The motivation would have been that Baran "turned them into a high quality atlas texture set" (Baran, page 1 / line 13 – page 2 / line 2). Doing so would allow converting a style-transferred building model image into an atlas image carrying 3D building model texture information. Therefore, it would have been obvious to combine Niu, Zhang, Liu, and Baran.
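
The Baran step, turning a 2D image into texture information for a 3D model, can be pictured as pasting the image into an atlas and recording where it landed. The sketch below uses Pillow; the single-image packing and the UV convention are our simplifications, not the method of the cited video.

```python
from PIL import Image

def pack_into_atlas(facade: Image.Image, atlas_size: int = 1024):
    """Paste a (style-transferred) facade image into a texture atlas and
    return the atlas plus the normalized UV rectangle the 3D model would
    reference for that facade."""
    atlas = Image.new("RGB", (atlas_size, atlas_size))
    atlas.paste(facade, (0, 0))
    w, h = facade.size
    uv_rect = (0.0, 0.0, w / atlas_size, h / atlas_size)
    return atlas, uv_rect

facade = Image.new("RGB", (512, 256), "gray")   # stand-in stylized facade
atlas, uv = pack_into_atlas(facade)
print(uv)                                        # (0.0, 0.0, 0.5, 0.25)
```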
Regarding claim 2, the combination of Niu, Zhang, Liu, and Baran discloses: The apparatus of claim 1, wherein ... which style transfer is to be performed in the building model image (Niu, Figures 1-2 and page 486, col. left, Abstract, as quoted for claim 1 above). Note that style transfer is performed in the target area of the building model image according to the "Mask matrix". The combination further discloses: ... the mask image corresponds to an image representing areas of the window and the wall on ... (Liu, page 2301, Abstract; page 2305, col. right; and Figure 5, as quoted above). Note that the mask image corresponds to a color-coded image representing areas of the windows and the walls.

Regarding claim 3, the combination of Niu, Zhang, Liu, and Baran discloses: The apparatus of claim 2, wherein the at least one program generates the floor grid area based on a position relationship of the window and the wall depending on a result of the segmentation into the window and the wall in order to represent an area between respective floors of a building (Liu, page 2305, col. right, and Figure 5, as quoted above). Note that: (1) using a deep-learning method, windows and walls are segmented out from a building façade image (the building model image); and (2) it is obvious to one having ordinary skill in the art that the combination of the rows of windows in red for different floors and the corresponding walls in yellow in Figure 5's (h) can be regarded as a floor grid area based on the segmentation into the windows and the walls and their relative positions.

Regarding claim 4, the combination of Niu, Zhang, Liu, and Baran discloses: The apparatus of claim 3, wherein the at least one program generates the floor grid area based on minimum and maximum coordinate values forming a bounding box of a segmented window and the position relationship (Liu, page 2301, Abstract; page 2305, col. right; and Figure 5, as quoted above). Note that: (1) using a deep-learning method, windows and walls are segmented out from a building façade image (the building model image), and window segmentation results can be refined using bounding boxes; (2) a segmented window's bounding box indicates minimum and maximum coordinate values for that bounding box in the building model image; and (3) it is obvious to one having ordinary skill in the art that the combination of the rows of windows in red, based on their respective bounding boxes' minimum and maximum coordinate values for different floors, and the corresponding walls in yellow in Figure 5's (h) can be regarded as a floor grid area generated by the corresponding program based on the segmentation into the window and the wall and their relative positions in the building.
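
One hypothetical reading of claims 3-4 is to cluster segmented window bounding boxes into rows and take each floor band from the rows' minimum and maximum y coordinates. The sketch below is our illustration of that reading, not Liu's method or the applicant's algorithm.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]   # (x_min, y_min, x_max, y_max)

def floor_bands(window_boxes: List[Box], row_tol: int = 10) -> List[Tuple[int, int]]:
    """Group window boxes into rows by roughly aligned top edges, then report
    each row's vertical extent as a candidate floor band."""
    rows: List[List[Box]] = []
    for box in sorted(window_boxes, key=lambda b: b[1]):
        for row in rows:
            if abs(row[0][1] - box[1]) <= row_tol:   # tops align -> same floor
                row.append(box)
                break
        else:
            rows.append([box])
    return [(min(b[1] for b in row), max(b[3] for b in row)) for row in rows]

boxes = [(10, 40, 30, 60), (50, 42, 70, 61), (10, 120, 30, 140)]
print(floor_bands(boxes))   # -> [(40, 61), (120, 140)]
```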
Regarding claim 5, the combination of Niu, Zhang, Liu, and Baran discloses: The apparatus of claim 1, wherein the at least one program transfers styles of the window and the wall according to constraints of the window and the wall using a prestored style transfer deep-learning network (Niu, Figures 1-2 and page 486, col. left, Abstract, as quoted for claim 1 above). Note that: (1) the deep-learning-based computing system takes a content image (the image to be style-transferred), a mask image and area (target area), and a style image as input, and outputs a style-transferred image in which only the target area is transferred; (2) the trained deep-learning neural network can be regarded as a prestored style transfer deep-learning network; and (3) the windows and walls within the constraints of the mask image and the floor grid area (target area; "Mask matrix") are style-transferred.

Claim 8, reciting "A method for transferring a building model texture style, performed by an apparatus for transferring a building model texture style, comprising:", corresponds to the apparatus of claim 1. Therefore, claim 8 is rejected under the same rationale as claim 1. In addition, the combination of Niu, Zhang, Liu, and Baran discloses: A method for transferring a building model texture style, performed by an apparatus for transferring a building model texture style, comprising: (Niu, "This article uses image segmentation method to achieve the segmentation of target area and non-target area."; page 488, col. left, para. 8, and Figures 1-2 and page 486, col. left, Abstract, as quoted for claim 1 above). Note that: (1) the hardware and software form the computing apparatus, in which the memory stores software to be executed on the CPU and GPU; (2) the deep-learning-based computing system takes a content image, a mask image and area (target area), and a style image as input, and outputs a style-transferred image in which only the target area is transferred; and (3) after the style transfer has been done, the output can be regarded as the building model image, a style of which is transferred or updated.

Claims 9-12 correspond to the apparatus of claims 2-5, respectively. Therefore, claims 9-12 are rejected under the same rationale as claims 2-5, respectively.

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Niu, Zhang, Liu, Baran, and Archive_1 (ResNet, AlexNet, VGGNet, Inception: Understanding various architectures of Convolutional Networks, archive.org, https://web.archive.org/web/20230530075252/https://cv-tricks.com/cnn/understand-resnet-alexnet-vgg-inception/, hereinafter "Archive_1").

Regarding claim 6, the combination of Niu, Zhang, Liu, and Baran discloses: The apparatus of claim 5, wherein the at least one program performs the style transfer using a style transfer deep-learning network configured with (Niu, Figure 2: the style transfer uses a style transfer deep-learning network configured with "VGGNet"). However, the combination of Niu, Zhang, Liu, and Baran fails to disclose, but in the same art of neural network applications, Archive_1 discloses: ResNet (Archive_1, page 1, para. 1, "AlexNet, VGG, Inception, ResNet are some of the popular networks."; page 5, para. 3, "Through the changes mentioned, ResNets were learned with network depth of as large as 152. It achieves better accuracy than VGGNet and GoogleNet while being computationally more efficient than VGGNet. ResNet-152 achieves 95.51 top-5 accuracies."). Note that: (1) compared to VGGNet, ResNet is also a popular module and outperforms VGGNet in both accuracy and computational efficiency; and (2) VGGNet can therefore be substituted by ResNet for better accuracy.
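
The suggested VGG-to-ResNet substitution can be made concrete with torchvision. In this sketch the cut points (relu4_1 for VGG-19, layer2 for ResNet-50) are our illustrative choices, not taken from Archive_1; both trunks happen to emit 512-channel feature maps at 1/8 input resolution.

```python
import torch
from torchvision import models

# VGG-19 encoder up to relu4_1 (a common style-transfer cut point).
vgg_encoder = models.vgg19(weights=None).features[:21]

# An analogous ResNet-50 trunk through layer2.
resnet = models.resnet50(weights=None)
resnet_encoder = torch.nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
    resnet.layer1, resnet.layer2,
)

x = torch.randn(1, 3, 256, 256)           # dummy facade image batch
print(vgg_encoder(x).shape)               # torch.Size([1, 512, 32, 32])
print(resnet_encoder(x).shape)            # torch.Size([1, 512, 32, 32])
```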
The combination of Niu, Zhang, Liu, and Baran, and Archive_1, are in the same field of endeavor, namely neural network applications. Before the effective filing date of the claimed invention, it would have been obvious to apply the teachings on ResNet, VGG, and other modules, as taught by Archive_1, to the combination of Niu, Zhang, Liu, and Baran. The motivation would have been that "Through the changes mentioned, ResNets were learned with network depth of as large as 152. It achieves better accuracy than VGGNet and GoogleNet while being computationally more efficient than VGGNet" (Archive_1, page 5, para. 3). Doing so would allow substituting ResNet for VGG in a style transfer neural network. Therefore, it would have been obvious to combine Niu, Zhang, Liu, Baran, and Archive_1.

Claim 13 corresponds to the apparatus of claim 6. Therefore, claim 13 is rejected under the same rationale as claim 6.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Niu, Zhang, Liu, Baran, and Archive_1, and Huang et al. (Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization, arXiv.org, arXiv:1703.06868v2 [cs.CV] 30 Jul 2017, hereinafter "Huang").

Regarding claim 7, the combination of Niu, Zhang, Liu, Baran, and Archive_1 discloses: The apparatus of claim 6, wherein the at least one program performs (Niu, Figures 1-2 and page 486, col. left, Abstract, as quoted for claim 1 above, with the "Style image" as a predefined user-style image taken by "VGGNet" as an input). Note that: (1) a style desired by the user, as a constraint to the style transfer deep-learning network, can be mapped to the "Style image"; and (2) the "Style image" is also an input to the ResNet module or block that substitutes the original "VGGNet" in Figure 2.

However, the combination of Niu, Zhang, Liu, Baran, and Archive_1 fails to disclose, but in the same art of neural network applications, Huang discloses: ... AdaIN normalization on a style image provided by the user and performs a concatenate operation ... (Huang, page 1, Abstract, "In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features"; page 4, col. left, para. 2, "If IN normalizes the input to a single style specified by the affine parameters, is it possible to adapt it to arbitrarily given styles by using adaptive affine transformations? Here, we propose a simple extension to IN, which we call adaptive instance normalization (AdaIN). AdaIN receives a content input x and a style input y, and simply aligns the channelwise mean and variance of x to match those of y. Unlike BN, IN or CIN, AdaIN has no learnable affine parameters. Instead, it adaptively computes the affine parameters from the style input"). Note that: (1) AdaIN normalization can be applied to a style input y to align or normalize the mean and variance of the content features with those of the style features; and (2) AdaIN normalization can be placed between the input style image and the ResNet block, i.e., performing a concatenate operation on the ResNet block.
The combination of Niu, Zhang, Liu, Baran, and Archive_1, and Huang, are in the same field of endeavor, namely neural network applications. Before the effective filing date of the claimed invention, it would have been obvious to apply AdaIN to normalizing a style input image, as taught by Huang, to the combination of Niu, Zhang, Liu, Baran, and Archive_1. The motivation would have been "a simple yet effective approach that for the first time enables arbitrary style transfer in real-time" (Huang, page 1, Abstract). Doing so would allow using AdaIN to align and normalize the mean and variance of the content features with those of the style features. Therefore, it would have been obvious to combine Niu, Zhang, Liu, Baran, Archive_1, and Huang.

Claim 14 corresponds to the apparatus of claim 7. Therefore, claim 14 is rejected under the same rationale as claim 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIAO CHEN, whose telephone number is (703) 756-1199. The examiner can normally be reached M-F 8am-5pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee M Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Biao Chen/
Patent Examiner, Art Unit 2611

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Aug 20, 2024 — Application Filed
Mar 21, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602873 — AUTOMATIC RETOPOLOGIZATION OF TEXTURED 3D MESHES
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597149 — APPARATUS, METHOD, AND COMPUTER PROGRAM FOR NETWORK COMMUNICATIONS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12562138 — METHOD AND SYSTEM FOR COMPENSATING ANTI-DIZZINESS PREDICTED IN ADVANCE
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12561897 — COMPRESSED REPRESENTATIONS FOR APPEARANCE OF FIBER-BASED DIGITAL ASSETS
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12548129 — APPARATUSES, METHODS AND COMPUTER PROGRAMMES FOR USE IN MODELLING IMAGES CAPTURED BY ANAMORPHIC LENSES
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
Grant Probability with Interview: 99% (+26.3%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 32 resolved cases by this examiner. Grant probability derived from career allow rate.
