Prosecution Insights
Last updated: April 19, 2026
Application No. 18/228,127

GENERATING GEOMETRY AND TEXTURE FOR VOLUMETRIC VIDEO FROM 2D IMAGES WITH A LIMITED VIEWPOINT

Final Rejection §103
Filed
Jul 31, 2023
Examiner
WEI, XIAOMING
Art Unit
2611
Tech Center
2600 — Communications
Assignee
Take-Two Interactive Software Inc.
OA Round
2 (Final)
82%
Grant Probability (Favorable)
3-4
OA Rounds
2y 5m
To Grant
99%
With Interview

Examiner Intelligence

Grants 82% — above average
82%
Career Allow Rate
28 granted / 34 resolved
+20.4% vs TC avg
Strong +26% interview lift
+26.1%
Interview Lift (allow rate with vs. without an interview, among resolved cases with an interview)
Typical timeline
2y 5m
Avg Prosecution
24 currently pending
Career history
58
Total Applications
across all art units
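As a sanity check, the headline tiles can be reproduced from the raw counts shown above. A minimal Python sketch, assuming the dashboard simply divides grants by resolved cases and reports the delta against an estimated Tech Center average (the tool's exact formulas are not published, so this is a plausible reconstruction, not the vendor's code):

```python
# Reconstruction of the examiner-intelligence tiles (assumed formulas).
granted, resolved = 28, 34      # from "28 granted / 34 resolved"
tc_avg_delta = 0.204            # from "+20.4% vs TC avg"

allow_rate = granted / resolved             # 0.8235... -> displayed as 82%
implied_tc_avg = allow_rate - tc_avg_delta  # ~0.62 implied TC average

print(f"career allow rate: {allow_rate:.1%}")       # 82.4%
print(f"implied TC average: {implied_tc_avg:.1%}")  # ~62.0%
```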

Statute-Specific Performance

§101
7.1%
-32.9% vs TC avg
§103
83.6%
+43.6% vs TC avg
§102
4.4%
-35.6% vs TC avg
§112
2.2%
-37.8% vs TC avg
Deltas are measured against Tech Center average estimates • Based on career data from 34 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Office action is in response to Applicant's amendment filed 10/21/2025, which has been entered and made of record. Claims 3-5, 9 and 11-13 have been amended. No claim has been newly added. Claims 1-16 are pending in the application. Applicant's amendments to the claims have overcome each and every objection previously set forth in the Non-Final Office Action mailed 04/22/2025. The 35 U.S.C. 101 rejection of claims 9-16 has been withdrawn.

Response to Arguments

Applicant's arguments, filed 10/21/2025, with respect to the rejection(s) under 35 U.S.C. 103 have been fully considered, but they are not persuasive.

Applicant argues that Lee discloses 3D volumetric data with multiple layers but does not teach the limitation of "the volumetric image front has a higher quality than the volumetric image back, and reducing the quality of said initial volumetric image by reducing a resolution of the volumetric image front to match a quality of the volumetric image back". Examiner agrees that Lee does not teach this limitation.

Applicant further argues that Lee and Ford in view of Tokunaga fail to disclose reducing a resolution of the volumetric image front to match a quality of the volumetric image back, wherein said volumetric image back is generated from portions of said subject not visible in said at least one 2 dimensional image, in the manner set forth in independent claims 1 and 9. Examiner respectfully disagrees.

First, claims 1 and 9 recite "reduce the quality of said initial volumetric image, said reducing comprising at least one of the following steps: reduce a resolution of the volumetric image front to match a quality of the volumetric image back; change texture of at least a part of said initial volumetric image; simplify a geometry of said initial volumetric image by changing at least one geometrical feature of said initial volumetric image; simplify a geometry of said initial volumetric image by reducing resolution of at least one feature of said initial volumetric image; or any combination thereof." Ford teaches the limitation of "reduce the quality of said initial volumetric image, said reducing comprising …… change texture of at least a part of said initial volumetric image" in paragraph [0083] "if it is determined that an area is not likely to be viewed from up close by a user walking through a model (e.g., a ceiling), then a resolution of the area can be reduced or can be selected such that the area will not be altered to a higher resolution.", paragraph [0058] "generating a 3D model with textures at various resolutions.", and paragraph [0090] "some texture regions can correspond to a large region of mostly low detail and smaller regions of higher (fine) detail. For example, surface 432 can comprise a wall having mostly low detail and a region of high detail (e.g., high interest object 414)."

Second, the claim language recites "reduce the quality of said initial volumetric image". The initial volumetric image includes the volumetric image front and the volumetric image back. To reduce the quality of said initial volumetric image, low resolution textures can be mapped to either the front image or the back image.

Lastly, in response to Applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).

Conclusions: the rejections set forth in the previous Office Action are shown to have been proper, and the claims are rejected below. New citations and parenthetical remarks can be considered new grounds of rejection, and such new grounds of rejection are necessitated by Applicant's amendments to the claims. Therefore, the present Office Action is made final.

Claim Objections

Claims 10-16 are objected to because of the following informalities: the preamble of claims 10-16 recites "the set of instructions"; this should be changed to "The non-transitory computer-readable medium" as amended in claim 9. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 20130271449 A1), hereinafter Lee, in view of Ford et al. (US 20180144535 A1), hereinafter Ford, and further in view of NPL Tokunaga et al. ("Non-Photorealistic 3D Video-Avatar"), hereinafter Tokunaga.

Regarding claim 1, Lee teaches A method for generating a volumetric image of a subject (Lee paragraph [0009] "provided a method of generating three-dimensional (3D) volumetric data") from at least one 2 dimensional image, said at least one 2 dimensional image having a limited number of viewpoints (Lee teaches limited viewpoint locations in Figures 2 and 15: Figure 2, image sensor 201, one viewpoint location; and Figure 15, two color cameras 1502, teaching two viewpoint locations), …… comprising steps of: acquiring said at least one 2 dimensional image (Lee paragraph [0053] "When the image sensor 201 captures the object, a part of the object bearing an image formed on a sensor plane may be referred to as a visible object part, and a part of the object in which a self-occlusion of the object, or an occlusion caused by another object occurs, may be referred to as a hidden object part."); generating an initial volumetric image from said at least one 2 dimensional image, said initial volumetric image having a volumetric image front and a volumetric image back, said volumetric image front generated from portions of said subject visible in said at least one 2 dimensional image, said volumetric image back generated from portions of said subject not visible in said at least one 2 dimensional image (Lee Figure 5, paragraphs [0059]-[0061] "FIG. 5 illustrates a diagram of multi-layered body part ID images, and multi-layered depth images. In particular, an image 510 of FIG. 5 shows a depth image and object part identification information of a visible layer. Additionally, images 520 show depth images and object part identification information of invisible layers." and paragraph [0054] "three faces of a front view 202 of a regular hexahedron may be defined to be three visible object parts, and three faces of a rear view 203 of the regular hexahedron may be defined to be three hidden object parts. That is, the image sensor 201 captures visible object parts corresponding to the three sides of the cube which are visible to the image sensor 201 (e.g., the three faces are visible from the point of view of the image sensor). In contrast, the three sides of the cube which are hidden from the image sensor 201 (i.e., which are invisible to the image sensor 201) correspond to hidden object parts."), …… thereby generating said volumetric image from said at least one 2 dimensional image (Lee paragraph [0150] "3D volumetric data of an object including a visible layer and an invisible layer may be generated.").

Lee fails to teach said volumetric image insertable into an environment, …… said volumetric image front having a higher quality than said volumetric image back; and reducing the quality of said initial volumetric image, said reducing comprising at least one of the following steps: reducing a resolution of the volumetric image front to match a quality of the volumetric image back; changing texture of at least a part of said initial volumetric image; simplifying a geometry of said initial volumetric image by changing at least one geometrical feature of said initial volumetric image; simplifying a geometry of said initial volumetric image by reducing resolution of at least one feature of said initial volumetric image; or any combination thereof.

Ford teaches said volumetric image front having a higher quality than said volumetric image back (Ford teaches a high interest object with high detail and a distant object with low detail; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to substitute the high interest object for the visible layer (volumetric image front) and the distant object for the invisible layer (volumetric image back). Ford paragraph [0090] "surface 432 can comprise a wall having mostly low detail and a region of high detail (e.g., high interest object 414)" and paragraph [0077] "capturing device 412 can capture data corresponding to one or more views of surface 432 (e.g., a wall), surface 434 (e.g., a different wall), or objects within an environment, such as high interest object 414 and distant object 418. It is noted that high interest object 414 can represent an object within a frame (e.g., a photo, a poster, art work, and the like), text, people (e.g., living people, images of people, a face, etc.), or other object of high interest. Distant object 418 can represent any detectable object, such as furniture, living objects (e.g., plants, people, animals, etc.), or virtually any object. Distant object 418 can represent an object at a determined distance from a capturing point of capturing device 412."); and reducing the quality of said initial volumetric image, said reducing comprising at least one of the following steps (Ford paragraph [0083] "resolution allocation component 320 can determine some regions are far away from user viewpoints when navigating the rendered model (e.g., potential viewpoints, probable viewpoints, etc.) and do not need high detail or resolution. For example, if it is determined that an area is not likely to be viewed from up close by a user walking through a model (e.g., a ceiling), then a resolution of the area can be reduced"): reducing a resolution of the volumetric image front to match a quality of the volumetric image back; changing texture of at least a part of said initial volumetric image; simplifying a geometry of said initial volumetric image by changing at least one geometrical feature of said initial volumetric image; simplifying a geometry of said initial volumetric image by reducing resolution of at least one feature of said initial volumetric image; or any combination thereof (Ford paragraph [0055] "3D modeling component 110 can map texture data of 3D imagery data onto 3D shapes or 3D meshes." and paragraph [0058] "the altered textures or original textures can be utilized to facilitate generating a 3D model with textures at various resolutions.").

Lee and Ford are in the same field of endeavor, namely computer graphics (modeling virtual objects in a virtual environment). Ford teaches a multi-resolution texture mapping method that assigns low resolution textures to distant objects to improve efficiency and user satisfaction (Ford paragraph [0045] "can facilitate improved 3D modeling systems, increase efficiency of 3D modeling systems, improve user satisfaction"). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Ford with the method of Lee to improve efficiency and user satisfaction.

Lee in view of Ford fails to teach said volumetric image insertable into an environment. Tokunaga teaches said volumetric image insertable into an environment (Tokunaga, page 2, Figure 1, right column, second paragraph "We have presented the current state of a system for the insertion of a non-photorealistic 3D video avatar in virtual environments, which can be applied in the development of AR games, as well as educational applications" and left column, last paragraph "Comparing the scenes rendered with (figure 1b) and without (figure 1a) our approach, it is possible to see that the rendered avatar in 1b matches more closely the whole virtual environment design concept. In figure 1a the avatar is too realistic compared to the environment around, creating an undesirable mismatch.").

Lee, Ford and Tokunaga are in the same field of endeavor, namely computer graphics (modeling virtual objects in a virtual environment). Tokunaga teaches inserting virtual avatars into a VR environment and using cartoon-style rendering as a way to reduce resolution, to solve the mismatch between avatar and virtual environment (Tokunaga, left column, first paragraph "The expected result is a reduction in the visual and cognitive mismatch between player image and synthetic environment, allowing for a more immersive experience."). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Tokunaga with the method of Lee in view of Ford to improve the immersive experience within the virtual environment.

Regarding claim 2, Lee in view of Ford and Tokunaga teach the method of claim 1, and further teach additionally comprising a step of providing said texture as a pattern (Ford paragraph [0092] "A library can comprise a set of previously identified patterns or pre-defined patterns. For example, a 3D model can comprise a model of an architectural structure having multiple rooms. The rooms can have common patterns and once a common pattern is identified, the pattern can be reused." and paragraph [0106] "regions of texture can be selected based on quality metrics of the regions or portions of the regions (e.g., identification of variance of visual detail), identifiable patterns (e.g., repeating patterns, semi-random patterns, random patterns, etc.) identified levels of interest associated with regions, etc.").

Regarding claim 3, Lee in view of Ford and Tokunaga teach the method of claim 2, and further teach additionally comprising a step of fixing said pattern (Ford paragraph [0106] "a system can select regions of texture based on identified architectural structures, objects, text, distance from a capturing point, distance from a predicted viewpoint, and the like."), said fixing selected from a group consisting of fixing said pattern to a layer on a virtual camera, fixing said pattern to said initial volumetric image, fixing said pattern to a skeleton of the volumetric image, fixing said pattern to a center of mass of the volumetric image, or fixing said pattern to a fixed point in space (Ford paragraph [0093] "The identified repeating patterned textured can then be utilized to represent various regions of a 3D model and/or stored in a library of patterns. For example, a repeating patterned texture can be designated as a "tiled" texture for rendering in a tiled fashion on the model.").

Regarding claim 4, Lee in view of Ford and Tokunaga teach the method of claim 2, and further teach additionally comprising a step of selecting said texture as a pattern from a group consisting of a pattern of said environment, a proprietary pattern, a user-selected pattern, or a user-generated pattern (Ford paragraph [0096] "a user can provide input prior to rendering that indicates the model should be rendered utilizing or not utilizing resolution allocation techniques as described herein. It is noted that a user can selectively determine to utilize certain resolution allocation techniques (e.g., pattern identification, high interest regions, etc.).").

Regarding claim 5, Lee in view of Ford and Tokunaga teach the method of claim 2, and further teach additionally comprising a step of providing said texture as a pattern either changing over time or fixed over time (Ford paragraph [0092] "the residual texture can be artificially generated (e.g., based on a pseudo-random shading technique, algorithmically generated pseudo-random texture, etc.) at run time (e.g., rendering time) and combined with the patterned texture. In some instances, the residual texture can correspond to lighting variations and reflections on reflective or partially reflective surfaces.").

Regarding claim 6, Lee in view of Ford and Tokunaga teach the method of claim 1, and further teach additionally comprising generating said reducing of said quality (Ford paragraph [0083] "if it is determined that an area is not likely to be viewed from up close by a user walking through a model (e.g., a ceiling), then a resolution of the area can be reduced") by a means comprising a member selected from a group consisting of: matching a volumetric image geometry style of said initial volumetric image to an environment geometry style of an environment; reducing said initial volumetric image to a skeleton plus an extent; reducing said initial volumetric image to a center of mass plus an extent; applying a pattern to said volumetric image back; and any combination thereof (Ford teaches determining a reduced resolution for a distant object; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to substitute the distant object for the invisible layer of Lee. Ford further teaches a library of patterns and applying low resolution textures to distant objects: paragraph [0092] "resolution allocation component 320 can identify a pre-determined pattern based on a library of pre-determined patterns. A library can comprise a set of previously identified patterns or pre-defined patterns." and paragraph [0096] "The user input can correspond to a user's desire to alter a resolution of a 3D model or a portion of the 3D model. For example, a user can view a rendering of a 3D model and can determine whether an area or object within the model should have an altered resolution (e.g., increased resolution, decreased resolution, etc.).").

Regarding claim 7, Lee in view of Ford and Tokunaga teach the method of claim 1, and further teach additionally comprising a step of selecting said higher quality comprising a member of a group consisting of a higher resolution, more detail, fewer artifacts, and any combination thereof (Ford teaches a high interest object with high detail and a distant object with low detail; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the high interest object with high detail with the visible layer of Lee. Ford paragraph [0090] "surface 432 can comprise a wall having mostly low detail and a region of high detail (e.g., high interest object 414)" and paragraph [0077] "capturing device 412 can capture data corresponding to one or more views of surface 432 (e.g., a wall), surface 434 (e.g., a different wall), or objects within an environment, such as high interest object 414 and distant object 418. It is noted that high interest object 414 can represent an object within a frame (e.g., a photo, a poster, art work, and the like), text, people (e.g., living people, images of people, a face, etc.), or other object of high interest. Distant object 418 can represent any detectable object, such as furniture, living objects (e.g., plants, people, animals, etc.), or virtually any object. Distant object 418 can represent an object at a determined distance from a capturing point of capturing device 412.").

Regarding claim 8, Lee in view of Ford and Tokunaga teach the method of claim 1, and further teach additionally comprising a step of selecting said at least a part of said initial volumetric image to be at least a part of said volumetric image back (Ford paragraph [0082] "an area of a surface (e.g., distant object 418 or a ceiling) which is only ever seen from far away may have limited resolution due to distance, and a patch of surface which is only seen at a very shallow angle (e.g., portion of surface 434) may only have limited resolution available as measured in pixels per square meter of surface area …… This resolution reduction may be achieved by reducing the resolution at which textures in the region are stored or selecting low resolution images from a low resolution scan.").

Regarding claim 9, Lee teaches A non-transitory computer-readable medium comprising a set of instructions that, when executed by a computing device, are configured to cause the computing device to generate a volumetric image of a subject (Lee paragraph [0153] "The method of generating 3D volumetric data according to the above-described example embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations embodied by a computer.") from at least one 2 dimensional image, said at least one 2 dimensional image having a limited number of viewpoints (Lee teaches limited viewpoint locations in Figures 2 and 15: Figure 2, image sensor 201, one viewpoint location; and Figure 15, two color cameras 1502, teaching two viewpoint locations), …… said instructions comprising steps configured to: acquire said at least one 2 dimensional image (Lee paragraph [0053] "When the image sensor 201 captures the object, a part of the object bearing an image formed on a sensor plane may be referred to as a visible object part, and a part of the object in which a self-occlusion of the object, or an occlusion caused by another object occurs, may be referred to as a hidden object part."); generate an initial volumetric image from said at least one 2 dimensional image, said initial volumetric image having a volumetric image front and a volumetric image back, said volumetric image front generated from portions of said subject visible in said at least one 2 dimensional image, said volumetric image back generated from portions of said subject not visible in said at least one 2 dimensional image (Lee Figure 5, paragraphs [0059]-[0061] "FIG. 5 illustrates a diagram of multi-layered body part ID images, and multi-layered depth images. In particular, an image 510 of FIG. 5 shows a depth image and object part identification information of a visible layer. Additionally, images 520 show depth images and object part identification information of invisible layers." and paragraph [0054] "three faces of a front view 202 of a regular hexahedron may be defined to be three visible object parts, and three faces of a rear view 203 of the regular hexahedron may be defined to be three hidden object parts. That is, the image sensor 201 captures visible object parts corresponding to the three sides of the cube which are visible to the image sensor 201 (e.g., the three faces are visible from the point of view of the image sensor). In contrast, the three sides of the cube which are hidden from the image sensor 201 (i.e., which are invisible to the image sensor 201) correspond to hidden object parts.").

Lee fails to teach said volumetric image insertable into an environment …… said volumetric image front having a higher quality than said volumetric image back; and reduce the quality of said initial volumetric image, said reducing comprising at least one of the following steps: reduce a resolution of the volumetric image front to match a quality of the volumetric image back; change texture of at least a part of said initial volumetric image; simplify a geometry of said initial volumetric image by changing at least one geometrical feature of said initial volumetric image; simplify a geometry of said initial volumetric image by reducing resolution of at least one feature of said initial volumetric image; or any combination thereof.

Ford teaches said volumetric image front having a higher quality than said volumetric image back (Ford teaches a high interest object with high detail and a distant object with low detail; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to substitute the high interest object for the visible layer (volumetric image front) and the distant object for the invisible layer (volumetric image back). Ford paragraph [0090] "surface 432 can comprise a wall having mostly low detail and a region of high detail (e.g., high interest object 414)" and paragraph [0077] "capturing device 412 can capture data corresponding to one or more views of surface 432 (e.g., a wall), surface 434 (e.g., a different wall), or objects within an environment, such as high interest object 414 and distant object 418. It is noted that high interest object 414 can represent an object within a frame (e.g., a photo, a poster, art work, and the like), text, people (e.g., living people, images of people, a face, etc.), or other object of high interest. Distant object 418 can represent any detectable object, such as furniture, living objects (e.g., plants, people, animals, etc.), or virtually any object. Distant object 418 can represent an object at a determined distance from a capturing point of capturing device 412."); and reduce the quality of said initial volumetric image, said reducing comprising at least one of the following steps (Ford paragraph [0083] "resolution allocation component 320 can determine some regions are far away from user viewpoints when navigating the rendered model (e.g., potential viewpoints, probable viewpoints, etc.) and do not need high detail or resolution. For example, if it is determined that an area is not likely to be viewed from up close by a user walking through a model (e.g., a ceiling), then a resolution of the area can be reduced"): reduce a resolution of the volumetric image front to match a quality of the volumetric image back; change texture of at least a part of said initial volumetric image; simplify a geometry of said initial volumetric image by changing at least one geometrical feature of said initial volumetric image; simplify a geometry of said initial volumetric image by reducing resolution of at least one feature of said initial volumetric image; or any combination thereof (Ford paragraph [0055] "3D modeling component 110 can map texture data of 3D imagery data onto 3D shapes or 3D meshes." and paragraph [0058] "the altered textures or original textures can be utilized to facilitate generating a 3D model with textures at various resolutions.").

Lee and Ford are in the same field of endeavor, namely computer graphics (modeling virtual objects in a virtual environment). Ford teaches a multi-resolution texture mapping method that assigns low resolution textures to distant objects to improve efficiency and user satisfaction (Ford paragraph [0045] "can facilitate improved 3D modeling systems, increase efficiency of 3D modeling systems, improve user satisfaction"). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Ford with the method of Lee to improve efficiency and user satisfaction.

Lee in view of Ford fails to teach said volumetric image insertable into an environment. Tokunaga teaches said volumetric image insertable into an environment (Tokunaga, page 2, Figure 1, right column, second paragraph "We have presented the current state of a system for the insertion of a non-photorealistic 3D video avatar in virtual environments, which can be applied in the development of AR games, as well as educational applications" and left column, last paragraph "Comparing the scenes rendered with (figure 1b) and without (figure 1a) our approach, it is possible to see that the rendered avatar in 1b matches more closely the whole virtual environment design concept. In figure 1a the avatar is too realistic compared to the environment around, creating an undesirable mismatch.").

Lee, Ford and Tokunaga are in the same field of endeavor, namely computer graphics (modeling virtual objects in a virtual environment). Tokunaga teaches inserting virtual avatars into a VR environment and using cartoon-style rendering as a way to reduce resolution, to solve the mismatch between avatar and virtual environment (Tokunaga, left column, first paragraph "The expected result is a reduction in the visual and cognitive mismatch between player image and synthetic environment, allowing for a more immersive experience."). Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Tokunaga with the method of Lee in view of Ford to improve the immersive experience within the virtual environment.

Regarding claim 10, Lee in view of Ford and Tokunaga teach the set of instructions of claim 9, and further teach wherein said texture is provided as a pattern (Ford paragraph [0092] "A library can comprise a set of previously identified patterns or pre-defined patterns. For example, a 3D model can comprise a model of an architectural structure having multiple rooms. The rooms can have common patterns and once a common pattern is identified, the pattern can be reused." and paragraph [0106] "regions of texture can be selected based on quality metrics of the regions or portions of the regions (e.g., identification of variance of visual detail), identifiable patterns (e.g., repeating patterns, semi-random patterns, random patterns, etc.) identified levels of interest associated with regions, etc.").

Regarding claim 11, Lee in view of Ford and Tokunaga teach the set of instructions of claim 10, and further teach wherein said pattern is fixed (Ford paragraph [0106] "a system can select regions of texture based on identified architectural structures, objects, text, distance from a capturing point, distance from a predicted viewpoint, and the like."), said fixing selected from a group consisting of fixed to a layer on a virtual camera, fixed to said initial volumetric image, fixed to a skeleton of the volumetric image, fixed to a center of mass of the volumetric image, or fixed to a fixed point in space (Ford paragraph [0093] "The identified repeating patterned textured can then be utilized to represent various regions of a 3D model and/or stored in a library of patterns. For example, a repeating patterned texture can be designated as a "tiled" texture for rendering in a tiled fashion on the model.").

Regarding claim 12, Lee in view of Ford and Tokunaga teach the set of instructions of claim 10, and further teach wherein said texture as a pattern is selected from a group consisting of a pattern of said environment, a proprietary pattern, a user-selected pattern, or a user-generated pattern (Ford paragraph [0096] "a user can provide input prior to rendering that indicates the model should be rendered utilizing or not utilizing resolution allocation techniques as described herein. It is noted that a user can selectively determine to utilize certain resolution allocation techniques (e.g., pattern identification, high interest regions, etc.).").

Regarding claim 13, Lee in view of Ford and Tokunaga teach the set of instructions of claim 10, and further teach wherein said texture as a pattern is provided either changing over time or fixed over time (Ford paragraph [0092] "the residual texture can be artificially generated (e.g., based on a pseudo-random shading technique, algorithmically generated pseudo-random texture, etc.) at run time (e.g., rendering time) and combined with the patterned texture. In some instances, the residual texture can correspond to lighting variations and reflections on reflective or partially reflective surfaces.").

Regarding claim 14, Lee in view of Ford and Tokunaga teach the set of instructions of claim 9, and further teach wherein said reducing of said quality (Ford paragraph [0083] "if it is determined that an area is not likely to be viewed from up close by a user walking through a model (e.g., a ceiling), then a resolution of the area can be reduced") is generated by a means comprising a member selected from a group consisting of: matching a volumetric image geometry style of said initial volumetric image to an environment geometry style of an environment; reducing said initial volumetric image to a skeleton plus an extent; reducing said initial volumetric image to a center of mass plus an extent; applying a pattern to said volumetric image back; and any combination thereof (Ford teaches determining a reduced resolution for a distant object; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to substitute the distant object for the invisible layer of Lee. Ford further teaches a library of patterns and applying low resolution textures to distant objects: paragraph [0092] "resolution allocation component 320 can identify a pre-determined pattern based on a library of pre-determined patterns. A library can comprise a set of previously identified patterns or pre-defined patterns." and paragraph [0096] "The user input can correspond to a user's desire to alter a resolution of a 3D model or a portion of the 3D model. For example, a user can view a rendering of a 3D model and can determine whether an area or object within the model should have an altered resolution (e.g., increased resolution, decreased resolution, etc.).").

Regarding claim 15, Lee in view of Ford and Tokunaga teach the set of instructions of claim 9, and further teach wherein said higher quality comprises a member selected from a group consisting of a higher resolution, more detail, fewer artifacts, and any combination thereof (Ford teaches a high interest object with high detail and a distant object with low detail; it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the high interest object with high detail with the visible layer of Lee. Ford paragraph [0090] "surface 432 can comprise a wall having mostly low detail and a region of high detail (e.g., high interest object 414)" and paragraph [0077] "capturing device 412 can capture data corresponding to one or more views of surface 432 (e.g., a wall), surface 434 (e.g., a different wall), or objects within an environment, such as high interest object 414 and distant object 418. It is noted that high interest object 414 can represent an object within a frame (e.g., a photo, a poster, art work, and the like), text, people (e.g., living people, images of people, a face, etc.), or other object of high interest. Distant object 418 can represent any detectable object, such as furniture, living objects (e.g., plants, people, animals, etc.), or virtually any object. Distant object 418 can represent an object at a determined distance from a capturing point of capturing device 412.").

Regarding claim 16, Lee in view of Ford and Tokunaga teach the set of instructions of claim 9, and further teach wherein said at least a part of said initial volumetric image is selected to be at least a part of said volumetric image back (Ford paragraph [0082] "an area of a surface (e.g., distant object 418 or a ceiling) which is only ever seen from far away may have limited resolution due to distance, and a patch of surface which is only seen at a very shallow angle (e.g., portion of surface 434) may only have limited resolution available as measured in pixels per square meter of surface area …… This resolution reduction may be achieved by reducing the resolution at which textures in the region are stored or selecting low resolution images from a low resolution scan.").

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOMING WEI, whose telephone number is (571) 272-3831. The examiner can normally be reached M-F 8:00-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611
/XIAOMING WEI/
Examiner, Art Unit 2611
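For readers less familiar with the underlying technology, the limitation at the center of the §103 dispute is mechanical enough to sketch. Below is a minimal Python/NumPy illustration of "reducing a resolution of the volumetric image front to match a quality of the volumetric image back", assuming simple block-average downsampling and hypothetical texture sizes; neither the application nor Ford is known to implement it this way:

```python
import numpy as np

def block_downsample(tex: np.ndarray, factor: int) -> np.ndarray:
    """Reduce texture resolution by averaging non-overlapping factor x factor blocks."""
    h, w, c = tex.shape
    h2, w2 = h // factor, w // factor
    tex = tex[: h2 * factor, : w2 * factor]  # crop to a multiple of factor
    return tex.reshape(h2, factor, w2, factor, c).mean(axis=(1, 3))

# Hypothetical textures: the front (built from visible pixels) starts at
# 4x the linear resolution of the back (hallucinated from hidden portions).
front = np.random.rand(512, 512, 3)
back = np.random.rand(128, 128, 3)

# Downsample the front until it matches the back's resolution.
factor = front.shape[0] // back.shape[0]  # 4
front_matched = block_downsample(front, factor)
assert front_matched.shape == back.shape
```

Block averaging stands in here for any quality-matching step; the claim equally allows changing texture or simplifying geometry as alternative ways of reducing the initial volumetric image's quality.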

Prosecution Timeline

Jul 31, 2023
Application Filed
Apr 18, 2025
Non-Final Rejection — §103
Oct 21, 2025
Response Filed
Nov 05, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603064
CIRCUIT AND METHOD FOR VIDEO DATA CONVERSION AND DISPLAY DEVICE
2y 5m to grant • Granted Apr 14, 2026
Patent 12597246
METHOD AND APPARATUS FOR GENERATING ADVERSARIAL PATCH
2y 5m to grant • Granted Apr 07, 2026
Patent 12597175
Avatar Creation From Natural Language Description
2y 5m to grant • Granted Apr 07, 2026
Patent 12586280
TECHNIQUES FOR GENERATING DUBBED MEDIA CONTENT ITEMS
2y 5m to grant • Granted Mar 24, 2026
Patent 12586318
METHOD AND APPARATUS FOR LABELING ROAD ELEMENT, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+26.1%)
2y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
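One plausible reconstruction of how these projections combine, assuming the tool adds the interview lift to the career allow rate and caps the result at 99% (since the raw sum would exceed 100%; the vendor's actual formula is not published):

```python
# Assumed combination of the projection figures shown above.
base = 28 / 34          # career allow rate, ~82.4% -> displayed as 82%
interview_lift = 0.261  # "+26.1% interview lift"

with_interview = min(base + interview_lift, 0.99)  # cap at 99% (assumption)
print(f"grant probability: {base:.0%}, with interview: {with_interview:.0%}")
# -> grant probability: 82%, with interview: 99%
```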
