Prosecution Insights
Last updated: April 18, 2026
Application No. 18/576,540

METHOD FOR ADDING TEXT ON 3-DIMENSIONAL MODEL AND 3-DIMENSIONAL MODEL PROCESSING APPARATUS

Non-Final OA (§103, §112)
Filed
Jan 04, 2024
Examiner
AHMAD, NAUMAN UDDIN
Art Unit
2611
Tech Center
2600 — Communications
Assignee
Medit Corp.
OA Round
3 (Non-Final)
78%
Grant Probability
Favorable
3-4
OA Rounds
2y 8m
To Grant
98%
With Interview

Examiner Intelligence

Grants 78% — above average
78%
Career Allow Rate
28 granted / 36 resolved
+15.8% vs TC avg
Strong +20% interview lift
Without
With
+19.8%
Interview Lift
resolved cases with interview
Typical timeline
2y 8m
Avg Prosecution
31 currently pending
Career history
67
Total Applications
across all art units

Statute-Specific Performance

§101
4.8%
-35.2% vs TC avg
§103
68.4%
+28.4% vs TC avg
§102
4.1%
-35.9% vs TC avg
§112
15.8%
-24.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 36 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to Applicant's amendment filed 03/17/2026, which has been entered and made of record. Claims 1, 4, 7, 9, 11, 14, 17, 18, and 20 have been amended. No claim has been cancelled or newly added. Claims 1-20 are pending in the application.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument (Applicant's arguments are directed to newly amended limitations, which are addressed by new prior art presented in this Office Action).

Claim Rejections - 35 USC § 112

The previous 35 U.S.C. 112(b) rejections of claims 4-8 and 14-17 have been withdrawn.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 9-11, 13, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (U.S. Patent Application Publication No. 2013/0002656), hereinafter referenced as Zhang, in view of Sugano et al. (U.S. Patent Application Publication No. 2020/0410754), hereinafter referenced as Sugano, Jaeger (U.S. Patent Application Publication No. 2004/0027383), hereinafter referenced as Jaeger, and Gabriel (ProBuilder Tutorial 5: Texturing Part I - Materials and Vertex Colors (v2.3, Unity)) [https://www.youtube.com/watch?v=m085rEQmVP8], hereinafter referenced as Gabriel.

Regarding claim 1, Zhang teaches a method of adding text on a three-dimensional model representing an object, the method comprising: (paragraph 2 teaches "a system and method for combining text in a three-dimensional (3D) manner with associated 3D content."); obtaining three-dimensional text data corresponding to at least one character (paragraph 11 teaches "includes receiving both a 3D image content including at least one 3D image and text associated with the at least one 3D image" and paragraph 47 teaches "ultimate placement of the 3D text in the 3D content."); "3D text in 3D content" explicitly shows that the text received/obtained is 3D; and displaying the three-dimensional text data and the three-dimensional model on a screen (paragraph 68 teaches "displaying text with 3D content"); this shows both the text and the 3D content/model displayed on a screen.

However, Zhang fails to explicitly teach determining whether the three-dimensional text data and the three-dimensional model are combinable with each other based on whether the three-dimensional text data with a certain offset applied thereto and the three-dimensional model intersect each other, even though paragraph 73 teaches "the 3D text such as a subtitle or caption is then generated and Positioned for display with the 3D image content using the parallax value determined"; positioning using the parallax value shows an offset, and one of ordinary skill in the art would understand that a determination of the offset, or some constraint ensuring the text does not go off-screen or off the 3D model, would need to be made so that the text can accurately be displayed in correlation to the 3D model.
However, Sugano explicitly teaches determining whether the three-dimensional text data and the three-dimensional model are combinable with each other based on whether the three-dimensional text data with a certain offset applied thereto and the three-dimensional model intersect each other (Sugano, paragraph 27 teaches "and a flag indicating that the 3D models of the respective time points do not interfere with each other,"); this shows whether the two 3D models (the text and the model/content from Zhang) are combinable based on the flag indicating whether the two 3D models interfere/intersect with each other, and this would account for the positioning from the parallax (offset) of Zhang. Sugano is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of indicating whether or not 3D models/objects intersect/interfere with each other. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Zhang's invention with the determining-combinability techniques of Sugano to reduce the amount of transmission data (Sugano, paragraph 181). This would be done by first checking, using a flag, whether objects interfere prior to displaying them.

However, the combination of Zhang and Sugano fails to teach wherein, when the three-dimensional text data with the certain offset applied thereto intersects a surface of the three-dimensional model, the three-dimensional text data and the three-dimensional model are determined to be combinable, and when the three-dimensional text data with the certain offset applied thereto does not intersect the surface of the three-dimensional model, the three-dimensional text data and the three-dimensional model are determined to be non-combinable.
However, Jaeger teaches wherein when the three-dimensional text data with the certain offset applied thereto intersects a surface of the three-dimensional model (Jaeger, paragraph 28 teaches "user may then cause the switch 21 and star 23 to overlap or intersect or otherwise be combined"); this shows two 3D models (one of which would be the 3D text data from above when viewed in combination) intersecting, with the offset being the movement; the three-dimensional text data and the three-dimensional model are determined to be combinable (Jaeger, paragraph 28 teaches "click and drag on the star 23 to translate it to be superimposed on the switch 21…mere superposition of the two onscreen objects may be a sufficient action to cause the two objects to be glued together and combined as described below"); this shows that when intersection occurs, the two 3D models are combined, and in order for them to be combined, they must first be determined to be combinable; and when the three-dimensional text data with the certain offset applied thereto does not intersect the surface of the three-dimensional model, the three-dimensional text data and the three-dimensional model are determined to be non-combinable (Jaeger, paragraph 32 teaches "combination may be made explicitly by recourse to the Info Canvas 46 and selection of the "Glue" entry, or may be made implicitly by the context of the action (one assigned-to object being dragged over another assigned-to object)"); if combination is made by the context of an action such as dragging one object/model over another, then objects/models that are not dragged (and thus do not intersect) would not be combined and would be determined to be non-combinable. Jaeger is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of combining virtual objects according to intersection.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Zhang and Sugano with the combining techniques of Jaeger to combine any two onscreen objects where the combination creates a beneficial joining of characteristics of one object with another (Jaeger, paragraph 10). Beneficial joining of characteristics ensures a better user experience when the models are combined.

However, the combination of Zhang, Sugano, and Jaeger fails to explicitly teach displaying a first portion of the three-dimensional text data determined to be non-combinable with the three-dimensional model differently from a second portion of the three-dimensional text data determined to be combinable with the three-dimensional model (although Jaeger, paragraph 16 teaches "graphic or photo object over a switch object… When the switch or overlying object is touched or clicked, the switch changes state and alters its color or brightness to indicate actuation"; this shows that when a user drags in Jaeger, the second portion of 3D data (which is clicked and dragged, thus combinable) would be displayed differently, by a different color, in comparison to the first portion of data that is not clicked, dragged, or combined and thus remains the same color).

However, Gabriel explicitly teaches displaying a first portion of the three-dimensional text data determined to be non-combinable with the three-dimensional model differently from a second portion of the three-dimensional text data determined to be combinable with the three-dimensional model (Gabriel, 3:24-3:50 [see fig. 3 of this action] teaches selecting certain faces and applying tiles to those faces selected); this shows displaying a first portion of (unselected) 3D data that is non-combinable differently (using a gray color) than a second portion of (selected) 3D data that is combinable and displayed using a blue color; the tiles selected here act as the 3D model since they are combined to the selected/second portion, and the 3D data would be 3D text data when viewed in combination with the Zhang reference above. Gabriel is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of combining 3D data and models using selection and varying display of different elements. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Zhang, Sugano, and Jaeger with the selection and varying-display techniques of Gabriel to ensure "a very quick way to have a nice large list of materials and apply them quickly" (Gabriel, 6:17-6:23). This would be done because, with the selection and varying-display techniques, a user would know which places are combinable, and the variety of applications of material in the scene would ensure a more engaging program.

Regarding claim 3, the combination of Zhang, Sugano, Jaeger and Gabriel teaches further comprising determining a position of the three-dimensional text data based on a user input (Zhang, paragraph 47 teaches "meets the producer/user requirements for ultimate placement of the 3D text in the 3D content"); user requirements are obtained using user input.
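For readers mapping the claim language onto an implementation, the "intersects the surface → combinable" determination discussed for claim 1 amounts to a segment-versus-surface intersection test: sweep each text vertex along the offset and ask whether its path crosses any model triangle. The sketch below is purely illustrative and is not from the application or any cited reference; the function names, the NumPy dependency, and the Möller–Trumbore-style helper are all assumptions.

```python
import numpy as np

def segment_hits_triangle(p0, p1, tri, eps=1e-9):
    """Möller–Trumbore-style test restricted to the segment p0 -> p1."""
    v0, v1, v2 = (np.asarray(v, float) for v in tri)
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0                         # segment direction
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(d, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                    # segment parallel to the triangle plane
        return False
    f = 1.0 / a
    s = p0 - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = f * np.dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return False
    t = f * np.dot(e2, q)
    return 0.0 <= t <= 1.0              # hit must lie within the segment

def combinable(text_vertices, offset, model_triangles):
    """Combinable iff any offset text vertex's travel path crosses the model surface."""
    offset = np.asarray(offset, float)
    for p in np.asarray(text_vertices, float):
        for tri in model_triangles:
            if segment_hits_triangle(p, p + offset, tri):
                return True
    return False
```

For example, a text vertex at (0.2, 0.2, 1) offset by (0, 0, -2) pierces a triangle lying in the z = 0 plane, so the pair would be reported combinable; the same vertex offset away from the triangle would not.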
Regarding claim 9, the combination of Zhang, Sugano, Jaeger and Gabriel teaches wherein the displaying the first portion differently from the second portion comprises displaying at least one of a color, a shape, or brightness of the first portion of the three-dimensional text data differently from that of the second portion of the three-dimensional text data (Sugano, paragraph 177 teaches "transmitted together with a flag indicating the subjects do not interfere with each other in a 3-dimensional space" and paragraph 179 teaches "when the depth information of each object is reprojected to a camera, it is possible to identify a shape photographed"); this shows identifying/determining the shape of the 3D object (the text and content/model from Zhang) based on the flag that determines whether the objects are combinable (since it comes in a later step). Additionally, Gabriel, 3:24-3:50 [see fig. 3 of this action] teaches the selected/combinable (second portion of) data being blue whereas the unselected/non-combinable (first portion of) data is gray, which shows these two portions differing in displayed color. Lastly, Jaeger, paragraph 16 teaches "graphic or photo object over a switch object… When the switch or overlying object is touched or clicked, the switch changes state and alters its color or brightness to indicate actuation"; this shows that brightness would also differ between clicked (to drag, thus combinable) data and unclicked/non-combinable data. The same motivations used in claim 1 apply here in claim 9.
Regarding claim 10, the combination of Zhang, Sugano, Jaeger and Gabriel teaches further comprising obtaining a three-dimensional model with text added thereon by combining the three-dimensional text data with the certain offset applied thereto on the three-dimensional model or by deleting data corresponding to the three-dimensional text data with the certain offset applied thereto from the three-dimensional model (Zhang, paragraph 32 teaches "two types of text that can be added to content video in accordance with embodiments of the present invention" and paragraph 10 teaches "determining parallax information from the 3D content and by using such parallax information together with one or more requirements supplied by a user or producer for best positioning text in a 3D manner into associated 3D content."); this shows the adding of the text to the 3D content/model of the video (to obtain the resulting 3D model), and the parallax information used alongside requirements to position the text shows it would be done with the offset applied.

Regarding claim 11, the apparatus claim 11 recites similar limitations as method claim 1, and thus is rejected under similar rationale. In addition, Zhang, fig. 1 teaches computer/apparatus 22 with processor 38 and display 36.

Regarding claim 13, the apparatus claim 13 recites similar limitations as method claim 3, and thus is rejected under similar rationale.

Regarding claim 18, the apparatus claim 18 recites similar limitations as method claim 9, and thus is rejected under similar rationale.

Regarding claim 19, the apparatus claim 19 recites similar limitations as method claim 10, and thus is rejected under similar rationale.

Regarding claim 20, the computer-readable recording medium claim 20 recites similar limitations as method claim 1, and thus is rejected under similar rationale.
In addition, Zhang, paragraph 24 teaches "various processes which may be substantially represented in computer readable media and so executed by a computer or processor".

Claims 2, 4, 7-8, 12, 14, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Zhang, Sugano, Jaeger and Gabriel as applied to claims 1 and 11 above, and further in view of Shimizu et al. (U.S. Patent No. 6867787), hereinafter referenced as Shimizu.

Regarding claim 2, the combination of Zhang, Sugano, Jaeger and Gabriel teaches obtaining two-dimensional mesh data by connecting a plurality of first vertexes included in the contour data (Sugano, paragraph 115 teaches "2-dimensional image data and the depth image data based on the viewpoints of the respective image capturing devices and the parameters of the respective image capturing devices to create a mesh" and paragraph 156 teaches "geometric information (geometry) indicating 3-dimensional positions of respective points (vertices) that form the created mesh and connections (polygons) of the respective points and 2-dimensional image data of the mesh as 3-dimensional data of the subject"); a mesh based on the 2D image data of the image would be a 2D mesh (even with depth, which can act as per-vertex attributes such as color or texture coordinates), and being based on the geometry, vertices, and connected polygons shows connecting vertices included in the contour data (which can also be of the polygon model described in Shimizu below); and a three-dimensional mesh (Sugano, paragraph 179 teaches "(a visual volume intersection method) generates a 3D object (a mesh)"); a 3D object as a mesh shows 3D mesh data.
However, the combination of Zhang, Sugano, Jaeger and Gabriel fails to teach wherein the obtaining of the three-dimensional text data comprises: receiving a user input about the at least one character; obtaining contour data corresponding to the at least one character; and obtaining, as the three-dimensional text data, three-dimensional mesh data including a first surface including the two-dimensional mesh data, and a second surface spaced apart from the first surface by a certain depth.

However, Shimizu teaches wherein the obtaining of the three-dimensional text data comprises: receiving a user input about the at least one character (Shimizu, col. 1, lines 43-45 teach "character generator which analyses the form of a character inputted by an operator via a keyboard"); obtaining contour data corresponding to the at least one character (Shimizu, col. 1, lines 45-46 teach "generates a polygon model corresponding to the inputted character"); the polygon model shows contour data being obtained; and obtaining, as the three-dimensional text data, three-dimensional mesh data including a first surface including the two-dimensional mesh data, and a second surface spaced apart from the first surface by a certain depth (Shimizu, fig. 21 shows obtained 3D text with multiple surfaces spaced out by depth; col. 13, line 33 teaches "front face direction of a face depth polygon" and col. 13, lines 37-41 teach "subtracting the length of depth from all the vertex data as the Z coordinate value of a back surface polygon to form a face back surface polygon having the normal vector of -Z direction."); the 3D mesh from Sugano (formed from the 2D mesh data) would be the text data here, with the front/first and second/back surfaces separated by the depth mentioned and shown. Shimizu is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of 3D text with multiple surfaces and from user input.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Zhang, Sugano, Jaeger and Gabriel with the user-input and 3D-text-obtainment techniques of Shimizu for the purpose of improving efficiency of an editing operation by gathering or classifying (Shimizu, col. 9, lines 8-10), and so that respectively independent space areas can be easily formed in accordance with the request of the user; accordingly, "the maneuverability of the user can be more improved than before" (Shimizu, col. 18, lines 21-24). This means a more user-friendly design.

Regarding claim 4, the combination of Zhang, Sugano, Jaeger, Gabriel and Shimizu teaches further comprising: receiving a user input for dragging the three-dimensional text data displayed on the screen to a certain position (Shimizu, col. 6, lines 63-67 teach "when the user set a mouse cursor on the selected group and moves it while pressing (referred it to as a drag operation, hereinafter), the characters can be moved in parallel vertically (Y-axis direction) and horizontally (X-axis direction) in the virtual three-dimensional space area."); this shows dragging text data to move it to desired/certain positions; determining a reference point based on the certain position on the screen (Shimizu, col. 7, lines 10-11 teach "Next, an operation for changing the display positions of the characters inputted to the scene window 62 by a rotating movement" and col. 7, lines 19-22 teach "when the user clicks a rotation axis switch 69, any one of the X-axis, the Y-axis and the Z-axis is selected as a coordinate axis to be used as a central axis for a rotational moving operation."); since the rotation operation comes after the drag, it is based on the certain position the drag moved to, and the reference point here is the point on the rotational axis (pivot point) in correlation to that certain position; and determining a position of the three-dimensional text data based on a distance from the reference point to the surface of the three-dimensional model (Shimizu, col. 7, lines 24-27 teach "the rotating movement can be performed in the virtual three-dimensional space area. In such a rotational movement operation, as well as the above described parallel movement operation, a changed result is displayed"); the changed result after rotation is the determined position of the 3D text, and this would be based on the distance from the pivot/rotation/reference point to the 3D model of Zhang, since the 3D text sits at a distance from the 3D model in Zhang and the rotation would account for it to avoid inconsistencies and inaccuracies. The same motivations used in claim 2 apply here in claim 4.
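The claim 2 structure discussed above, a first surface containing the two-dimensional mesh data plus a second surface spaced apart by a certain depth (compare Shimizu's face and back-surface polygons), can be illustrated with a minimal extrusion sketch. This is not code from the record; the function name and the NumPy array representation are assumptions for illustration only.

```python
import numpy as np

def extrude_text_mesh(front_vertices, depth):
    """Illustrative extrusion: lift 2D contour vertices into a z = 0 front
    surface, then duplicate them at z = -depth to form the back surface,
    analogous to forming a back-surface polygon by subtracting the depth
    from the Z coordinate of each vertex."""
    front = np.column_stack([np.asarray(front_vertices, float),
                             np.zeros(len(front_vertices))])   # first surface, z = 0
    back = front.copy()
    back[:, 2] -= depth                                        # second surface, z = -depth
    return front, back
```

For instance, extruding the 2D contour [(0, 0), (1, 0), (0, 1)] by a depth of 0.5 yields a front surface at z = 0 and a back surface at z = -0.5, with matching vertex ordering so side faces could be stitched between the two.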
Regarding claim 7, the combination of Zhang, Sugano, Jaeger, Gabriel and Shimizu teaches wherein the determining of whether the three-dimensional text data and the three-dimensional model are combinable with each other comprises: moving a second vertex included in a second surface of the three-dimensional text data by the certain offset (Zhang, paragraph 11 teaches "determining a position for the text in the second view, wherein the position in the second view is offset relative to the position in the first view of the corresponding 3D image by an amount based, at least in part, on the parallax information"); the second view includes the second surface and vertex, and the offset would also lead to a moved/changed vertex; determining a line connecting a first vertex of a first surface of the three-dimensional text data with the moved second vertex (Shimizu, col. 12, lines 53-55 teach "notices a desired line segment out of the line segments obtained by connecting the respective vertexes together"); the respective vertexes include first/front-face and second/back-face vertexes, including the moved one from Zhang; and determining whether the line and the surface of the three-dimensional model intersect each other (Shimizu, col. 12, lines 57-58 teach "decides whether or not the noticed line segment intersects all other line segments"); the other line segments would form a surface of the 3D model since they connect respective vertexes. The same motivations used in claim 2 apply here in claim 7.
Regarding claim 8, the combination of Zhang, Sugano, Jaeger, Gabriel and Shimizu teaches wherein the determining of whether the three-dimensional text data and the three-dimensional model are combinable with each other further comprises determining that the three-dimensional text data and the three-dimensional model are combinable with each other (Sugano, paragraph 27 teaches "and a flag indicating that the 3D models of the respective time points do not interfere with each other,"); this shows whether the two 3D models (the text and the model/content from Zhang) are combinable based on the flag indicating whether the two 3D models interfere/intersect with each other; when a number of lines intersecting the surface of the three-dimensional model among lines connecting first vertexes of the first surface of the three-dimensional text data with second vertexes moved by the certain offset is greater than or equal to a reference number (Zhang, paragraph 11 teaches "determining a position for the text in the second view, wherein the position in the second view is offset relative to the position in the first view of the corresponding 3D image by an amount based, at least in part, on the parallax information" and Shimizu, col. 12, lines 57-58 teach "decides whether or not the noticed line segment intersects all other line segments"); the reference number can be 0, and when lines intersect in Shimizu, at least one line is intersecting, which is greater than 0 (the reference number); the second vertexes being moved by the offset is shown by Zhang here. The same motivations used in claim 2 apply here in claim 8.

Regarding claim 12, the apparatus claim 12 recites similar limitations as method claim 2, and thus is rejected under similar rationale.

Regarding claim 14, the apparatus claim 14 recites similar limitations as method claim 4, and thus is rejected under similar rationale.
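The line-counting test of claims 7 and 8 (connect each first-surface vertex to its offset second-surface vertex, then compare the number of lines that cross the model surface against a reference number) can be sketched as follows. For brevity this illustration stands in a horizontal plane z = 0 for the model surface; the function names and this simplification are assumptions, not anything taken from the record.

```python
def count_crossing_lines(first_vertices, second_vertices, offset, surface_z=0.0):
    """Count connector lines (first-surface vertex -> offset second-surface vertex)
    that cross the stand-in model surface, here the plane z = surface_z."""
    crossings = 0
    for (_, _, z0), (_, _, z1) in zip(first_vertices, second_vertices):
        z1 += offset[2]                 # only the z-component matters for a z-plane
        lo, hi = sorted((z0, z1))
        if lo <= surface_z <= hi:       # the connector line pierces the plane
            crossings += 1
    return crossings

def combinable_by_count(first_vertices, second_vertices, offset, reference=1):
    """Combinable iff at least `reference` connector lines cross the surface."""
    return count_crossing_lines(first_vertices, second_vertices, offset) >= reference
```

With two first-surface vertices at z = 1, second-surface vertices at z = 0.5, and an offset of (0, 0, -1), both connector lines cross z = 0, so a reference number of 1 or 2 reports combinable while a reference number of 3 does not.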
Regarding claim 17, the combination of Zhang, Sugano, Jaeger, Gabriel and Shimizu teaches wherein the at least one processor is further configured to: move second vertexes included in a second surface of the three-dimensional text data by the certain offset (Zhang, paragraph 11 teaches "determining a position for the text in the second view, wherein the position in the second view is offset relative to the position in the first view of the corresponding 3D image by an amount based, at least in part, on the parallax information"); the second view includes the second surface and vertex, and the offset would also lead to a moved/changed vertex; and determine that the three-dimensional text data and the three-dimensional model are combinable with each other (Sugano, paragraph 27 teaches "and a flag indicating that the 3D models of the respective time points do not interfere with each other,"); this shows whether the two 3D models (the text and the model/content from Zhang) are combinable based on the flag indicating whether the two 3D models interfere/intersect with each other; when a number of lines intersecting the surface of the three-dimensional model among lines connecting first vertexes of a first surface of the three-dimensional text data with the moved second vertexes is greater than or equal to a reference number (Zhang, paragraph 11 teaches "determining a position for the text in the second view, wherein the position in the second view is offset relative to the position in the first view of the corresponding 3D image by an amount based, at least in part, on the parallax information" and Shimizu, col. 12, lines 57-58 teach "decides whether or not the noticed line segment intersects all other line segments"); the reference number can be 0, and when lines intersect in Shimizu, at least one line is intersecting, which is greater than 0 (the reference number); the second vertexes being moved by the offset is shown by Zhang here.
The same motivations used in claim 2 apply here in claim 17.

Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Zhang, Sugano, Jaeger, Gabriel and Shimizu as applied to claims 4 and 14 above, and further in view of Baron Grutter DDS (3DBuilder - Adding Text or Logo to an Object) [https://www.youtube.com/watch?v=8dBiETNLXvU], hereinafter referenced as Grutter.

Regarding claim 5, the combination of Zhang, Sugano, Jaeger, Gabriel and Shimizu fails to teach wherein the determining of the position of the three-dimensional text data comprises determining the position of the three-dimensional text data at which a first surface and a second surface of the three-dimensional text data are located outside the three-dimensional model and the distance from the reference point to the surface of the three-dimensional model is minimum. However, Grutter teaches wherein the determining of the position of the three-dimensional text data comprises determining the position of the three-dimensional text data at which a first surface and a second surface of the three-dimensional text data are located outside the three-dimensional model and the distance from the reference point to the surface of the three-dimensional model is minimum (Grutter, 4:20-4:37 teaches projecting 3D text outwards of the 3D model [see fig. 1 of this action]); this shows all surfaces (inclusive of the first and second) of the 3D text outside the 3D model, and the distance from the reference point of Shimizu to the surface of the 3D model would be minimum for the text because the text is flush to the 3D surface and the rotation is performed on the pivot point of any of the axes, which includes the axis closest to the 3D text and model. Grutter is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of 3D text combined with a 3D model for 3D printing in the medical field.
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Zhang, Sugano, Jaeger, Gabriel and Shimizu with the specific text styling and location techniques of Grutter to make printing a bit easier (Grutter, 4:42-4:45). As mentioned, this would typically be done for engraving; however, the various configurations of the software would also allow this when projecting text outwards, meaning a better printed outcome when using various configurations within the software.

[Figure 1]

Regarding claim 6, the combination of Zhang, Sugano, Jaeger, Gabriel, Shimizu and Grutter teaches wherein the determining of the position of the three-dimensional text data comprises determining the position of the three-dimensional text data at which a first surface and a second surface of the three-dimensional text data are located inside the three-dimensional model and the distance from the reference point to the surface of the three-dimensional model is minimum (Grutter, 3:50-4:15 teaches engraving the 3D text into the 3D model [see fig. 2 of this action]); this shows all surfaces (inclusive of the first and second) of the 3D text inside the 3D model, and the distance from the reference point of Shimizu to the surface of the 3D model would be minimum for the text because the text is flush to the 3D surface and the rotation is performed on the pivot point of any of the axes, which includes the axis closest to the 3D text and model. The same motivations used in claim 5 apply here in claim 6.

[Figure 2]

Regarding claim 15, the apparatus claim 15 recites similar limitations as method claim 5, and thus is rejected under similar rationale.

Regarding claim 16, the apparatus claim 16 recites similar limitations as method claim 6, and thus is rejected under similar rationale.
[Figure 3]

[Figure 4]

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Cryptomight (Blender 3D - How to Create A Text Object and Join It to A Cube) [https://www.youtube.com/watch?v=bIfyoSfKU8A] 10:20-10:40 (see fig. 4 of this action) teaches discoloration of text due to the original geometry of the model being underneath while combining.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAUMAN U AHMAD whose telephone number is (703) 756-5306. The examiner can normally be reached Monday - Friday, 9:00am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611 /N.U.A./Examiner, Art Unit 2611

Prosecution Timeline

Jan 04, 2024
Application Filed
Aug 28, 2025
Non-Final Rejection — §103, §112
Dec 03, 2025
Response Filed
Jan 12, 2026
Final Rejection — §103, §112
Mar 17, 2026
Request for Continued Examination
Mar 24, 2026
Response after Non-Final Action
Apr 03, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592036
BLENDING ELEVATION DATA INTO A SEAMLESS HEIGHTFIELD
2y 5m to grant Granted Mar 31, 2026
Patent 12530807
METHODS AND SYSTEMS FOR COMPRESSING DIGITAL ELEVATION MODEL DATA
2y 5m to grant Granted Jan 20, 2026
Patent 12518472
DEFORMABLE NEURAL RADIANCE FIELDS
2y 5m to grant Granted Jan 06, 2026
Patent 12518482
VIRTUAL REPRESENTATIVE CONDITIONING SYSTEM
2y 5m to grant Granted Jan 06, 2026
Patent 12505601
CONTENT DISPLAY CONTROL DEVICE, CONTENT DISPLAY CONTROL METHOD, AND STORAGE MEDIUM STORING CONTENT DISPLAY CONTROL PROGRAM
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
78%
Grant Probability
98%
With Interview (+19.8%)
2y 8m
Median Time to Grant
High
PTA Risk
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
