DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Allowable Subject Matter
Claims 7, 9, 18, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant's arguments filed 2/6/2026 have been fully considered but they are not persuasive.
Regarding claim 1, the applicant argues that the cited art fails to teach or suggest: “generate, according to a determination that the distortion of the at least one portion of the mesh is at or above the threshold of parameterization, an output mesh including the one or more portions of the mesh to provide a 2D parameterization of the surface of the object,” as recited in claim 1.
The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons:
Sharma [0064] teaches that output of the first machine learning model is used as an input for the second machine learning model: “The first machine learning model 108 partitions the mesh of the 3D object 112 into one or more 2D patches that are depicted at 304 by determining an output depicted at 302 which is the predicted assignment probability for all the vertices to each of the K patches. The predicted assignment probability is the set of vertices of the mesh that belongs to a specific category (i.e., the probability of vertices being assigned to each of the K patches or subset). The subset of vertices (Vk) of the one or more bound patches 304 are inputted to the forward mapping network 110A of the second machine learning model 110.”
Sharma continues by teaching the second machine learning model that includes a forward and backward mapping network. Sharma [0064] states: “The forward mapping network 110A determines the flat surface of one or more 2D patches of the 3D object depicted at 310 which is determined by mapping each vertex (Vk) in the one or more 2D patches to two-dimensional (2D) points on the two-dimensional (2D) plane.” and “The backward mapping network 110C predicts the corresponding three-dimensional (3D) position of the 2D points (u) that matches with the set of vertices of the mesh or the predicted assignment probability for all the vertices to each of the K patches 302.” The forward and backward mapping networks allow the second machine learning model to determine the 2D texture coordinates that correlate back to the 3D object, which provides a 2D parameterization of the surface of the object.
Additionally, in FIG. 4, Sharma teaches the parameterization of a bounded surface, which is parameterized by the second machine learning model and then has the texture mapped onto it. FIG. 4 shows that the end result of the process is the textured geometry; therefore, there must have been a 2D parameterization to be able to complete the texture mapping, which the second machine learning model outputs, as stated by Sharma [0065]: “The exemplary diagram of surface parameterization depicts that if system 100 receives the bounded surface of the 3D object 404, (i) the bounded surface of the 3D object 404 is directly parameterized by the second machine learning model 11, (ii) the texture is mapped on the one or more parameterized bound patches as depicted at 114.” Therefore, Sharma teaches the generation of an output mesh that provides a 2D parameterization of the surface of the object.
Claim 11 is not allowable for reasons similar to those discussed above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4, 5, 10-12, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20240193328 A1).
As per claim 1, Sharma teaches the claimed:
One or more processors comprising: one or more circuits to:
allocate one or more portions of a mesh in a three-dimensional (3D) space to one or more
processing units associated with the one or more circuits, the mesh associated with a surface of an object in the 3D space
(Sharma [0018]: “The server receives a mesh of the 3D object through a user device. The mesh includes a set of vertex positions or vertices, a set of faces that connect the set of vertex positions, a set of normals, and a set of vertex normal. The server includes a memory that stores a set of instructions and a processor that executes the set of instructions.”
Sharma teaches the one or more processors in [0027]: “…configured with instructions executable by one or more processors to cause the one or more processors to perform a method of automatically determining one or more two-dimensional (2D) patches corresponding to a three-dimensional (3D) object using machine learning models for enabling an improved texture mapping process on the 3D object.”
In the passage above, Sharma teaches a server (the one or more circuits) that receives a mesh of the 3D object (a mesh in a 3D space), where the mesh includes a set of vertex positions or vertices, a set of faces that connect the set of vertex positions, a set of normals, and a set of vertex normals (a mesh associated with a surface of an object in the 3D space), and where the server also includes the one or more processors. The “allocating” recited in the claim would have been obvious to a person of ordinary skill in the art: since Sharma teaches that the server has both the mesh and the processor, it would have been obvious for the server to allocate the mesh to the processor to execute the set of instructions.);
transform, using the one or more of the processing units, one or more of the portions of
the mesh into corresponding second meshes in a two-dimensional (2D) space
(Sharma [0027]: “The method includes automatically parameterizing each vertex in the plurality of 2D patches to two-dimensional (2D) points on a two-dimensional (2D) plane to enable the texture mapping process on the 3D object, the second machine learning model is retrained by providing a parameterized vertex of the plurality of 2D patches in the 2D points on the 2D plane to improve the texture mapping process further on the 3D object.”
Sharma teaches parameterizing (transforming), using the processors, the plurality of 2D patches (one or more portions of the mesh) to 2D points on a 2D plane (second meshes in a 2D space). The claimed language recites the mesh (in a 3D space) being transformed into corresponding meshes in the 2D space; this is an example of parameterization, as described in the passage above.
The plurality of 2D patches here is considered the one or more portions of the mesh because paragraph [0027] states: “The method includes determining, using the first machine learning model, one or more 2D patches by partitioning the mesh”; therefore, the plurality of 2D patches make up the mesh, where the mesh is “a mesh of the 3D object,” as stated near the top of paragraph [0027]. The 2D points on a 2D plane here are considered the second meshes in a 2D space because parameterization of a 3D mesh results in a 2D mesh, which is made up of 2D points on a 2D plane.);
segment, by the one or more of the processing units and according to a determination that
a distortion of at least one of the one or more portions of the mesh is below a threshold of parameterization, the at least one portion of the mesh into at least two further portions of the mesh, wherein the further portions are among the one or more portions of the mesh
(Sharma [0027]: “The method includes determining, using the first machine learning model, one or more 2D patches by partitioning the mesh until a distortion of the mesh reaches a threshold distortion.” and Sharma [0046]: “In some embodiments, the server 106 partitions the mesh of the 3D object 110 into one or more 2D patches based on an acceptable amount of distortion in the mesh which means a degree of distortion that is considered, or suitable within the predefined limits for more efficient partitioning the mesh 112.”
Sharma teaches the processor that partitions (segments) the mesh until a distortion of the mesh reaches a threshold distortion (a distortion of at least one of the one or more portions of the mesh is below a threshold of parameterization). Partitioning, as mentioned in the passage above, is the process of dividing complex meshes into smaller and simpler sub-meshes; therefore, the claimed language reciting “the at least one portion of the mesh into at least two portions of the mesh, wherein the further portions are among the one or more portions of the mesh” would have been obvious, as the method of partitioning covers this part of the claim.); and
Sharma does not teach but suggests the claimed:
generate, according to a determination that the distortion of the at least one portion of the
mesh is at or above the threshold of parameterization, an output mesh including the one or more portions of the mesh to provide a 2D parameterization of the surface of the object.
(Sharma [0027]: “The method includes determining, using the first machine learning model, one or more 2D patches by partitioning the mesh until a distortion of the mesh reaches a threshold distortion.” and Sharma [0046]: “The distortion in the mesh means the degree to which the original shape or structure of the mesh has been altered or deviates from an ideal state … In some embodiments, the server 106 partitions the mesh of the 3D object 110 into one or more 2D patches based on an acceptable amount of distortion in the mesh which means a degree of distortion that is considered, or suitable within the predefined limits for more efficient partitioning the mesh 112.”
This implies that the mesh is no longer partitioned, and thus the output mesh is generated, when the amount of distortion caused by further partitioning is above a predefined limit (at or above the threshold of parameterization).
Also, please see Sharma in [0047] “The server 106 determines the 2D patches by partitioning the mesh until a distortion of the mesh reaches a threshold distortion using the first machine learning model.”
Sharma suggests the generation of an output mesh in [0064-0065], as the output of the first machine learning model is an input to the second machine learning model: “The first machine learning model 108 partitions the mesh of the 3D object 112 into one or more 2D patches that are depicted at 304 by determining an output depicted at 302 which is the predicted assignment probability for all the vertices to each of the K patches. The predicted assignment probability is the set of vertices of the mesh that belongs to a specific category (i.e., the probability of vertices being assigned to each of the K patches or subset). The subset of vertices (Vk) of the one or more bound patches 304 are inputted to the forward mapping network 110A of the second machine learning model 110.” The second machine learning model then includes a forward mapping network that “determines the flat surface of one or more 2D patches of the 3D object” and a backward mapping network that “predicts the corresponding three-dimensional (3D) position of the 2D points (u) that matches with the set of vertices of the mesh.” This allows the second machine learning model to determine the 2D texture coordinates that correlate back to the 3D object, which provides a 2D parameterization of the surface of the object.
Additionally, FIG. 4 shows a surface parameterization that is used to create a textured geometry; this indicates that the output of the second machine learning model provides a 2D parameterization of a surface of an object, which would be used to complete a texture mapping, thus creating the textured geometry. Sharma [0065] also states: “The exemplary diagram of surface parameterization depicts that if system 100 receives the bounded surface of the 3D object 404, (i) the bounded surface of the 3D object 404 is directly parameterized by the second machine learning model 11, (ii) the texture is mapped on the one or more parameterized bound patches as depicted at 114.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to generate an output mesh according to a determination that the distortion of the at least one portion of the mesh is at or above the threshold of parameterization, as suggested by Sharma, in order to generate an output mesh whose partitions do not distort the original shape of the mesh by too large an amount.
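For illustration only, the segment-below-threshold / generate-at-threshold behavior mapped above can be sketched in a few lines of Python. All names (`distortion`, `split`, the toy mesh model) are hypothetical placeholders of this sketch, not Sharma's or the applicant's implementation:

```python
def partition(mesh, threshold, distortion, split):
    """Recursively segment `mesh` while its distortion stays below
    `threshold`; once distortion is at or above the threshold, stop
    splitting and emit the portion as part of the output mesh."""
    if distortion(mesh) >= threshold:
        return [mesh]                     # generate: portion goes to output
    out = []
    for part in split(mesh):              # segment into two further portions
        out.extend(partition(part, threshold, distortion, split))
    return out

# Toy model: a "mesh" is a list of face ids; distortion rises as portions
# shrink, so splitting halts once the pieces are small enough.
toy_distortion = lambda m: 1.0 / len(m)
toy_split = lambda m: [m[:len(m) // 2], m[len(m) // 2:]]
patches = partition(list(range(8)), 0.5, toy_distortion, toy_split)
# patches -> [[0, 1], [2, 3], [4, 5], [6, 7]]
```

The sketch follows the claim language literally: a portion is segmented only while its distortion is below the threshold, and is emitted into the output mesh once distortion is at or above it.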
As per claim 2, Sharma further teaches:
The one or more processors of claim 1, wherein the one or more circuits are to:
segment, using the one or more of the processing units and according to a path along one or more
edges of the mesh, the mesh into the one or more portions of the mesh. (Sharma [0019]: “In some embodiments, the objective function of the patch extraction includes at least one of a cosine similarity constraint, or a geodesic distance constraint. The cosine similarity constraint is determined by calculating a cosine similarity between normal vectors of the historic faces within the historic meshes. The geodesic distance constraint is determined by calculating a shortest path between the historic vertices within the historic meshes based on the cosine similarity between the normal vectors of the historic faces.”
Sharma teaches patch extraction (segmenting) that includes a cosine similarity constraint or a geodesic distance constraint (a path along one or more edges of the mesh). Paragraph [0018], “The processor is configured to train a first machine learning model by providing a correlation between historic vertices with (a) historic faces, and (b) historic vertex normals of historic meshes based on an objective function of patch extraction as first training data.”, shows that it is the processor, or processing units, that extracts patches (i.e., segments the one or more portions of the mesh) using either of the mentioned constraints.)
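The cosine similarity constraint quoted above compares the normal vectors of faces. A minimal sketch (the editor's illustration; the vector values are hypothetical, not from Sharma):

```python
import math

def cosine_similarity(n1, n2):
    """Cosine of the angle between two face normal vectors:
    1.0 for parallel normals, 0.0 for perpendicular ones."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm1 = math.sqrt(sum(a * a for a in n1))
    norm2 = math.sqrt(sum(b * b for b in n2))
    return dot / (norm1 * norm2)

# Nearly parallel normals (a smooth region) score close to 1, so those
# faces stay in one patch; perpendicular normals (a sharp crease) score
# 0, marking a natural patch boundary along the crease edges.
flat = cosine_similarity((0.0, 0.0, 1.0), (0.0, 0.1, 1.0))
crease = cosine_similarity((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
```

In this toy, a low similarity between neighboring face normals suggests where the cut path along mesh edges should run.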
As per claim 4, Sharma further teaches:
4. The one or more processors of claim 1, wherein the one or more circuits are to:
identify, using the one or more of the processing units, a node of the mesh; and determine a path through the node. (Sharma [0024]: “In some embodiments, the processor is configured to partition the mesh by (i) receiving the set of vertex normals and the set of vertices of the mesh, (ii) predicting probabilities of the set of vertices associated with the one or more 2D patches by processing the set of vertex normals and the set of vertices, (iii) obtaining the probabilities of the set of faces by averaging neighboring probabilities of the set of vertices for each face, and (iv) partitioning the mesh by assigning the set of faces into the one or more 2D patches based on the probabilities of the set of vertices. The probabilities of the set of vertices are predicted by analyzing surface characteristics of the mesh at a multi-scale characterization.”
Sharma teaches receiving (identifying), using the processor (one or more processing units), the set of vertices of the mesh (a node of the mesh) and then partitioning the mesh based on the probabilities of the set of vertices. It would have been obvious that if the processor can partition the mesh according to the probabilities, it would also be able to determine the path through the set of vertices.)
As per claim 5, Sharma further teaches:
The one or more processors of claim 1, wherein a node of the mesh is located at a position in the mesh having a curvature
meeting a threshold associated with a curvature of the mesh in the 3D space.
(Sharma [0073]: “The exemplary diagram in FIG. 6 depicts one or more bound patches with the geodesic loss at 604. This means incorporating the geodesic loss (Lgeo) into the objective function of the patch extraction calculating for the shortest path between two points on a curved surface of the input mesh 112. By including the geodesic loss(Lgeo), the objective function of the patch extraction is modified to correct the subset of faces that are geodesically far apart but are assigned to the same patch. This correction is used to avoid a creation of unwanted patches with extreme curvature that improve an accuracy of the predicted assignment probability during the partition of the input mesh 112.”
Sharma teaches points (nodes) on a curved surface of an input mesh (a mesh having a curvature), where the objective function of the patch extraction includes the geodesic loss, which is used to avoid the creation of unwanted patches with extreme curvature. The objective function here is considered the threshold associated with a curvature because it is used to limit unwanted patches with extreme curvature.)
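One standard way to quantify "a curvature meeting a threshold" at a mesh node is the discrete angle defect. This measure is the editor's illustration of the concept only; it does not reproduce Sharma's geodesic loss:

```python
import math

def angle_defect(vertex, ring):
    """Discrete Gaussian curvature at `vertex`: 2*pi minus the sum of
    the angles at `vertex` in each triangle formed with consecutive
    one-ring neighbors.  Zero for a locally flat vertex; large for a
    sharp cone tip (extreme curvature)."""
    total = 0.0
    for a, b in zip(ring, ring[1:] + ring[:1]):
        va = tuple(ai - vi for ai, vi in zip(a, vertex))
        vb = tuple(bi - vi for bi, vi in zip(b, vertex))
        dot = sum(x * y for x, y in zip(va, vb))
        na = math.sqrt(sum(x * x for x in va))
        nb = math.sqrt(sum(x * x for x in vb))
        total += math.acos(dot / (na * nb))
    return 2 * math.pi - total

# A vertex surrounded by four coplanar neighbors is flat: its defect is
# ~0 and would fall below any extreme-curvature threshold.
defect = angle_defect((0.0, 0.0, 0.0),
                      [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)])
```

Comparing such a per-node curvature value against a threshold is one way a partitioner could avoid placing a whole high-curvature region inside a single patch.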
As per claims 11, 12, 14, 15, these claims are similar in scope to limitations recited in claims 1, 2, 4, 5, respectively, and thus are rejected under the same rationale.
As per claim 10, Sharma further teaches:
The one or more processors of claim 1, wherein the one or more processors are comprised in at least one of:
a system comprising one or more large language models (LLMs);
a system comprising one or more generative artificial intelligence (AI) models;
a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for 3D assets;
a system for performing deep learning operations;
a system implemented using an edge device;
a system implemented using a robot;
a system for performing conversational AI operations;
a system for generating synthetic data;
a system implemented using one or more language models;
a system implemented using one or more large language models (LLMs);
a system implemented using one or more vision language models (VLMs);
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center; or
a system implemented at least partially using cloud computing resources.
(Sharma [0018]: “In one aspect, a system for automatically determining a plurality of two-dimensional (2D) patches corresponding to a three-dimensional (3D) object using machine learning models for enabling an improved texture mapping process on the 3D object is provided.”
Sharma teaches that the processor can use and consist of machine learning models, which encompass large language models, vision language models, generative AI models, etc., which are included in the claimed language. Sharma also suggests that the system is implemented for performing simulation operations, e.g., in [0003]: “… Determining UV parameterization of arbitrary 3D surfaces lies at the core of computer graphics and geometry processing domain, with a wide range of applications such as 3D modeling, texture mapping, meshing, simulation, etc.” Sharma also generates synthetic data, e.g., the texture map data depicted at 114 in FIG. 4.)
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20240193328 A1) in view of Casas et al. (NPL Doc, “Flat-Sphere Perspective”) in further view of Wischmann (Patent No. US 5,761,332 A).
As per claim 3, Sharma teaches or suggests all of the limitations of claim 2 as set forth above.
Sharma alone does not explicitly teach the remaining claim limitations.
However, Sharma in combination with Casas and Wischmann teaches the claimed:
The one or more processors of claim 2, wherein the one or more circuits are to:
segment the mesh according to a determination that at least one of the mesh or the object is a
topological sphere that cannot be flattened.
(Casas teaches that it was known in the art that a topological sphere cannot be perfectly flattened; e.g., see Casas on page 3, in the 1st paragraph of section III, which recites: “… the sphere and the plane are topologically different surfaces, which means that a spherical surface cannot be mapped onto a planar surface with perfect uniqueness or continuity”. Thus, Casas teaches that a topological sphere cannot be perfectly flattened without introducing some distortion.
Wischmann teaches it was known in the art to approximate a sphere’s surface by segmenting the spherical object into a plurality of planar surfaces, e.g. please see Wischmann in figures 5a-5b where the sphere is subdivided (i.e. segmenting) to approximate its shape. Also, please see Wischmann in col 5, lines 37-44 “This is illustrated in the FIGS. 5a and 5b. In FIG. 5a a sphere is approximately described by 42 vertices and 80 triangles. The use of the method in accordance with the invention results in a spherical surface as shown in FIG. 5b, comprising 162 vertices and 320 sub-triangles. The various grey levels indicate how the sub-triangles spatially emerge from the flat triangles of the description in conformity with FIG. 5a”.
Wischmann in FIGS. 5a-5b also shows that as the approximate outer surface is subdivided (i.e., segmented) into smaller primitives, the approximation becomes closer to the original spherical surface shape.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to segment a mesh according to the determination that a 3D shape cannot be flattened, as taught by Casas and Wischmann, in the system of Sharma. Casas provides an advantage because it demonstrates that a spherical object may need to be approximated if it is to be flattened. Wischmann teaches how to perform this approximation by segmenting the spherical surface into a plurality of smaller planar shapes.
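As a check on Wischmann's figures, the cited counts (42 vertices / 80 triangles becoming 162 vertices / 320 sub-triangles) are exactly what one round of 1-to-4 midpoint subdivision of a closed triangle mesh predicts. The arithmetic below is the editor's illustration, not Wischmann's code:

```python
def subdivide_counts(v, f):
    """Vertex/face counts after one round of 1-to-4 midpoint subdivision
    of a closed triangle mesh: each edge gains one midpoint vertex and
    each triangle becomes four sub-triangles.  For a closed mesh,
    E = 3F / 2, since each triangle has 3 edges shared by 2 faces."""
    e = 3 * f // 2
    return v + e, 4 * f

# Wischmann FIG. 5a -> FIG. 5b: 42 vertices and 80 triangles become...
verts, faces = subdivide_counts(42, 80)
# verts, faces -> 162, 320, matching col. 5, lines 37-44
```

The agreement of both counts supports reading FIGS. 5a-5b as a uniform subdivision of every flat triangle into four sub-triangles.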
As per claim 13, this claim is similar in scope to limitations recited in claim 3, and thus is rejected under the same rationale.
Claims 6 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20240193328 A1) in view of Cao (US 20250045980 A1).
As per claim 6, Sharma teaches or suggests all of the limitations of claim 1 as set forth above.
Sharma does not explicitly teach the remaining claim limitations.
However, Sharma in combination with Cao teaches the claimed:
The one or more processors of claim 1, wherein the one or more circuits are to:
segment, using the one or more of the processing units and according to a determination that an
overlap of at least two of the one or more of the portions of the mesh is below a second threshold of parameterization, the at least one portion of the mesh into two further portions of the mesh, the further portions among the one or more portions of the mesh.
(Cao [0071]: “The distortion processor 1046 can identify a distortion metric associated with one or more distortion thresholds of a particular viewpoint.”
Cao [0072]: “For example, the distortion processor 1046 can identify one or more portions of a surface detected by a particular viewpoint as within or outside one or more of a minimum distance threshold and a maximum distance threshold. For example, the distortion processor 1046 can identify which portions of a detected surface are within thresholds for distortion by a diffusion model. For example, the distortion processor 1046 can identify portions of a surface on a pixel-by-pixel basis. The surface selector 1048 can select portions of detected surfaces to generate a surface projection corresponding to a surface of a 3D object. For example, the surface selector 1048 can identify portions of a detected surface from one or more viewpoints, and can generate a surface projection by combining portions of the detected surface that are determined by the distortion processor 1046 as satisfying one or more distance thresholds.”
Cao teaches the distortion processor (processing units) and the identification of distortion metrics (a determination of an overlap of the mesh) associated with one or more distortion thresholds (a second threshold), more specifically the maximum and minimum thresholds. Also see paragraph [0059], which discusses the second near and far thresholds. The surface selector in paragraph [0072] is considered to perform the segmenting of the mesh, as it selects portions of detected surfaces, which it can then combine to generate a new output mesh.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to segment a mesh according to the determination of the second threshold, as taught by Cao, with the system of Sharma in order to further test whether the mesh is below or within a second threshold, and then segment it according to the second threshold.
As per claims 16 and 17, these claims are similar in scope to limitations recited in claim 6, and thus are rejected under the same rationale.
Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20240193328 A1) in view of Pielawa (US 9460519 B2).
As per claim 19, Sharma teaches or suggests all of the limitations of claim 17 as set forth above.
Sharma does not explicitly teach the claim limitations.
However, Sharma in combination with Pielawa teaches the claimed:
19. The method of claim 17, wherein a difference between sizes of the two further portions of the
mesh meets or exceeds an equality threshold associated with a difference in size between the two further portions of the mesh.
(Pielawa [0020]: “There may be provided a computer that includes a processor that may be configured to receive or generate a mesh, wherein the mesh may be a three dimensional surface mesh that may include multiple faces and represents a three dimensional object. Find a first cut that segments the mesh to a first part and a second part. The finding of the first cut may include applying, by the processor, an iterative process that may include: calculating, by applying a cost function, multiple costs associated with multiple intermediate cuts that result from multiple different allocations of faces of a set of faces of the mesh to multiple intermediate first and second parts; and selecting the first cut in response to values of the multiple costs; wherein a cost of a certain intermediate cut that represents a certain allocation of faces between certain intermediate first and second parts may be responsive to (a) a length of the certain intermediate cut, and (b) a difference between areas of the first and second intermediate parts.”
Pielawa teaches the calculating costs associated with cutting, or segmenting, a mesh, where the cost represents a difference between areas of the first and second intermediate parts (a difference in size between the two further portions of the mesh.). The cost function here acts as a threshold since the cuts are made in response to the value it calculates between the areas of the first and second parts.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the cost function as taught by Pielawa with the system of Sharma in order to test whether or not the two further portions of the input mesh meet or exceeds the equality threshold by calculating the difference between areas of the two further portions of the mesh.
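Pielawa's cost, responsive to (a) the length of the cut and (b) the difference between the areas of the two parts, can be sketched as a weighted sum. The weights and candidate values below are hypothetical placeholders, not taken from Pielawa:

```python
def cut_cost(cut_length, area_a, area_b, w_len=1.0, w_bal=1.0):
    """Cost of a candidate cut: penalize long cuts and penalize an
    imbalance between the areas of the two resulting parts."""
    return w_len * cut_length + w_bal * abs(area_a - area_b)

# Selecting the first cut "in response to values of the multiple costs":
# the short, perfectly balanced cut wins over the shorter but lopsided one.
candidates = [
    (3.0, 5.0, 5.0),   # cost = 3.0 + |5 - 5| = 3.0
    (2.0, 8.0, 2.0),   # cost = 2.0 + |8 - 2| = 8.0
]
best = min(candidates, key=lambda c: cut_cost(*c))
# best -> (3.0, 5.0, 5.0)
```

In this toy, the area-difference term acts as the equality criterion the claim recites: a cut whose two parts differ greatly in size is disfavored.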
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma (US 20240193328 A1) in view of Zhao (US 20200211230 A1).
As per claim 21, the reasons and rationale for the rejection of claim 1 are incorporated herein. In particular, only additional features unique to claim 21 that were not present in claim 1 will be explicitly addressed here.
Sharma does not explicitly teach the remaining claim limitations.
However, Sharma in combination with Zhao teaches the claimed (limitations shown in strikethrough are taught as set forth for claim 1):
21.
(Zhao [0186]: “Each segment is then parameterized. If a segment fails to be parameterized, the segment is deemed a complex mesh and is broken down into simpler segments by repeating the steps 1504, 1506, and 1508 on the segment.”
Zhao teaches transforming (parameterizing) and breaking down segments into smaller segments (segmenting) by repeating steps when the segment fails to be parameterized, which corresponds to the determination that the distortion is at or below the threshold. Furthermore, FIG. 15 shows a diagram of the process that obtains the mesh, partitions it, and parameterizes it, repeating those steps on failure before moving on to generating an output mesh.
Please note: the examiner is only incorporating the process of repeating steps when the segment is at or below a certain threshold)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the repetition of steps as taught by Zhao with the system of Sharma in order to break down the segments into smaller parts so that parameterization can be completed at a higher speed.
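Zhao's repetition of steps 1504, 1506, and 1508 amounts to a parameterize-or-split retry loop. A sketch under assumed interfaces (`try_parameterize` returning None on failure, and `break_down`, are hypothetical stand-ins, not Zhao's implementation):

```python
def parameterize_all(segments, try_parameterize, break_down):
    """Parameterize every segment; a segment that fails is deemed a
    complex mesh, broken into simpler segments, and re-queued, per
    Zhao's repetition of the partition/parameterize steps."""
    done, queue = [], list(segments)
    while queue:
        seg = queue.pop()
        result = try_parameterize(seg)
        if result is not None:
            done.append(result)            # parameterization succeeded
        else:
            queue.extend(break_down(seg))  # too complex: split and retry
    return done

# Toy model: segments with more than two "faces" fail and are halved.
toy_try = lambda s: s if len(s) <= 2 else None
toy_break = lambda s: [s[:len(s) // 2], s[len(s) // 2:]]
flat_patches = parameterize_all([list(range(4))], toy_try, toy_break)
# sorted(flat_patches) -> [[0, 1], [2, 3]]
```

Once the queue empties, every segment has been successfully parameterized, at which point an output mesh could be assembled from the results.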
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA SUO whose telephone number is (571)272-8387. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA JUNGWOOK SUO/Examiner, Art Unit 2616
/DANIEL F HAJNIK/Supervisory Patent Examiner, Art Unit 2616