DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to Applicant’s amendment/response filed on 28 January 2026, which has been entered and made of record.
Response to Arguments
Applicant’s arguments have been fully considered but they are moot in view of the new grounds of rejection presented in this Office Action.
Claim Rejections - 35 USC § 112
The previous rejections under 35 U.S.C. 112(a) as failing to comply with the written description requirement are withdrawn in view of the claim amendments.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5 and 17-25 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang et al. (“NeRF++: Analyzing and Improving Neural Radiance Fields”; hereinafter “Zhang”) in view of Jeong et al. (US 2022/0317453; hereinafter “Jeong”).
Regarding claim 1, Zhang discloses A method comprising: determining sample points on a camera ray (“sample points from the ray origin … the number of samples per camera ray,” pg. 6, sec. 5, para. 2) based on view-generation information comprising a scene-viewing position and a scene-viewing direction (“the radiance field is parameterized by both 3D position and viewing direction,” pg. 2, sec. 2, para. 1); determining, based on a virtual [sphere] (“unit sphere,” pg. 5, para. 1), location statuses of the respective sample points; determining whether the location status of a corresponding sample point among the sample points is foreground or background (“partition the scene space into two volumes, an inner unit sphere and an outer volume … The inner volume contains the foreground and all the cameras, while the outer volume contains the remainder of the environment,” pg. 5, para. 1; “the ray r = o + t·d is partitioned into two segments by the unit sphere … inside the sphere … outside the sphere,” pg. 5, para. 3); projecting and rendering a value of a pixel corresponding to the camera ray by, according to the location statuses, applying the view-generation information to a first neural network to generate a foreground image and to a second neural network to generate a background image, wherein the first neural network has been trained to generate foreground images and the second neural network has been trained to generate background images (“partition the scene space into two volumes, an inner unit sphere and an outer volume … The inner volume contains the foreground and all the cameras, while the outer volume contains the remainder of the environment. These two volumes are modelled with two separate NeRFs,” pg. 5, para. 1; Fig. 6 illustrates the claimed camera ray); and generating a rendered image based on blending the foreground image and the background image (“To render the color for a ray, they are raycast individually, followed by a final composition,” pg. 5, para. 2), wherein the determination of whether the location status of the corresponding sample point among the plurality of sample points is foreground or background is based on a position (“inside the sphere … outside the sphere,” pg. 5, para. 3; “partition the scene space into two volumes … These two volumes are modelled with two separate NeRFs,” pg. 5, paras. 1-2).
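For clarity of the record, the two-volume partitioning and final composition cited above may be illustrated by the following sketch (examiner-supplied for illustration only, not drawn from Zhang; identifiers such as fg_net and bg_net are hypothetical stand-ins for the two NeRFs):

    import numpy as np

    def location_status(p):
        # Zhang's partition: a sample point inside the unit sphere is foreground;
        # a point at radius >= 1 lies in the outer (background) volume.
        return "foreground" if np.linalg.norm(p) < 1.0 else "background"

    def render_ray(o, d, ts, fg_net, bg_net):
        # Sample points along the camera ray r(t) = o + t*d.
        samples = np.array([o + t * d for t in ts])
        radii = np.linalg.norm(samples, axis=1)
        inner, outer = samples[radii < 1.0], samples[radii >= 1.0]
        # Each segment is rendered by its own NeRF; the background color is
        # attenuated by the foreground transmittance during final composition.
        fg_color, fg_transmittance = fg_net(inner, d)
        bg_color, _ = bg_net(outer, d)
        return fg_color + fg_transmittance * bg_color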
Zhang does not disclose that the coordinate system is a cylindrical coordinate system, or that the location status determination is based on a direction of the sample point.
In the same art of computer graphics, Jeong teaches partitioning a 3D space using a cylindrical coordinate system (“set the focus space having a cylindrical shape,” para. 117) and determining a location status based on a direction of the sample point (“determine the first area on the transparent members 290-1 and 290-2 [i.e. the HMD glasses], in which the at least one first object is seen, and the second area on the transparent members 290-1 and 290-2, in which the at least one second object is seen, based on … the direction of a camera … and based on the distances between the HMD device and the at least one first object and the at least one second object,” para. 148).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the cylindrical and directional teachings of Jeong to Zhang. The motivation would have been to increase flexibility by allowing for interchangeable coordinate systems and allowing a user to more flexibly determine what constitutes foreground and background in a scene. Note that Jeong illustrates cylindrical and spherical boundaries in Figs. 5 and 8, respectively, meaning they are interchangeable. Additionally, substituting the cylindrical coordinate system and cylindrical spatial boundary of Jeong for the spherical coordinate system and spherical spatial boundary of Zhang would amount to simple substitution of one known element for another to obtain predictable results, because both coordinate systems and spatial boundaries were known in the prior art and one having ordinary skill in the art would have found it straightforward to use either of the two coordinate systems and spatial boundaries in Zhang to arrive at predictable results.
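As a further illustration of the proposed substitution (examiner-supplied sketch only; the radius and half_height parameters are hypothetical, and the cylinder axis is assumed vertical), the spherical inside/outside test of Zhang would simply be replaced with a cylindrical one, leaving the separate foreground/background NeRFs and the final composition unchanged:

    import numpy as np

    def inside_cylinder(p, radius=1.0, half_height=1.0):
        # Foreground if the sample point lies within the radial bound of the
        # cylinder (axis along z) and within its axial extent; background otherwise.
        x, y, z = p
        return np.hypot(x, y) < radius and abs(z) < half_height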
Regarding claim 2, the combination of Zhang and Jeong renders obvious determining a distance between the center and each sample point, and comparing the distances with the radius; and wherein the determination of whether the location status of a corresponding sample point among the sample points is foreground or background is determined based on results of the comparison (“a 3D point (x, y, z), r = √(x² + y² + z²) > 1 in the outer volume,” Zhang, pg. 5, para. 3).
Regarding claim 3, the combination of Zhang and Jeong renders obvious determining the location status of a corresponding sample point as foreground based on the corresponding distance being smaller than the radius; and determining the location status of a corresponding sample point as background based on the corresponding distance being greater than or equal to the radius (“a 3D point (x, y, z), r = √(x² + y² + z²) > 1 in the outer volume … the ray r = o + t·d is partitioned into two segments by the unit sphere … inside the sphere … outside the sphere,” Zhang, pg. 5, para. 3; note that using a “greater than” or “greater than or equal to” operator to determine an interior or exterior of a bounding volume is considered a trivial and obvious variation).
Regarding claim 4, the combination of Zhang and Jeong renders obvious applying the view-generation information to the first neural network in response to a location status of a corresponding sample point being determined to be foreground (“The inner volume contains the foreground and all the cameras … These two volumes are modelled with two separate NeRFs,” Zhang, pg. 5, para. 1).
Regarding claim 5, the combination of Zhang and Jeong renders obvious based on a location status of a corresponding sample point being determined to be background, changing the view-generation information to comprise the inverse of the radius; and applying the changed view-generation information to the second neural network (“For the outer NeRF, we apply an inverted sphere parametrization,” Zhang, pg. 5, para. 2; “the inverse radius,” Zhang, pg. 5, para. 3; see Fig. 6).
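For clarity of the record, Zhang's inverted sphere parametrization re-expresses each background point as a unit direction together with its inverse radius, which is how the changed view-generation information comprising the inverse of the radius reads on Zhang; a minimal examiner-supplied sketch (not code from the reference):

    import numpy as np

    def inverted_sphere_param(p):
        # A background point at radius r > 1 becomes (x/r, y/r, z/r, 1/r), so the
        # second NeRF receives the inverse of the radius (Zhang, pg. 5, paras. 2-3).
        r = np.linalg.norm(p)
        return np.append(p / r, 1.0 / r)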
Regarding claim 17, it is rejected using the same citations and rationales described in the rejection of claim 1, with the additional limitations of An apparatus comprising: processing hardware; storage hardware storing instructions configured to configure the processing hardware, which are taught by Zhang (“RTX 2080 Ti GPUs,” pg. 7, sec. 6, para. 1).
Regarding claim 18, the combination of Zhang and Jeong renders obvious rendering an image based on blending a result of applying the first neural network and the second neural network (“To render the color for a ray, they are raycast individually, followed by a final composition,” Zhang, pg. 5, para. 2).
Regarding claims 19 and 20, they are rejected using the same citations and rationales described in the rejections of claims 2 and 3, respectively.
Regarding claim 21, it is rejected using the same citations and rationales described in the rejections of claims 4 and 5.
Regarding claim 22, Zhang discloses A method performed by a computing device, the method comprising: projecting camera rays from a virtual camera pose to a scene to determine sample points of the respective camera rays (“sample points from the ray origin … the number of samples per camera ray,” pg. 6, sec. 5, para. 2); according to a virtual [sphere] defined by a center with a radius (“unit sphere,” pg. 5, para. 1) and the virtual camera pose, determining first of the sample points as corresponding to a foreground of the scene and based thereon generating respective first pixel values of the scene by applying the virtual camera pose to a first neural network; according to the virtual [sphere] and the virtual camera pose, determining second of the sample points as corresponding to a background of the scene and based thereon generating respective second pixel values of the scene by applying a transform of the virtual camera pose to a second neural network (“partition the scene space into two volumes, an inner unit sphere and an outer volume represented by an inverted sphere covering the complement of the inner volume,” pg. 5, para. 1; “These two volumes are modelled with two separate NeRFs,” pg. 5, para. 2; “the ray r = o + t·d is partitioned into two segments by the unit sphere … inside the sphere … outside the sphere,” pg. 5, para. 3); and rendering an image of the scene by blending the first pixel values and the second pixel values (“To render the color for a ray, they are raycast individually, followed by a final composition,” pg. 5, para. 2), wherein the determination of the first of the sample points as corresponding to the foreground of the scene and the determination of the second of the sample points as corresponding to the background of the scene are based on a position (“inside the sphere … outside the sphere,” pg. 5, para. 3; “partition the scene space into two volumes … These two volumes are modelled with two separate NeRFs,” pg. 5, paras. 1-2).
Zhang does not disclose using a virtual cylinder, or that the location status determination is based on a direction of the sample point.
In the same art of computer graphics, Jeong teaches partitioning a 3D space using a virtual cylinder (“set the focus space having a cylindrical shape,” para. 117) and determining a location status based on a direction of the sample point (“determine the first area on the transparent members 290-1 and 290-2 [i.e. the HMD glasses], in which the at least one first object is seen, and the second area on the transparent members 290-1 and 290-2, in which the at least one second object is seen, based on … the direction of a camera … and based on the distances between the HMD device and the at least one first object and the at least one second object,” para. 148).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the cylindrical and directional teachings of Jeong to Zhang. The motivation would have been to increase flexibility by allowing for interchangeable coordinate systems and allowing a user to more flexibly determine what constitutes foreground and background in a scene. Note that Jeong illustrates cylindrical and spherical boundaries in Figs. 5 and 8, respectively, meaning they are interchangeable. Additionally, substituting the cylindrical coordinate system and cylindrical spatial boundary of Jeong for the spherical coordinate system and spherical spatial boundary of Zhang would amount to simple substitution of one known element for another to obtain predictable results, because both coordinate systems and spatial boundaries were known in the prior art and one having ordinary skill in the art would have found it straightforward to use either of the two coordinate systems and spatial boundaries in Zhang to arrive at predictable results.
Regarding claim 23, the combination of Zhang and Jeong renders obvious wherein determining that a first sample point of the first sample points corresponds to the foreground corresponds to determining that the first sample point is inside the virtual cylinder, and wherein determining that a second sample point of the second sample points corresponds to the background corresponds to determining that the second sample point is outside the virtual cylinder (“the ray r = o + t·d is partitioned into two segments by the unit sphere … inside the sphere … outside the sphere,” Zhang, pg. 5, para. 3; “the focus space having a cylindrical shape,” Jeong, para. 117; see claim 22 for motivation to combine).
Regarding claim 24, the combination of Zhang and Jeong renders obvious wherein the transform is based on a radius of a virtual cylinder used to determine the first sample points as corresponding to the foreground and to determine the second sample points as corresponding to the background (“a 3D point (x, y, z), r = √(x² + y² + z²) > 1 in the outer volume,” Zhang, pg. 5, para. 3; “the focus space having a cylindrical shape,” Jeong, para. 117; see claim 22 for motivation to combine).
Regarding claim 25, the combination of Zhang and Jeong renders obvious wherein the first neural network and the second neural network are trained by minimizing a loss function (“minimizing the discrepancy between the ground truth observed images … and the predicted images,” Zhang, pg. 2, sec. 2, para. 2) based on a 360-degree image (“training/testing images … of hand-held 360° captures,” Zhang, pg. 6, sec. 5, para. 2).
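The cited training criterion amounts to minimizing a photometric discrepancy between the rendered images and the ground-truth 360° captures; a minimal examiner-supplied sketch (a mean-squared-error form is assumed for illustration):

    import numpy as np

    def photometric_loss(rendered, ground_truth):
        # Discrepancy between predicted pixel values and the observed training
        # images (Zhang, pg. 2, sec. 2, para. 2).
        return np.mean((rendered - ground_truth) ** 2)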
Claims 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Zhang and Jeong, and further in view of Rematas et al. (US 2023/0281913; hereinafter “Rematas”).
Regarding claim 6, the combination of Zhang and Jeong renders obvious wherein the first neural network comprises: a neural network trained to output a color (“NeRF is an implicit MLP-based model that maps 5D vectors—3D coordinates plus 2D viewing directions—to opacity and color values,” Zhang, pg. 1, sec. 1, para. 2; “the focus space having a cylindrical shape,” Jeong, para. 117; see claim 1 for motivation to combine).
Zhang refers to “opacity” rather than a volume density.
In the same art of using separate NeRFs for foreground and background samples, Rematas teaches a neural network trained to output a color and volume density (“the predicted view synthesis output can include one or more predicted color values … and one or more predicted opacity values (e.g., a probability value associated with the density of occupancy at given positions in a rendering),” para. 43).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Rematas to the combination of Zhang and Jeong. The motivation would have been “to produce more accurate outputs” (Rematas, para. 47).
Regarding claim 7, the combination of Zhang and Jeong renders obvious wherein the second neural network comprises: a neural network trained to output a color (“NeRF is an implicit MLP-based model that maps 5D vectors—3D coordinates plus 2D viewing directions—to opacity and color values,” Zhang, pg. 1, sec. 1, para. 2; “the focus space having a cylindrical shape,” Jeong, para. 117; see claim 1 for motivation to combine).
Zhang refers to “opacity” rather than a volume density.
In the same art of using separate NeRFs for foreground and background samples, Rematas teaches a neural network trained to output a color and volume density (“the predicted view synthesis output can include one or more predicted color values … and one or more predicted opacity values (e.g., a probability value associated with the density of occupancy at given positions in a rendering),” para. 43).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Rematas to the combination of Zhang and Jeong. The motivation would have been “to produce more accurate outputs” (Rematas, para. 47).
Regarding claim 8, the combination of Zhang and Jeong renders obvious projecting and rendering a color (“These two volumes are modelled with two separate NeRFs. To render the color for a ray, they are raycast individually, followed by a final composition,” Zhang, pg. 5, para. 2).
Zhang refers to “opacity” rather than a volume density.
In the same art of using separate NeRFs for foreground and background samples, Rematas teaches projecting and rendering a color and volume density of a pixel (“the predicted view synthesis output can include one or more predicted color values … and one or more predicted opacity values (e.g., a probability value associated with the density of occupancy at given positions in a rendering),” para. 43).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Rematas to the combination of Zhang and Jeong. The motivation would have been “to produce more accurate outputs” (Rematas, para. 47).
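For context, per-sample color and volume density are combined by standard NeRF volume rendering, in which density is converted to per-sample opacity and composited along the ray; the following examiner-supplied sketch assumes the standard quadrature rather than any particular implementation of Rematas:

    import numpy as np

    def volume_render(colors, densities, deltas):
        # densities -> per-sample alpha; each color is weighted by the
        # transmittance remaining after all earlier samples along the ray.
        alphas = 1.0 - np.exp(-densities * deltas)
        transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
        weights = alphas * transmittance
        return (weights[:, None] * colors).sum(axis=0)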
Claims 9-16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Jeong, and further in view of Matthews et al. (US 2024/0371081; hereinafter “Matthews”).
Regarding claim 9, Zhang discloses A method comprising: … an input image captured by a 360-degree camera (“training/testing images … of hand-held 360° captures,” pg. 6, sec. 5, para. 2); encoding location statuses of respective points sampled for each of camera rays formed (“samples per camera ray,” pg. 6, sec. 5, para. 2; “the ray r = o + t·d,” pg. 5, para. 3), based on a virtual [sphere] (“unit sphere,” pg. 5, para. 1); determining, based on the virtual [sphere] (“partition the scene space into two volumes, an inner unit sphere and an outer volume … The inner volume contains the foreground and all the cameras, while the outer volume contains the remainder of the environment,” pg. 5, para. 1; “the ray r = o + t·d is partitioned into two segments by the unit sphere … inside the sphere … outside the sphere,” pg. 5, para. 3); adding and rendering pixel values obtained by determining which of a first neural network or a second neural network to apply the encoded points to based on the location statuses, wherein the first neural network is configured to generate a foreground image and the second neural network is configured to generate a background image (“partition the scene space into two volumes, an inner unit sphere and an outer volume … The inner volume contains the foreground and all the cameras, while the outer volume contains the remainder of the environment. These two volumes are modelled with two separate NeRFs,” pg. 5, para. 1; “ray-casts both inner and outer volumes,” pg. 6, sec. 5, para. 2); and training the first neural network and the second neural network based on a pixel value of a camera ray, the pixel value obtained by blending a rendering result for each of the camera rays (“To render the color for a ray, they are raycast individually, followed by a final composition,” pg. 5, para. 2).
Zhang does not disclose that the coordinate system is a cylindrical coordinate system, or that the location status determination is based on a direction of the sample point.
In the same art of computer graphics, Jeong teaches partitioning a 3D space using a cylindrical coordinate system (“set the focus space having a cylindrical shape,” para. 117) and determining a location status based on a direction of the sample point (“determine the first area on the transparent members 290-1 and 290-2 [i.e. the HMD glasses], in which the at least one first object is seen, and the second area on the transparent members 290-1 and 290-2, in which the at least one second object is seen, based on … the direction of a camera … and based on the distances between the HMD device and the at least one first object and the at least one second object,” para. 148).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the cylindrical and directional teachings of Jeong to Zhang. The motivation would have been to increase flexibility by allowing for interchangeable coordinate systems and allowing a user to more flexibly determine what constitutes foreground and background in a scene. Note that Jeong illustrates cylindrical and spherical boundaries in Figs. 5 and 8, respectively, meaning they are interchangeable. Additionally, substituting the cylindrical coordinate system and cylindrical spatial boundary of Jeong for the spherical coordinate system and spherical spatial boundary of Zhang would amount to simple substitution of one known element for another to obtain predictable results, because both coordinate systems and spatial boundaries were known in the prior art and one having ordinary skill in the art would have found it straightforward to use either of the two coordinate systems and spatial boundaries in Zhang to arrive at predictable results.
The combination of Zhang and Jeong does not disclose generating pose information corresponding to at least one object included in each image frame of a plurality of image frames or encoding location statuses based on the pose information.
In the same art of using separate NeRFs for foreground and background samples, Matthews teaches generating pose information corresponding to at least one object included in each image frame of a plurality of image frames (“determine an orientation of the object depicted and/or for depth determination for specific features of the object,” para. 43; “The foreground may be the object of interest,” para. 51) and encoding location statuses based on the pose information (“The sample outputs from the landmarker and the segmenter networks for the two input identities can convey the location of the foreground object and the location of specific characterizing features,” para. 130; “the systems and methods can include foreground-background decomposition … a separate model can be used to handle the generation of background details,” para. 165).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Matthews to the combination of Zhang and Jeong. The motivation would have been to “reconstruct images directly with a more efficient and scalable stochastic sampling process” (Matthews, para. 154).
Regarding claim 10, the combination of Zhang, Jeong, and Matthews renders obvious estimating first pose information corresponding to at least a first object and a second object included in each of the image frames (“a plurality of different objects,” Matthews, para. 6; “determine an orientation of the object depicted and/or for depth determination for specific features of the object,” Matthews, para. 43; see claim 9 for motivation to combine); separating and encoding points corresponding to the first object as foreground based on a direction from which the 360-degree camera views the first object being inside a virtual cylinder corresponding to the virtual cylindrical coordinate system, and separating and encoding points corresponding to the second object as background based on being outside the virtual cylinder, based on the virtual cylindrical coordinate system (“partition the scene space into two volumes, an inner unit sphere and an outer volume,” Zhang, pg. 5, para. 1; “the ray r = o + t·d is partitioned into two segments by the unit sphere … inside the sphere … outside the sphere,” Zhang, pg. 5, para. 3; Fig. 7 of Zhang illustrates a foreground truck object and multiple background objects; “the focus space having a cylindrical shape,” Jeong, para. 117; see claim 9 for motivation to combine; note that Zhang, as modified by Jeong, uses a virtual cylindrical coordinate system based on a virtual cylinder instead of a sphere).
Regarding claim 11, the combination of Zhang, Jeong, and Matthews renders obvious based on the determining of the location statuses, encoding each location status of the respective points (“the ray r = o + t·d is partitioned into two segments by the unit sphere … inside the sphere … outside the sphere,” Zhang, pg. 5, para. 3; “a 3D point (x, y, z), r = √(x² + y² + z²) > 1 in the outer volume,” Zhang, pg. 5, para. 3).
Regarding claim 12, the combination of Zhang, Jeong, and Matthews renders obvious for each of the camera rays, determining distances between the center and the respectively corresponding points, and comparing the distances with the radius; and for each of the camera rays, based on results of the comparison, determining the location statuses of the respectively corresponding points as foreground or background (“a 3D point (x, y, z), r = √(x² + y² + z²) > 1 in the outer volume,” Zhang, pg. 5, para. 3).
Regarding claim 13, the combination of Zhang, Jeong, and Matthews renders obvious determining the location status of a corresponding point as foreground based on the corresponding distance being less than the radius; and determining the location status of a corresponding point as background based on the corresponding distance being greater than or equal to the radius (“a 3D point (x, y, z), r = √(x² + y² + z²) > 1 in the outer volume … the ray r = o + t·d is partitioned into two segments by the unit sphere … inside the sphere … outside the sphere,” Zhang, pg. 5, para. 3; note that using a “greater than” or “greater than or equal to” operator to determine an interior or exterior of a bounding volume is considered a trivial and obvious variation).
Regarding claim 14, the combination of Zhang, Jeong, and Matthews renders obvious wherein the rendering comprises at least one of: first rendering by the first neural network according to the location status of a corresponding point being foreground; or second rendering by the second neural network according to the location status of a corresponding point being background (“These two volumes are modelled with two separate NeRFs. To render the color for a ray, they are raycast individually, followed by a final composition,” Zhang, pg. 5, para. 2).
Regarding claim 15, the combination of Zhang, Jeong, and Matthews renders obvious training the first neural network and the second neural network based on a difference between a pixel value generated by blending a result of the first rendering and a result of the second rendering (“To render the color for a ray, they are raycast individually, followed by a final composition,” Zhang, pg. 5, para. 2; “minimizing the discrepancy between the ground truth observed images … and the predicted images,” Zhang, pg. 2, sec. 2, para. 2).
Regarding claim 16, the combination of Zhang, Jeong, and Matthews renders obvious A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the method of claim 9 (“Training NeRF++ on a node with 4 RTX 2080 Ti GPUs,” Zhang, pg. 7, sec. 6, para. 1).
Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Zhang and Jeong, and further in view of Yano (US 2021/0176450).
Regarding claim 26, the combination of Zhang and Jeong does not disclose wherein the virtual camera pose is positioned outside the virtual cylinder.
In the same art of novel view generation, Yano teaches wherein the virtual camera pose is positioned outside the virtual [boundary] (“the setting region to be a region in a three-dimensional space in which a three-dimensional model of an object is present,” para. 52; “the virtual camera can move within the movable range … and can also move to the outside of the movable range,” para. 59).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Yano to the virtual cylinder boundary of the combination of Zhang and Jeong. The motivation would have been “the degree of freedom with which the virtual camera can be navigated can be expanded” (Yano, para. 76).
Conclusion
Applicant’s amendment necessitated any new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan McCulley whose telephone number is (571)270-3754. The examiner can normally be reached Monday through Friday, 8:00am - 4:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached on (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN MCCULLEY/Primary Examiner, Art Unit 2611