Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The IDS filed 08/02/2024 has been considered by the examiner. The annotated copy is included herewith.
Claim Rejections - 35 USC § 101
Claims 15 and 18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because the claims as presently drafted encompass both transitory and non-transitory embodiments. Specifically, the claims recite “computer readable storage medium” and do not specify that the medium is non-transitory, which is broad enough to include transitory forms such as carrier waves or signals. A transitory signal, while physical and real, does not possess concrete structure that would qualify as a device or part under the definition of a machine, is not a tangible article or commodity under the definition of a manufacture (even though it is man-made and physical in that it exists in the real world and has tangible causes and effects), and is not composed of matter such that it would qualify as a composition of matter. As such, a transitory, propagating signal does not fall within any statutory category.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 7 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Particularly, it recites “The method of claim 3, wherein re-projecting the first texture map onto the second texture map using the first texture coordinates and the second texture coordinates comprises…” However, claim 3 does not mention “re-projecting.” For purposes of examination, this claim is being interpreted as being dependent on Claim 5 instead, where “re-projecting” is mentioned.
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 4 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Specifically, the recited limitation “generating the second texture coordinates comprises generating the second texture coordinates using the decoded vertices positions” does not further limit Claim 1, upon which this claim depends. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-11, 13, 15-16, 18 and 23-27 are rejected under 35 U.S.C. 103 as being unpatentable over Cernigliaro et al. (US 20190114821 A1 - IDS REF), hereinafter referred to as Cernigliaro, in view of Ilola et al. (US 20200381022 A1), hereinafter referred to as Ilola.
Regarding Claim 1, Cernigliaro teaches
a method, comprising: generating, for at least one face of a mesh representative of a three dimensional (3D) object,
Cernigliaro P[0006] “is a 3D point on a face of the mesh” “each face… is a triple of indices”
the at least one face comprising vertex positions (Cernigliaro Fig. 20 Geometry contains vertex positions) and first texture coordinates (Cernigliaro Fig. 20 UV Map Creation creates first texture coordinates) associated to the vertex positions in a first texture map (Cernigliaro Fig. 20 Texture Map Creation creates first texture map),
Cernigliaro P[0118] “Once the atlas, or the atlases in case of dynamic content, are created, the 2D images are compressed by a video-capable encoder that is configured to produce a compressed data stream (e.g., compressed bit-stream) that, when received and decoded by a compatible decoder, enables the decoder to reconstruct a decoded version of the 2D atlas. FIG. 20 is a block diagram illustrating operations in the encoding and decoding of an atlas, according to some example embodiments.”
Cernigliaro P[0014] “As used herein, “atlas” refers both to a texture map of charts, and also to a texture map of charts in combination with the UV map that underlies it;”
Cernigliaro P[0010] “In UV mapping, for each 3D point (x, y, z) on the surface, a corresponding 2D point t (u, v) on the texture map is determined” “Then, the 2D point (u, v) corresponding to a point (x, y, z) on a face is obtained by calculating the barycentric coordinates of (x, y, z) with respect to the vertices of the face.”
Examiner Note: P[0010] is included for purposes of definition to show the atlas is associated with the mentioned elements. Fig. 20 describes the use of the atlas (texture map). These mentioned elements are also described in Fig. 20, with the Geometry containing vertex positions, UV Map Creation having the creation of first texture coordinates, and the first texture map being created at Texture Map Creation.
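Illustrative sketch (examiner's annotation, not part of the cited disclosure): the barycentric UV computation described in P[0010] — obtaining the 2D point (u, v) for a 3D point on a face by calculating its barycentric coordinates with respect to the face's vertices — can be expressed as follows. Function and variable names are the examiner's own, chosen for illustration only.

```python
import numpy as np

def barycentric_uv(p, verts_3d, verts_uv):
    """Interpolate a 2D texture coordinate (u, v) for a 3D point p lying on a
    triangular face, per P[0010]: the barycentric coordinates of p with
    respect to the face's vertices weight the per-vertex UVs."""
    a, b, c = (np.asarray(v, dtype=float) for v in verts_3d)
    p = np.asarray(p, dtype=float)
    # Solve for barycentric weights (w0, w1, w2) with w0 + w1 + w2 = 1.
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    w0 = 1.0 - w1 - w2
    # The (u, v) for p is the same weighted combination of the vertex UVs.
    uvs = np.asarray(verts_uv, dtype=float)
    return w0 * uvs[0] + w1 * uvs[1] + w2 * uvs[2]
```

For example, the centroid of a face maps to the centroid of that face's three UV coordinates, consistent with the barycentric relationship P[0010] describes.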
second texture coordinates (Cernigliaro Fig. 20 UV Map Creation (Re-Creation) is creating second texture coordinates) in a second texture map (Cernigliaro Fig. 20 Texture Mapping is generating the second texture map) from decoded vertices positions (Cernigliaro Fig. 20 Geometry Decoder is decoding vertex positions) of the at least one face and decoded topology (Cernigliaro Fig. 20 Geometry Decoder is decoding the topology) of the mesh
Cernigliaro Fig. 20
Examiner Note: Fig. 20 shows UV Map Creation (Re-Creation) (second texture coordinates) comes from Geometry Decoder (decoded vertices positions and decoded topology). UV Map Creation (Re-Creation) then goes into Texture Mapping (second texture map).
obtaining the second texture map (Cernigliaro Fig. 20 Texture Mapping is generating the second texture map) from the first texture map (Cernigliaro Fig. 20 Texture Map Creation creates first texture map) based on the first texture coordinates (Cernigliaro Fig. 20 UV Map Creation creates first texture coordinates) and on the second texture coordinates (Cernigliaro Fig. 20 UV Map Creation (Re-Creation) is creating second texture coordinates),
Cernigliaro Fig. 20
Examiner Note: Fig. 20 shows Texture Mapping (second texture map) coming from Video Decoder, which comes from Video Encoder, which comes from Texture Map Creation (first texture map). Texture Map Creation is based on UV Map Creation (first texture coordinates). Texture Mapping also comes from UV Map Creation (Re-Creation) (second texture coordinates).
encoding the second texture map (Cernigliaro Fig. 20 Texture Mapping is generating the second texture map)
Cernigliaro P[0054] “The 2D image is then processed by a video encoder, such as AVC/H.264 or HEVC/H.265, which generates a compressed binary data stream. The binary data stream can then be stored, communicated, and eventually decoded, such that a decompressed 2D image that contains the original texture information is recreated. At the decoder side (e.g., a client device), every projected area is reassigned to a set of 3D coordinates belonging to the volumetric video to recreate the 3D content. This process is then reproduced, repeating the evaluation of the dominant direction for each N×N 2D block and grouping the 2D blocks together. In this way, the systems and methods discussed herein obtain the 3D location of where to assign the color of each 2D pixel.”
Examiner Note: Fig. 20 shows the Texture Mapping (second texture map) being sent as output for Color For Geometry. As shown in P[0054] it is standard practice for this to be encoded to coloring.
However, Cernigliaro does not teach and encoding an indication of a method used for generating the second texture coordinates.
Ilola teaches and encoding an indication of a method used for generating the second texture coordinates.
Ilola P[0145] “The SchemeTypeBox provides an indication which type of processing is required in the player to process the video.”
Ilola P[0185] “Because of the limitations of the two approaches mentioned above, a third approach is developed. A 3D scene, represented as meshes, points, and/or voxels, can be projected onto one, or more, geometries. These geometries are “unfolded” onto 2D planes (two planes per geometry: one for texture, one for depth), which are then encoded using standard 2D video compression technologies.”
Examiner Note: P[0145] teaches encoding metadata (e.g., SchemeTypeBox and SchemeInformationBox) that indicates required processing at the decoder, thereby teaching encoding and decoding an indication of a processing method. P[0185] extends this teaching to also be applicable to meshes.
Cernigliaro and Ilola are analogous in the art of encoding and decoding video data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include metadata signaling the method used for UV map generation in the system of Cernigliaro, as taught by Ilola, in order to ensure that the decoder applies the correct texture coordinate generation process and to maintain compatibility between encoder and decoder implementations.
Regarding Claim 3, Cernigliaro, in view of Ilola, teaches the method of claim 1, further comprising encoding the topology of the mesh (Cernigliaro Fig. 20 Video Encoder is encoding the topology of the mesh) and the at least one face of the mesh, providing a coded mesh.
Cernigliaro Fig. 20 shows the encoding of the topology of the mesh when the Texture Map is sent to the Video Encoder.
Examiner Note: The face of the mesh is implicit here, since topology is required to have faces. The result is a coded mesh.
Regarding Claim 4, Cernigliaro, in view of Ilola, teaches the method of claim 1, wherein generating the second texture coordinates (Cernigliaro Fig. 20 UV Map Creation (Re-Creation) is creating second texture coordinates) comprises generating the second texture coordinates (Cernigliaro Fig. 20 UV Map Creation (Re-Creation) is creating second texture coordinates) using the decoded vertices positions (Cernigliaro Fig. 20 Geometry Decoder is decoding vertex positions).
Cernigliaro Fig. 20
Examiner Note: The Geometry Decoder decodes the vertices positions, and those are used for the UV Map Creation (Re-Creation), which corresponds to the second texture coordinates.
Regarding Claim 5, Cernigliaro, in view of Ilola, teaches the method of claim 1, wherein obtaining a second texture map (Cernigliaro Fig. 20 Texture Mapping is generating the second texture map) from the first texture map (Cernigliaro Fig. 20 Texture Map Creation creates first texture map) based on the first texture coordinates (Cernigliaro Fig. 20 UV Map Creation creates first texture coordinates) and on the second texture coordinates (Cernigliaro Fig. 20 UV Map Creation (Re-Creation) is creating second texture coordinates) comprises re-projecting the first texture map (Cernigliaro Fig. 20 Texture Map Creation creates first texture map) onto the second texture map (Cernigliaro Fig. 20 Texture Mapping is generating the second texture map) using the first texture coordinates (Cernigliaro Fig. 20 UV Map Creation creates first texture coordinates) and the second texture coordinates (Cernigliaro Fig. 20 UV Map Creation (Re-Creation) is creating second texture coordinates).
Cernigliaro P[0126] “The colors are then assigned to the corresponding area of the 3D surface by projecting them according the dominant directions calculated at the decoder side.”
Examiner Note: Additionally, shown in Fig. 20 is the UV Map Creation (first texture coordinates) being passed through the Texture Map Creation (first texture map) and being encoded, then decoded to be used for Texture Mapping (second texture map), as well as the UV Map Creation (Re-Creation) sending the second texture coordinates down to be mapped as well in Texture Mapping.
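Illustrative sketch (examiner's annotation, not part of the cited disclosure): the re-projection addressed by this claim — transferring color from the first texture map to the second using the two sets of texture coordinates — can be sketched at a per-vertex level as below. This is a deliberately simplified point-sampled illustration; a full implementation would rasterize entire faces in the second map and sample the first map through the shared barycentric parameterization. All names here are the examiner's own, for illustration only.

```python
import numpy as np

def reproject_vertex_texels(tex1, uv1, uv2, tex2_shape):
    """For each vertex, read the color at its first-map coordinate (uv1) and
    write it at its second-map coordinate (uv2). UVs are normalized to [0, 1]."""
    h1, w1 = tex1.shape[:2]
    h2, w2 = tex2_shape[:2]
    tex2 = np.zeros(tex2_shape, dtype=tex1.dtype)
    for (u1, v1), (u2, v2) in zip(uv1, uv2):
        # Nearest-texel lookup in the first texture map...
        x1 = min(int(u1 * (w1 - 1)), w1 - 1)
        y1 = min(int(v1 * (h1 - 1)), h1 - 1)
        # ...written at the corresponding location in the second texture map.
        x2 = min(int(u2 * (w2 - 1)), w2 - 1)
        y2 = min(int(v2 * (h2 - 1)), h2 - 1)
        tex2[y2, x2] = tex1[y1, x1]
    return tex2
```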
Regarding Claim 6, Cernigliaro, in view of Ilola, teaches the method of claim 1, further comprising encoding metadata relating to obtaining the second texture map (Cernigliaro Fig. 20 Texture Mapping is generating the second texture map).
Cernigliaro P[0130] “In some example embodiments of the systems and methods discussed herein, decoding proceeds according to an alternative decoding process that does not require the evaluation of the atlas mapping at the decoder side. Instead, the size and the positions, in both UV coordinates and 3D coordinates of the surface, of each sub-image are transmitted as supplemental information (e.g., side information or other metadata) together with the compressed colors.”
Regarding Claim 7, Cernigliaro, in view of Ilola, teaches the method of claim 3, wherein re-projecting the first texture map (Cernigliaro Fig. 20 Texture Map Creation creates first texture map) onto the second texture map (Cernigliaro Fig. 20 Texture Mapping is generating the second texture map) using the first texture coordinates (Cernigliaro Fig. 20 UV Map Creation creates first texture coordinates) and the second texture coordinates (Cernigliaro Fig. 20 UV Map Creation (Re-Creation) is creating second texture coordinates) comprises identifying for at least one decoded face of the coded mesh a corresponding face in the mesh before encoding.
In Cernigliaro Fig. 20, the Geometry Decoder (decoded topology) is reconstructed using the faces of the original mesh (output of Video Decoder), thereby inherently identifying the corresponding faces between the encoded and decoded mesh representations. Identifying corresponding faces is simply the result of decoding the topology, and is a necessary step in re-projection.
Regarding Claim 2, it is an apparatus claim (Cernigliaro P[0158] shows processors) that recites similar limitations to Claim 1, and is therefore rejected under similar rationale.
Regarding Claim 26, it is an apparatus claim (Cernigliaro P[0158] shows processors) that recites similar limitations to Claim 5, and is therefore rejected under similar rationale.
Regarding Claim 27, it is an apparatus claim (Cernigliaro P[0158] shows processors) that recites similar limitations to Claim 6, and is therefore rejected under similar rationale.
Regarding Claim 8, Cernigliaro teaches a method, comprising: decoding a topology of a mesh representative of a three dimensional (3D) object, and at least one face of the mesh, (Cernigliaro Fig. 20 Geometry Decoder decoding the Geometry Bit Stream) the at least one face comprising vertex positions,
Cernigliaro P[0010] “In UV mapping, for each 3D point (x, y, z) on the surface, a corresponding 2D point t (u, v) on the texture map is determined” “Then, the 2D point (u, v) corresponding to a point (x, y, z) on a face is obtained by calculating the barycentric coordinates of (x, y, z) with respect to the vertices of the face.”
and generating the texture coordinates for vertices of the at least one face (Cernigliaro Fig. 20 UV Map Creation (Re-Creation) generates texture coordinates) based on the decoded topology (Cernigliaro Fig. 20 Geometry Decoder decoding the Geometry Bit Stream) and decoded vertex positions (Cernigliaro Fig. 20 Geometry Decoder decoding the Geometry Bit Stream).
Cernigliaro Fig. 20
Examiner Note: UV Map Creation (Re-Creation) generates texture coordinates based on the decoded geometry information from the Geometry Decoder.
However, Cernigliaro does not teach decoding an indication of a method used for generating texture coordinates and generating the texture coordinates based on the indication.
Ilola teaches decoding an indication of a method used for generating texture coordinates
Ilola P[0145] “The SchemeTypeBox provides an indication which type of processing is required in the player to process the video.”
Ilola P[0185] “Because of the limitations of the two approaches mentioned above, a third approach is developed. A 3D scene, represented as meshes, points, and/or voxels, can be projected onto one, or more, geometries. These geometries are “unfolded” onto 2D planes (two planes per geometry: one for texture, one for depth), which are then encoded using standard 2D video compression technologies.”
Examiner Note: P[0145] teaches encoding metadata (e.g., SchemeTypeBox and SchemeInformationBox) that indicates required processing at the decoder, thereby teaching encoding and decoding an indication of a processing method. P[0185] extends this teaching to also be applicable to meshes.
And generating the texture coordinates based on the indication
Ilola P[0145] “Players not recognizing or not capable of processing the required actions are stopped from decoding or rendering the restricted video tracks.”
Examiner Note: This quote is explaining the purpose of the metadata with an indication of a method. This shows that the metadata (previously cited with P[0145] and P[0185]) is to be used as an indication of what type of method to be used when processing the data in future use.
Cernigliaro and Ilola are analogous in the art of encoding and decoding video data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include such metadata taught by Ilola in the system of Cernigliaro to indicate the method used for generating texture coordinates, thereby enabling the decoder to apply the correct UV generation process and ensure proper decoding and rendering.
Regarding Claim 10, Cernigliaro, in view of Ilola, teaches the method of claim 8, further comprising: decoding a texture map representative of texture data associated to the mesh, and rendering the 3D object using at least generated texture coordinates and the decoded texture map.
Cernigliaro Fig. 20
Examiner Note: Video Decoder decodes a texture map representative of texture data. Texture Mapping is applying the texture map to geometry using UV coordinates, producing colored 3D geometry, which is functionally rendering. Texture Mapping has both the UV Map Creation (Re-Creation), which is the generated texture coordinates, and the decoded Texture Map as inputs.
Regarding Claim 11, Cernigliaro, in view of Ilola, teaches the method of claim 8, wherein topology and vertices positions are decoded from a bitstream.
Cernigliaro Fig. 20 shows the Geometry Bit Stream being decoded in Geometry Decoder, giving vertices positions and topology.
Regarding Claim 13, Cernigliaro, in view of Ilola, teaches the method of claim 8, but Cernigliaro fails to teach further comprising decoding an indication indicating to obtain texture coordinates for vertices of the at least one face based on the decoded topology and decoded vertex positions.
Ilola teaches decoding an indication indicating to obtain texture coordinates for vertices of the at least one face based on the decoded topology and decoded vertex positions.
Ilola P[0145] “The SchemeTypeBox provides an indication which type of processing is required in the player to process the video.”
Ilola P[0185] “Because of the limitations of the two approaches mentioned above, a third approach is developed. A 3D scene, represented as meshes, points, and/or voxels, can be projected onto one, or more, geometries. These geometries are “unfolded” onto 2D planes (two planes per geometry: one for texture, one for depth), which are then encoded using standard 2D video compression technologies.”
Examiner Note: P[0145] teaches encoding metadata (e.g., SchemeTypeBox and SchemeInformationBox) that indicates required processing at the decoder, thereby teaching encoding and decoding an indication of a processing method. P[0185] extends this teaching to also be applicable to meshes.
Cernigliaro and Ilola are analogous in the art of encoding and decoding video data. It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to include such metadata taught by Ilola in the system of Cernigliaro to indicate the method used for generating texture coordinates, thereby enabling the decoder to apply the correct UV generation process and ensure proper decoding and rendering.
Regarding Claim 15, Cernigliaro, in view of Ilola, teaches a computer readable storage medium having stored thereon instructions for causing one or more processors to perform the method of claim 8.
(Cernigliaro P[0048] “FIG. 26 is a block diagram illustrating components of a machine (e.g., device), according to some example embodiments, able to read instructions from a machine-readable medium and perform any one or more of the methodologies discussed herein.”)
Regarding Claim 9, it is an apparatus claim (Cernigliaro P[0158] shows processors) that recites similar limitations to Claim 8, and is therefore rejected under similar rationale.
Regarding Claim 16, Cernigliaro, in view of Ilola, teaches the apparatus according to claim 9 comprising at least one of (i) an antenna configured to receive a signal, the signal including data representative of at least one part of a 3D object, (ii) a band limiter configured to limit the received signal to a band of frequencies that includes the data representative of the at least one part of the 3D object, or (iii) a display configured to display the at least one part of the 3D object.
(Cernigliaro P[0155] mentions “the machine may further include a graphics display… capable of displaying graphics or video.” This meets the limitation of a display configured to display the 3D object.)
Regarding Claim 23, it is an apparatus claim (Cernigliaro P[0158] shows processors) that recites similar limitations to Claim 10, and is therefore rejected under similar rationale.
Regarding Claim 24, it is an apparatus claim (Cernigliaro P[0158] shows processors) that recites similar limitations to Claim 11, and is therefore rejected under similar rationale.
Regarding Claim 25, it is an apparatus claim (Cernigliaro P[0158] shows processors) that recites similar limitations to Claim 13, and is therefore rejected under similar rationale.
Regarding Claim 18, Cernigliaro teaches a computer readable storage medium
Cernigliaro P[0152] “FIG. 26 is a block diagram illustrating components of a machine 2600, according to some example embodiments, able to read instructions 2624 from a machine-readable medium 2622 (e.g., a non-transitory machine-readable medium, a machine-readable storage medium, a computer-readable storage medium, or any suitable combination thereof) and perform any one or more of the methodologies discussed herein, in whole or in part.”
having stored thereon a bitstream comprising: coded video data representative of a topology of a mesh, of at least one face of the mesh,
Cernigliaro Fig. 20
Examiner Note: Video Encoder encodes the texture map from Texture Map Creation, giving coded video data. This encoded texture map corresponds to the faces of the mesh via UV Map Creation, which comes from Geometry Encoder (which gives the topology). Therefore, the coded video data is representative of the topology.
the at least one face comprising vertex positions,
Cernigliaro P[0010] “In UV mapping, for each 3D point (x, y, z) on the surface, a corresponding 2D point t (u, v) on the texture map is determined” “Then, the 2D point (u, v) corresponding to a point (x, y, z) on a face is obtained by calculating the barycentric coordinates of (x, y, z) with respect to the vertices of the face.”
However, Cernigliaro does not teach coded data representative of an indication indicating a decoder to generate texture coordinates for vertices of the at least one face based on decoded topology and decoded vertex positions, and metadata relating to a method used for generating the texture coordinates.
Ilola teaches coded data representative of an indication indicating a decoder to generate texture coordinates for vertices of the at least one face based on decoded topology and decoded vertex positions, and metadata relating to a method used for generating the texture coordinates.
Ilola P[0145] “The SchemeTypeBox provides an indication which type of processing is required in the player to process the video.”
Ilola P[0185] “Because of the limitations of the two approaches mentioned above, a third approach is developed. A 3D scene, represented as meshes, points, and/or voxels, can be projected onto one, or more, geometries. These geometries are “unfolded” onto 2D planes (two planes per geometry: one for texture, one for depth), which are then encoded using standard 2D video compression technologies.”
Examiner Note: P[0145] teaches encoding metadata (e.g., SchemeTypeBox and SchemeInformationBox), which is coded data that indicates required processing at the decoder, thereby teaching encoding and decoding an indication of a processing method. P[0185] extends this teaching to also be applicable to meshes.
Cernigliaro and Ilola are analogous in the art of encoding and decoding video data. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Cernigliaro to include coded data that indicates to the decoder how to generate texture coordinates, along with metadata identifying the method used, as taught by Ilola, in order to ensure consistent and accurate reconstruction of texture mapping at the decoder.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID W SOON whose telephone number is (571)272-8113. The examiner can normally be reached M-F 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alicia Harrington can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID W SOON/ Examiner, Art Unit 2615
/ALICIA M HARRINGTON/ Supervisory Patent Examiner, Art Unit 2615