Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. The United States Patent & Trademark Office acknowledges the application filed by the inventor/assignee. The application has been reviewed, and the Examiner's findings are set forth below.
Priority
3. Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. KR10-2023-0037178, filed on March 22, 2023.
Information Disclosure Statement
4. The information disclosure statement (IDS) submitted on 3/20/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 101
5. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
6. Claims 1, 4, 5, 8, 11, and 12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Based upon consideration of all of the relevant factors with respect to each claim as a whole, independent claims 1 and 8 are held to claim an abstract idea without reciting elements that amount to significantly more than the abstract idea, and are therefore rejected as ineligible subject matter under 35 U.S.C. 101. The Examiner analyzes Claim 1 below; a similar rationale applies to independent Claim 8. The rationale, under MPEP § 2106, for this finding is explained below:
7. The claimed invention (1) must be directed to one of the four statutory categories, and (2) must not be wholly directed to subject matter encompassing a judicially recognized exception, as defined below. The following two-step analysis is used to evaluate these criteria.
8. Step 1: Is the claim directed to one of the four patent-eligible subject matter categories: process, machine, manufacture, or composition of matter?
9. When examining the claim under 35 U.S.C. 101, the Examiner interprets the claims as related to a process, since Claim 1 is directed to a visual-linguistic feature fusion method.
10. Step 2A, Prong 1: Does the claim wholly embrace a judicially recognized exception, which includes laws of nature, physical phenomena, and abstract ideas, or is it a particular practical application of a judicial exception?
11. The Examiner interprets that the judicial exception applies, since the Claim 1 limitations of generating a linguistic feature, generating a visual feature, and generating a fused feature using an attention technique are directed to the abstract idea of mental processes. The claim recites generating a linguistic feature, a visual feature, and a fused feature: an algorithm that involves collecting information, analyzing it, and displaying results of the collection and analysis. Because these data analysis steps are recited at a high level of generality, such that they could practically be performed in the human mind, the claim is directed to an abstract idea.
12. Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
13. The Examiner interprets that the Claim 1 limitations do not provide additional elements, or a combination of additional elements, that integrate the judicial exception into a practical application, since the claims merely add insignificant extra-solution activity to the judicial exception. See MPEP § 2106.04(a). Because a judicial exception is not eligible subject matter, Bilski, 561 U.S. at 601, 95 USPQ2d at 1005-06 (quoting Chakrabarty, 447 U.S. at 309, 206 USPQ at 197 (1980)), if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application. See, e.g., RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (mental processes) to another abstract idea (encoding and decoding) does not render the claim non-abstract").
14. Step 2B: If the claim does not integrate the judicial exception into a practical application, the Examiner must determine whether the claim recites additional elements that amount to significantly more than the judicial exception.
15. The Examiner interprets that the Claims do not amount to significantly more, since the Claims simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984.
16. Furthermore, the generic computer components of the computer system, recited as performing generic computer functions that are well-understood, routine, and conventional activities, amount to no more than implementing the abstract idea with a computerized system.
Claims 4, 5, 8, 11, and 12 recite the same abstract idea and therefore are not drawn to eligible subject matter, as they are directed to the abstract idea without significantly more.
Therefore, the claims are rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
17. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
18. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
19. Claims 1-4 and 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Omote et al. (US Patent Pub No. 2019/0341025 A1, hereafter referred to as Omote) in view of Yuan et al. (US Patent No. 12518512 B2, hereafter referred to as Yuan).
20. Regarding Claim 1, Omote teaches a visual-linguistic feature fusion method (paragraphs 28 and 30, Omote teaches a system for two types of multimodal processing, feature fusion processing and decision fusion processing, that use inputs such as audio and video, video and text, or text and audio. The Examiner interprets feature fusion processing using inputs of video and text as a visual-linguistic feature fusion method.), comprising: generating a linguistic feature using a text encoder based on text (Elements 711 and 708 in Fig. 7, paragraphs 29 and 66, Omote teaches performing linguistic feature analysis on text, where linguistic feature analysis includes the generation of a linguistic feature, where the text encoding is preceded directly by a text input such as an image caption, and where the feature vectors for the text are generated.),
[Reproduced drawing: media_image1.png (greyscale)]
and generating a visual feature using a video encoder based on a video frame (Elements 711, 702, and 703 in Fig. 7, paragraphs 32, 38, and 70, Omote teaches video feature vectors being generated from input modes such as videos, where video feature information is extracted as part of encoding, and raw video frames are used for feature fusion.).
[Reproduced drawing: media_image2.png (greyscale)]
Omote does not teach generating a fused feature of the linguistic feature and the visual feature using an attention technique based on the linguistic feature and the visual feature.
Yuan is in the same field of art of feature extraction and fusion using a visual-linguistic model. Further, Yuan teaches generating a fused feature of the linguistic feature and the visual feature using an attention technique based on the linguistic feature and the visual feature (col 1 lines 48-53, col 8 lines 15-19, col 9 lines 31-36, and col 9 lines 45-56, Yuan teaches a method for the generation of image-text fused representation outputs that includes encoding language from the text descriptions of the image-text pairs, using attention-determining techniques such as a hierarchical vision transformer and convolutional embedding based on the encoded images and language, and cross-attention to determine a local attention. The Examiner interprets the use of the hierarchical vision transformer to determine a local attention as using an attention technique based on the linguistic feature and the visual feature.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Omote by adding the attention technique taught by Yuan, to make the invention focus on the most relevant information, enabling superior feature fusion performance; thus, one of ordinary skill in the art would be motivated to combine the references since they are both in the field of feature extraction and fusion using a visual-linguistic model (col 1 lines 48-53, col 8 lines 15-19, col 9 lines 31-36, and col 9 lines 45-56, Yuan).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
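For orientation only, the following is a minimal, hypothetical sketch of the kind of pipeline recited in Claim 1: a text encoder producing a linguistic feature, a video encoder producing a visual feature, and an attention step producing a fused feature. The encoder stand-ins, names, and dimensions are illustrative assumptions and do not reproduce the disclosures of Omote or Yuan.

```python
# Minimal illustrative sketch of a visual-linguistic fusion pipeline.
# All names and dimensions are hypothetical stand-ins, not the cited art.
import numpy as np

rng = np.random.default_rng(0)
D = 64  # assumed shared feature dimension

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def text_encoder(tokens):
    # Stand-in for a learned text encoder: one D-dim feature per token.
    return rng.standard_normal((len(tokens), D))

def video_encoder(frames):
    # Stand-in for a learned video encoder: one D-dim feature per frame.
    return rng.standard_normal((len(frames), D))

def attend(query_feat, kv_feat):
    # Generic attention step: weights from query/key inner products,
    # output as the weighted sum of values.
    weights = softmax(query_feat @ kv_feat.T / np.sqrt(D))
    return weights @ kv_feat

linguistic = text_encoder(["a", "dog", "runs"])  # linguistic feature, shape (3, D)
visual = video_encoder(range(8))                 # visual feature, shape (8, D)
fused = attend(visual, linguistic)               # fused feature, shape (8, D)
```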
21. Regarding Claim 2, Omote in view of Yuan teaches wherein the attention technique includes cross-attention (col 9 lines 31-37, Yuan teaches the fusion model learning the contextual representation with a transformer network based on cross-attention.).
22. Regarding Claim 3, Omote in view of Yuan teaches wherein the attention technique includes cross-attention and self-attention (col 8 line 15-col 9 line 5, Yuan teaches several attention mechanisms, such as a unified attention mechanism and three attention layers, including cross-attention and self-attention, being used in the transformer-based models for the image-text fused representation outputs obtained by the pre-training model. The Examiner interprets this as the attention technique including cross-attention and self-attention.).
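As a point of reference only, the distinction between the two attention types recited in Claims 2 and 3 can be sketched by continuing the hypothetical helpers above: self-attention attends within a single feature, while cross-attention draws its keys and values from the other modality's feature. This is an illustrative composition, not Yuan's architecture.

```python
# Illustrative only; reuses the hypothetical attend() helper defined above.
def self_attention(feat):
    # Self-attention: query, key, and value all come from the same feature.
    return attend(feat, feat)

def cross_then_self(visual, linguistic):
    # Cross-attention across modalities, then self-attention within the result.
    fused = attend(visual, linguistic)
    return self_attention(fused)
```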
23. Regarding Claim 4, Omote in view of Yuan teaches wherein the generating of the fused feature includes generating a new fused feature using the attention technique based on the fused feature (Fig. 1A, 1B, and 2, paragraph 34, Omote teaches several different feature vectors, generated on a per-sentence basis, that are used to create a single feature vector referred to as a fusion vector, which is then used for the multimodal neural network method of feature fusion using audio attention features. The Examiner interprets the generation of several different feature vectors to generate the fusion vector, in order to generate the multimodal fused feature, as the generating of a fused feature that includes generating a new fused feature using the attention technique.). An illustrative sketch follows the reproduced figures below.
[Reproduced drawing: media_image3.png (greyscale)]
[Reproduced drawing: media_image4.png (greyscale)]
[Reproduced drawing: media_image5.png (greyscale)]
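Continuing the same hypothetical sketch (this is not Omote's fusion-vector method), the "new fused feature" limitation of Claim 4 amounts to re-applying the attention step with the previously fused feature as input:

```python
# Illustrative only: re-apply the attention step to the fused feature itself
# to generate a new fused feature (cf. the Claim 4 limitation).
new_fused = attend(fused, linguistic)  # the fused feature attends to language again
```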
24. Regarding Claim 8, Omote teaches a visual-linguistic feature fusion system comprising:
a memory configured to store computer-readable instructions; and at least one processor configured to execute the instructions, wherein the at least one processor is configured to execute the instructions to generate a linguistic feature using a text encoder based on text (Elements 711 and 708 in Fig. 7, paragraphs 29 and 66, Omote teaches performing linguistic feature analysis on text, where linguistic feature analysis includes the generation of a linguistic feature, where the text encoding is preceded directly by a text input such as an image caption, and where the feature vectors for the text are generated.), and generate a visual feature using a video encoder based on a video frame (Elements 711, 702, and 703 in Fig. 7, paragraphs 32, 38, and 70, Omote teaches video feature vectors being generated from input modes such as videos, where video feature information is extracted as part of encoding, and raw video frames are used for feature fusion.).
Omote does not teach generate a fused feature of the linguistic feature and the visual feature using an attention technique based on the linguistic feature and the visual feature.
Yuan is in the same field of art of feature extraction and fusion using a visual-linguistic model. Further, Yuan teaches generate a fused feature of the linguistic feature and the visual feature using an attention technique based on the linguistic feature and the visual feature (col 1 lines 48-53, col 8 lines 15-19, col 9 lines 31-36, and col 9 lines 45-56, Yuan teaches a method for the generation of image-text fused representation outputs that includes encoding language from the text descriptions of the image-text pairs, using attention-determining techniques such as a hierarchical vision transformer and convolutional embedding based on the encoded images and language, and cross-attention to determine a local attention. The Examiner interprets the use of the hierarchical vision transformer to determine a local attention as using an attention technique based on the linguistic feature and the visual feature.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Omote by adding the attention technique taught by Yuan, to make the invention focus on the most relevant information, enabling superior feature fusion performance; thus, one of ordinary skill in the art would be motivated to combine the references since they are both in the field of feature extraction and fusion using a visual-linguistic model (col 1 lines 48-53, col 8 lines 15-19, col 9 lines 31-36, and col 9 lines 45-56, Yuan).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
25. Regarding Claim 9, Omote in view of Yuan teaches wherein the attention technique includes cross-attention (col 9 lines 31-37, Yuan teaches the fusion model learning the contextual representation with a transformer network based on cross-attention.).
26. Regarding Claim 10, Omote in view of Yuan teaches wherein the attention technique includes cross-attention and self-attention (col 8 line 15-col 9 line 5, Yuan teaches several attention mechanisms, such as a unified attention mechanism and three attention layers, including cross-attention and self-attention, being used in the transformer-based models for the image-text fused representation outputs obtained by the pre-training model. The Examiner interprets this as the attention technique including cross-attention and self-attention.).
27. Regarding Claim 11, Omote in view of Yuan teaches wherein the generating of the fused feature includes generating a new fused feature using the attention technique based on the fused feature (Fig. 1A, 1B, and 2, paragraph 34, Omote teaches several different feature vectors, generated on a per-sentence basis, that are used to create a single feature vector referred to as a fusion vector, which is then used for the multimodal neural network method of feature fusion using audio attention features. The Examiner interprets the generation of several different feature vectors to generate the fusion vector, in order to generate the multimodal fused feature, as the generating of a fused feature that includes generating a new fused feature using the attention technique.).
28. Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Omote et al. (US Patent Pub No. 2019/0341025 A1, hereafter referred to as Omote) in view of Yuan et al. (US Patent No. 12518512 B2, hereafter referred to as Yuan), and further in view of Mao et al. (US Patent Pub No. 2025/0104453 A1, hereafter referred to as Mao).
29. Regarding Claim 6, Omote in view of Yuan teaches the visual-linguistic feature fusion method of Claim 2 using an attention technique that includes cross-attention.
Omote in view of Yuan does not teach wherein, in the generating of the fused feature, the cross-attention is performed after setting one of the linguistic feature and the visual feature as a giving feature, setting the other feature as a receiving feature, setting the receiving feature as a query of the cross-attention, and setting the giving feature as a key and a value of the cross-attention.
Mao is in the same field of art of feature extraction and fusion using a visual-linguistic model. Further, Mao teaches wherein, in the generating of the fused feature, the cross-attention is performed after setting one of the linguistic feature and the visual feature as a giving feature (Fig. 5, paragraphs 9, 28, 39, 65, and 74, Mao teaches a visual-linguistic model that performs feature fusion using a transformer-based system to generate descriptions and fuse extracted features, ensuring generated text matches the target object in the form of cross-attention, where one modality acts as the source to provide context for the second modality. The Examiner interprets the source that provides context for the second modality as the giving feature.), setting the other feature as a receiving feature (Fig. 4, paragraphs 55 and 60, Mao teaches an N-layer decoder structure as the receiving end of visual information being processed through a 4-layer convolutional neural network. The Examiner interprets this as setting the other feature as a receiving feature.), setting the receiving feature as a query of the cross-attention (Fig. 4 and 5, paragraphs 8-9, 28, 38-39, and 55, Mao teaches an N-layer decoder structure for extracting and vectorizing labels and text features that serve as queries in the cross-attention to fuse extracted features to ensure generated text matches the target object.), and setting the giving feature as a key and a value of the cross-attention (Fig. 4 and 5, paragraphs 9, 39, 55, and 74, Mao teaches a four-layer convolutional neural network and an N-layer encoder, where the output of the encoder, known as the conformer, is converted into key and value matrices, where the matrices represent the giving information that the N-layer decoder will receive and use for the fusion process with cross-attention.).
[Reproduced drawing: media_image6.png (greyscale)]
[Reproduced drawing: media_image7.png (greyscale)]
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Omote by adding an attention technique that uses a metadata mapping with key object selection, as taught by Mao, to facilitate efficient selective integration and modulation of the visual and linguistic features in the fusion process; thus, one of ordinary skill in the art would be motivated to combine the references since they are in the field of feature extraction and fusion using a visual-linguistic model (fig. 4 and 5, paragraphs 8-9, 28, 38-39, 55, 60, and 74, Mao).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
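For orientation only, the giving/receiving role assignment recited in Claim 6 can be sketched by continuing the hypothetical helpers above. The projection matrices W_q, W_k, and W_v are illustrative assumptions; this is not Mao's conformer/decoder architecture.

```python
# Illustrative role assignment for cross-attention (cf. the Claim 6 limitation).
# Hypothetical learned projections; in practice these would be trained weights.
W_q = rng.standard_normal((D, D))
W_k = rng.standard_normal((D, D))
W_v = rng.standard_normal((D, D))

def cross_attention(receiving, giving):
    q = receiving @ W_q  # the receiving feature supplies the query
    k = giving @ W_k     # the giving feature supplies the key
    v = giving @ W_v     # ... and the value
    weights = softmax(q @ k.T / np.sqrt(D))
    return weights @ v

# e.g., visual feature set as the receiving feature, linguistic as the giving feature:
out = cross_attention(visual, linguistic)
```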
30. Regarding Claim 13, Omote in view of Yuan teaches the visual-linguistic feature fusion system of Claim 9 using an attention technique that includes cross-attention.
Omote in view of Yuan does not teach wherein, in the generating of the fused feature, the cross-attention is performed after setting one of the linguistic feature and the visual feature as a giving feature, setting the other feature as a receiving feature, setting the receiving feature as a query of the cross-attention, and setting the giving feature as a key and a value of the cross-attention.
Mao is in the same field of art of feature extraction and fusion using a visual-linguistic model. Further, Mao teaches wherein, in the generating of the fused feature, the cross-attention is performed after setting one of the linguistic feature and the visual feature as a giving feature (Fig. 5, paragraphs 9, 28, 39, 65, and 74, Mao teaches a visual-linguistic model that performs feature fusion using a transformer-based system to generate descriptions and fuse extracted features, ensuring generated text matches the target object in the form of cross-attention, where one modality acts as the source to provide context for the second modality. The Examiner interprets the source that provides context for the second modality as the giving feature.), setting the other feature as a receiving feature (Fig. 4, paragraphs 55 and 60, Mao teaches an N-layer decoder structure as the receiving end of visual information being processed through a 4-layer convolutional neural network. The Examiner interprets this as setting the other feature as a receiving feature.), setting the receiving feature as a query of the cross-attention (Fig. 4 and 5, paragraphs 8-9, 28, 38-39, and 55, Mao teaches an N-layer decoder structure for extracting and vectorizing labels and text features that serve as queries in the cross-attention to fuse extracted features to ensure generated text matches the target object.), and setting the giving feature as a key and a value of the cross-attention (Fig. 4 and 5, paragraphs 9, 39, 55, and 74, Mao teaches a four-layer convolutional neural network and an N-layer encoder, where the output of the encoder, known as the conformer, is converted into key and value matrices, where the matrices represent the giving information that the N-layer decoder will receive and use for the fusion process with cross-attention.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Omote by adding an attention technique that uses a metadata mapping with key object selection, as taught by Mao, to facilitate efficient selective integration and modulation of the visual and linguistic features in the fusion process; thus, one of ordinary skill in the art would be motivated to combine the references since they are in the field of feature extraction and fusion using a visual-linguistic model (fig. 4 and 5, paragraphs 8-9, 28, 38-39, 55, 60, and 74, Mao).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
31. Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Omote et al. (US Patent Pub No. 2019/0341025 A1, hereafter referred to as Omote) in view of Yuan et al. (US Patent No. 12518512 B2, hereafter referred to as Yuan), further in view of Mao et al. (US Patent Pub No. 2025/0104453 A1, hereafter referred to as Mao), and further in view of Chiu et al. (US Patent No. 12361732 B2, hereafter referred to as Chiu).
32. Regarding Claim 7, Omote in view of Yuan and further in view of Mao teaches the visual-linguistic feature fusion method of Claim 6 using the performance of cross-attention with a giving feature and a receiving feature, where the features are set as a key/value and a query of the cross-attention.
Omote in view of Yuan and further in view of Mao does not teach wherein, in the generating of the fused feature, the fused feature is generated by inputting an inner product of the query and the key to a Softmax function to calculate a weight, multiplying the calculated weight by the value, and then adding the value multiplied by the weight to the receiving feature.
Chiu is in the same field of art of feature extraction and fusion using a visual-linguistic model. Further, Chiu teaches wherein, in the generating of the fused feature, the fused feature is generated by inputting an inner product of the query (Fig. 6 and 7, col 9 lines 55-67, col 10 lines 22-30, col 11 lines 34-45, Chiu teaches fusion that combines formation models such as a visual perception model and an egocentric semantic map to provide an input for a proximal policy optimization learning agent, as well as an egocentric map attention and a map transformer to generate features using attention layers, where a query is multiplied by an inner product layer to determine weights.),
[Reproduced drawing: media_image8.png (greyscale)]
[Reproduced drawing: media_image9.png (greyscale)]
and the key to a Softmax function to calculate a weight and multiplying the calculated weight by the value and then adding the value multiplied by the weight to the receiving feature (Fig. 7, col 9 lines 55-67 and col 10 lines 22-30, Chiu teaches the attention mechanism of the map transformer using the scaled dot-product attention method, which includes inputting the query and key to a SoftMax or probabilistic function to calculate a weight, multiplying the calculated weight by a value, and adding that result to the receiving feature.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Omote by adding the attention techniques of Yuan, the key object selection of Mao, and the scaled dot-product attention mechanism taught by Chiu, to make the invention focus on specific parts of its input when producing the fused output; thus, one of ordinary skill in the art would be motivated to combine the references since they are in the field of feature extraction and fusion using a visual-linguistic model (fig. 6 and 7, col 9 lines 55-67, col 10 lines 22-30, col 11 lines 34-45, Chiu).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
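As a numerical restatement only, and continuing the same hypothetical sketch (this is not Chiu's map-transformer implementation), the Claim 7 arithmetic reduces to a Softmax over query-key inner products, a weighted sum of values, and a residual addition to the receiving feature:

```python
# Illustrative arithmetic for the Claim 7 limitation; reuses W_q/W_k/W_v and
# softmax() from the sketches above.
q = visual @ W_q                      # receiving feature -> query
k = linguistic @ W_k                  # giving feature -> key
v = linguistic @ W_v                  # giving feature -> value
weight = softmax(q @ k.T)             # inner product of query and key into Softmax
fused_feature = visual + weight @ v   # weighted value added to the receiving feature
```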
33. Regarding Claim 14, Omote in view of Yuan and further in view of Mao teaches the visual-linguistic feature fusion system of Claim 13 using the performance of cross-attention with a giving feature and a receiving feature, where the features are set as a key/value and a query of the cross-attention.
Omote in view of Yuan and further in view of Mao does not teach wherein, in the generating of the fused feature, the fused feature is generated by inputting an inner product of the query and the key to a Softmax function to calculate a weight, multiplying the calculated weight by the value, and then adding the value multiplied by the weight to the receiving feature.
Chiu is in the same field of art of feature extraction and fusion using a visual-linguistic model. Further, Chiu teaches wherein, in the generating of the fused feature, the fused feature is generated by inputting an inner product of the query (Fig. 6 and 7, col 9 lines 55-67, col 10 lines 22-30, col 11 lines 34-45, Chiu teaches fusion that combines formation models such as a visual perception model and an egocentric semantic map to provide an input for a proximal policy optimization learning agent, as well as an egocentric map attention and a map transformer to generate features using attention layers, where a query is multiplied by an inner product layer to determine weights.), and the key to a Softmax function to calculate a weight and multiplying the calculated weight by the value and then adding the value multiplied by the weight to the receiving feature (Fig. 7, col 9 lines 55-67 and col 10 lines 22-30, Chiu teaches the attention mechanism of the map transformer using the scaled dot-product attention method, which includes inputting the query and key to a SoftMax or probabilistic function to calculate a weight, multiplying the calculated weight by a value, and adding that result to the receiving feature.).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Omote by adding the attention techniques of Yuan, the key object selection of Mao, and the scaled dot-product attention mechanism taught by Chiu, to make the invention focus on specific parts of its input when producing the fused output; thus, one of ordinary skill in the art would be motivated to combine the references since they are in the field of feature extraction and fusion using a visual-linguistic model (fig. 6 and 7, col 9 lines 55-67, col 10 lines 22-30, col 11 lines 34-45, Chiu).
Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Conclusion
34. Any inquiry concerning this communication or earlier communications from the
examiner should be directed to LOUIS NWUHA whose telephone number is (571) 272-0219.
The examiner can normally be reached Monday to Friday 9 am to 5 pm.
35. Examiner interviews are available via telephone, in-person, and video conferencing using
a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is
encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
36. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s
supervisor, Oneal Mistry, can be reached at 313-446-4912. The fax phone number for the
organization where this application or proceeding is assigned is 571-273-8300.
37. Information regarding the status of published or unpublished applications may be
obtained from Patent Center. Unpublished application information in Patent Center is available
to registered users. To file and manage patent submissions in Patent Center, visit:
https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more
information about Patent Center and https://www.uspto.gov/patents/docx for information about
filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC)
at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service
Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LOUIS NWUHA/Examiner, Art Unit 2674
/ONEAL R MISTRY/Supervisory Patent Examiner, Art Unit 2674