DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see Remarks pages 6-7, filed 9/11/2025, with respect to the rejection(s) of claim(s) 1-2 and 5-17 under 35 U.S.C. 112(a) have been fully considered and are persuasive. The rejection(s) of claim(s) 1-2 and 5-17 have been withdrawn.
Applicant's arguments, see Remarks pages 7-11, filed 9/11/2025, with respect to the rejection(s) of claim(s) 1 and 10-11 under 35 U.S.C. 103 have been fully considered but they are not persuasive.
On page 10 of Remarks, Applicant argues:
[Applicant's argument is reproduced as an image (media_image1.png) on page 10 of the Remarks.]
Examiner respectfully disagrees.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
As set forth in previous Office Actions, and as further expressed below, Paragraphs 0105-0106 of Oh disclose the recognition of finger language, from an extracted posture of a speaker, in units of syllables. Paragraph 0032 of Lim discloses the use of artificial intelligence to extract sign language movements performed by a user and translate them into text. Further, paragraphs 0008-0009 of Xu disclose the training of a sign language recognition model using sign language data and skeleton data. Thus, Oh in view of Lim and Xu discloses “recognize a finger language of the speaker from the extracted posture information of the speaker in units of syllables, by using an AI model which receives, as an input, the posture information of the speaker, and to output a text, wherein the one or more processors are configured to train the AI model using training data.”
In addition, Paragraph 0032 of Li discloses “Given a real video-text pair, first replace, insert or delete a word in the real text, and repeat the operation multiple times; the new word inserted or replaced is randomly selected from the vocabulary in the training set; at the same time, perform corresponding operations on the real video according to the alignment label obtained in the iterative optimization stage, and the alignment label is the alignment label between the real video and the corresponding sign language word sequence obtained by decoding; then perform k editing operations, each operation is randomly selected from replacement, insertion and deletion, and at the same time, k is randomly selected from [1, K], K is the upper limit of the number of editing operations; through the above method, a number of pseudo video-text pairs that meet the requirements is generated, and the specific number can be set according to actual conditions.”
In Li, training data is thus generated by augmenting the words present in a sign language sequence and aligning the words within the augmented sequence to their corresponding video segments.
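For illustration only, the editing procedure quoted above may be sketched as follows (a minimal, hypothetical Python sketch; the function and the lookup_segment helper are assumptions introduced for illustration and are not drawn from Li):

    import random

    def make_pseudo_pair(words, segments, vocab, K, lookup_segment):
        # Minimal sketch of the edit-based pseudo video-text pair generation quoted from Li.
        # words          : the real text, as a list of sign language words (or syllables)
        # segments       : video segments aligned one-to-one with the words (alignment labels)
        # vocab          : training-set vocabulary from which replacements/insertions are drawn
        # K              : upper limit on the number of editing operations
        # lookup_segment : hypothetical helper returning a video segment matched to a given word
        words, segments = list(words), list(segments)
        k = random.randint(1, K)                      # k is drawn at random from [1, K]
        for _ in range(k):                            # perform k editing operations
            op = random.choice(["replace", "insert", "delete"])
            i = random.randrange(len(words))
            if op == "replace":
                w = random.choice(vocab)
                words[i], segments[i] = w, lookup_segment(w)
            elif op == "insert":
                w = random.choice(vocab)
                words.insert(i, w)
                segments.insert(i, lookup_segment(w))
            elif len(words) > 1:                      # delete, keeping the pair non-empty
                del words[i], segments[i]
        return words, segments                        # one pseudo video-text pair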
Therefore, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to implement the algorithms taught by Li for generating training data based on the augmentation of words in a sign language video and the alignment of the augmented words to their corresponding video segments, by augmenting syllables in sign language words, for the training of the finger language recognition system disclosed by Oh in view of Lim and Xu.
In addition, Page 1, Column 2, Paragraph 3 to Page 3, Column 1, Paragraph 5 of Wei discloses “For a given sentence in the training set, we randomly choose and perform one of the following operations…Random Swap (RS): Randomly choose two words in the sentence and swap their positions. Do this n times…Furthermore, for each original sentence, we generate n_aug augmented sentences.” Wherein, for the generation of training data, training sentences are augmented by swapping the positions of words in the sentences.
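For illustration only, the Random Swap operation quoted above may be sketched as follows (a minimal, hypothetical Python sketch; the function and variable names are assumptions and are not drawn from Wei). In the combination discussed below, the same operation would be applied to the syllables of a finger language word, with the matched posture information reordered in the same manner:

    import random

    def random_swap(tokens, n):
        # Minimal sketch of the Random Swap (RS) operation quoted from Wei:
        # randomly choose two tokens and swap their positions, repeated n times.
        tokens = list(tokens)
        if len(tokens) < 2:
            return tokens
        for _ in range(n):
            i, j = random.sample(range(len(tokens)), 2)
            tokens[i], tokens[j] = tokens[j], tokens[i]
        return tokens

    # For each original sequence, n_aug augmented sequences would be generated, e.g.:
    # augmented = [random_swap(syllables, n) for _ in range(n_aug)]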
Therefore, it would have been obvious to one of ordinary skill in the art, prior to the effective filing date of the claimed invention, to implement the algorithms for generating training data based on the random swapping of words in a sentence taught by Wei for the generation of training data by randomly swapping sign language syllables in a word and aligning the syllables to their corresponding posture information disclosed by Oh in view of Lim, Xu, and Li.
In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Page 5, Column 2, Paragraph 3 of Wei discloses “We have shown that simple data augmentation operations can boost performance on text classification tasks. Although improvement is at times marginal, EDA substantially boosts performance and reduces overfitting when training on smaller datasets” wherein the inclusion of training data augmentation techniques, including Random Swap, serves to improve model performance.
Thus, since prior art Oh in view of Lim and Xu discloses, in paragraphs 0105-0106 of Oh, the recognition of finger language in units of syllables, and paragraph 0032 of Li discloses the generation of training data by augmenting sign language text and aligning the augmented text to its corresponding video, the combination of Oh in view of Lim and Xu with Li, as discussed in the arguments above and in the rejection of claim 1 under 35 U.S.C. 103 below, discloses the generation of training data by augmenting sign language syllables and aligning the augmented syllables to their corresponding video segments. In addition, Page 1, Column 2, Paragraph 3 to Page 3, Column 1, Paragraph 5 of Wei discloses the generation of training data by randomly swapping words in a sentence. Thus, the combination of Oh in view of Lim, Xu, and Li with Wei, as discussed in the arguments above and in the rejection of claim 1 under 35 U.S.C. 103 below, discloses the generation of training data by changing the order of syllables in a finger language word and aligning the changed syllables to their corresponding video segments.
Therefore, Oh in view of Lim, Xu, Li, and Wei discloses “augment the training data by processing data of the finger language training videos into the training data for training the AI model, using the extracted posture information, including changing an order of syllables forming a finger language word and generating virtual training data by combining matched posture information which is matched to a result of the changing the order of syllables.”
As per claim(s) 10-11, arguments made in rejecting claim(s) 1 are analogous.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 6, 8-12, 14, and 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oh et al. (Foreign Publication Number KR20150045335A) hereinafter referenced as Oh, in view of Lim (Foreign Publication Number KR20210018028A) hereinafter referenced as Lim, Xu et al. (Foreign Publication Number CN113221663A) hereinafter referenced as Xu, Li et al. (Foreign Publication Number: CN112149603A), hereinafter referenced as Li, and Wei et al. (Wei, J et al. “EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks”), hereinafter referenced as Wei.
Regarding claim 1, Oh discloses: A finger language video recognition system comprising: one or more processors configured to: extract posture information of a speaker from a finger language video, the extracted posture information including a respective position of feature points on a respective boundary of face, hands, arms, and body of the speaker (Oh: 0067: “the sign language recognition apparatus 100 can detect and point the neck, waist, shoulder, elbow, wrist joint, etc. of the receiver from the obtained sign language video data.”; Paragraph 69: “the sign language recognition apparatus 100 can detect the hand region and the face region of the listener by detecting the face region in the detected face and hand regions (Face Detection Method).”;
0131: “the character display unit 120 recognizes the face expression of the receiver and can change the sentence type according to the facial expression. For example, if a recognized facial expression raises an eyebrow and expands the head forward, it can be changed to a question form.”; Wherein the shoulder constitutes the boundary of the body and the arms, the wrist constitutes the boundary of the hand and arm, and the eyebrow/head position detection constitutes detecting a boundary of the face);
and recognize a finger language of the speaker from the extracted posture information of the speaker in units of syllables (Oh: 0106: “9 (a) shows an example in which the letter "kwaku" is composed of fingerprint characters.”; 0105: “FIG. 9 shows a method in which a character recognition operation according to an embodiment of the present invention is recognized as a syllable.”),
and to output a text (Oh: Paragraph 40: “The sign language recognition apparatus 100 can display the characters determined in step S230 on the screen as text.”).
Oh does not disclose expressly: by using an AI model which receives, as an input, the posture information of the speaker.
Lim discloses: a finger language video recognition system (Lim: Figure 1), wherein the one or more processors are configured to recognize the finger language of the speaker from posture information, by using an AI model which receives, as input, posture information of a speaker (Lim: 0032: “…the sign language learning unit is configured to include a skeleton extract module that stores the sign language movements in real time, an artificial intelligence module that recognizes the sign language movements and classifies the sign language movements input in real time, and an interface module that transfers the sign language movements transferred from the skeleton extract module to the artificial intelligence module”), recognizes a finger language of the speaker in units of letters, and outputs a text (Lim: 0094: “…the sign language movement classified in the artificial intelligence module is translated into letters and transmitted to the interface module, and the interface module outputs the translated letters.”).
Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Lim into Oh since both Oh and Lim suggest a field of endeavor of extracting posture from a video in general and Lim additionally provides teachings that can be incorporated into Oh in that the artificial intelligence module is able to recognize the finger language of the speaker from the posture information so as to learn “…the importance of overall features of arm motion data and hand shape data…” (Lim: Paragraph 89). The teachings of Lim can be incorporated into Oh in that the sign language recognition device taught in Oh could be improved by applying the known technique of using AI to recognize the finger language of a speaker, in units of letters, from their posture information as taught by Lim to Oh, by substituting the algorithm used for recognizing the finger language from the extracted posture information present in Oh with the AI model taught by Lim. In addition, the AI model taught by Lim may be adapted to recognize the finger language of a speaker, in units of syllables, as opposed to units of letters, from their posture information as taught by Oh. Furthermore, one of ordinary skill in the art could have applied the known technique as claimed by known methods. One of ordinary skill in the art would have recognized that the results of the improvement would be predictable.
Oh in view of Lim does not disclose expressly: wherein the one or more processors are configured to train the AI model using training data, wherein, for the training the AI model, the one or more processors are further configured to: extract posture information of speakers from finger language training videos used for training; and augment the training data by processing data of the finger language training videos into the training data for training the AI model, using the extracted posture information.
Xu discloses: wherein one or more processors are configured to train an AI model using training data, wherein, for training the AI model, the one or more processors are further configured to: extract posture information of speakers from finger language training videos used for training (Xu: 0065: “Obtain skeleton data based on sign language video data, including sign language joint data and sign language bone data.”); and augment the training data by processing data of the finger language training videos into the training data for training the AI model, using the extracted posture information (Xu: 0008 & 0009: “Performing data fusion on the sign language joint data and the sign language skeleton data to form fused dynamic skeleton data, i.e., sign language joint-skeleton data; dividing the sign language joint-skeleton data into training data and test data;”).
Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the teachings of Xu into Oh in view of Lim since both Oh in view of Lim and Xu suggest a field of endeavor of extracting posture from video in general and Xu additionally provides teachings that can be incorporated into Oh in view of Lim in that Xu teaches a system for training a sign language model as “…the training set data is trained and learned…which actually means that all weights and parameters…have been optimized for correct sign language classification.” (Xu: Paragraph 105; The training of the AI model allows for model optimization). The teaching of Xu can be incorporated into Oh in view of Lim in that the AI finger language recognition model as taught by Oh in view of Lim could be improved by applying the known technique of training the sign language recognition model by extracting posture information from video and processing the extracted posture information into training data as taught by Xu. Furthermore, one of ordinary skill in the art could have applied the known technique as claimed by known methods. One of ordinary skill in the art would have recognized that the results of the improvement would be predictable.
Oh in view of Lim and Xu does not disclose expressly: augment the training data by processing data of the finger language training videos into the training data for training the AI model, using the extracted posture information, including modifying syllables forming a finger language word and generating virtual training data by combining matched posture information which is matched to a result of the modified syllables.
Li discloses: augmentation of training data by processing data of sign language training videos into the training data for training the AI model, using the extracted posture information, including editing words forming a sign language sentence and generating virtual training data by combining matched posture information which is matched to a result of the editing of words (Li: 0027: “During the training process, given a real video and its corresponding text, an editing operation is first performed to generate pseudo text. At the same time, pseudo videos are composed based on the video segments alignment labels obtained in the iterative optimization stage. Afterwards, the real and fake video-text pairs are passed into the same shared parameter recognition model and the CTC loss is calculated separately.”;
0032: “Given a real video-text pair, first replace, insert or delete a word in the real text, and repeat the operation multiple times; the new word inserted or replaced is randomly selected from the vocabulary in the training set; at the same time, perform corresponding operations on the real video according to the alignment label obtained in the iterative optimization stage, and the alignment label is the alignment label between the real video and the corresponding sign language word sequence obtained by decoding; then perform k editing operations, each operation is randomly selected from replacement, insertion and deletion, and at the same time, k is randomly selected from [1, K], K is the upper limit of the number of editing operations; through the above method, a number of pseudo video-text pairs that meet the requirements is generated, and the specific number can be set according to actual conditions.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate the algorithms for generating pseudo training data based on the replacement, insertion, and deletion of words in a sign language video taught by Li for the generation of training data for the finger language recognition system disclosed by Oh in view of Lim and Xu, by editing syllables forming finger language words and aligning the edits with corresponding video segments. The suggestion/motivation for doing so would have been to allow “…the recognition model to distinguish the difference between the real and augmented pseudo-data modalities, thereby improving the continuous sign language recognition performance.” (Li: Paragraph 13). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Oh in view of Lim, Xu, and Li does not disclose expressly: augment the training data by processing data of the finger language training videos into the training data for training the AI model, using the extracted posture information, including changing an order of syllables forming a finger language word and generating virtual training data by combining matched posture information which is matched to a result of the changing the order of syllables.
Wei discloses: the augmenting of the training data by changing the order of the words forming the sentences in order to generate training data (Wei: Page 1: Col 2: Paragraph 3 - Page 2: Col 1: Paragraph 5: “For a given sentence in the training set, we randomly choose and perform one of the following operations…Random Swap (RS): Randomly choose two words in the sentence and swap their positions. Do this n times…Furthermore, for each original sentence, we generate n_aug augmented sentences.”).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to implement the known technique of augmenting training data based on the random swapping of words forming a sentence as taught by Wei into the algorithms for generating training data by modifying syllables as disclosed by Oh in view of Lim, Xu, and Li, by swapping syllables forming finger language words and aligning the swapped syllables with their corresponding video segments. The suggestion/motivation for doing so would have been “We have shown that simple data augmentation operations can boost performance on text classification tasks. Although improvement is at times marginal, EDA substantially boosts performance and reduces overfitting when training on smaller datasets” (Wei: Page 5: Col 2: Paragraph 3). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Oh in view of Lim, Xu, and Li with Wei to obtain the invention as specified in claim 1.
Regarding claim 2, Oh in view of Lim, Xu, Li, and Wei discloses: The finger language video recognition system of claim 1,
Oh in view of Lim, Xu, Li, and Wei does not disclose expressly: wherein the posture information of the speaker is a skeleton model which is expressed by positions of feature points of face, hands, arms, and body of the speaker.
Lim further discloses: wherein the posture information of the speaker is a skeleton model (Lim: Figure 2) which is expressed by positions of feature points of face, hands, arms, and body of the speaker (Lim: 0037 & 0039: “The above points are provided including a plurality of hand points, which are feature points indicating center position information of the user's right and left hands… the joint points consist of a total of 24 points including the user’s head, both eyes, nose, mouth, both cheeks, chin, neck, both shoulders, the spine between both shoulders, the longitudinal center of the spine, both hips, the spine between both hips, both elbows, both wrists, both knees, and both ankles.”).
Thus, it would have been obvious for one of ordinary skill in the art, prior to filing, to implement the further teachings of Lim into Oh in view of Lim, Xu, Li, and Wei since both Oh in view of Lim, Xu, Li, and Wei and the further teaching of Lim suggest a field of endeavor of extracting posture from a video in general and Lim additionally provides teachings that can be incorporated into Oh in view of Lim, Xu, Li, and Wei in that the skeleton extraction module extracts feature points from throughout the face, hands, arms, and body as to allow for “…the arm movements and hand shapes during the sign language movement are preprocessed in the skeleton extraction module..” (Lim: Paragraph 34). The further teachings of Lim can be incorporated into Oh in view of Lim, Xu, Li, and Wei in that the skeleton generations taught by Oh in view of Lim, Xu, Li, and Wei could be improved by applying the known technique of extracting face, hands, arms, and body feature points as further taught by Lim. Furthermore, one of ordinary skill in the art could have applied the known technique as claimed by known methods. One of ordinary skill in the art would have recognized that the results of the improvement would be predictable. Therefore, it would have been obvious to combine Oh in view of Lim, Xu, Li, and Wei with the further teaching of Lim to obtain the invention as specified in claim 2.
Regarding claim 6, Oh in view of Lim, Xu, Li, and Wei disclose: The finger language video recognition system of claim 1, wherein the one or more processors further comprises a generator configured to generate virtual training data by utilizing a finger language word in units of syllables (Li: Paragraph 27: “During the training process, given a real video and its corresponding text, an editing operation is first performed to generate pseudo text. At the same time, pseudo videos are composed based on the video segments alignment labels obtained in the iterative optimization stage. Afterwards, the real and fake video-text pairs are passed into the same shared parameter recognition model and the CTC loss is calculated separately.”; Wherein the generated pseudo training data, generated according to the generation methods taught by Li and Wei, performed on sign language words based on units of syllables as taught by Oh, is generated by modifying the syllables in a finger language word and aligning video segments).
Regarding claim 8, Oh in view of Lim, Xu, Li, and Wei disclose: The finger language video recognition system of claim 6, wherein the generator comprises a second module configured to delete some of syllables forming a finger language word, and to generate virtual training data by combining matched posture information (Li: 0032: “Given a real video-text pair, first replace, insert or delete a word in the real text, and repeat the operation multiple times; the new word inserted or replaced is randomly selected from the vocabulary in the training set; at the same time, perform corresponding operations on the real video according to the alignment label obtained in the iterative optimization stage, and the alignment label is the alignment label between the real video and the corresponding sign language word sequence obtained by decoding; then perform k editing operations, each operation is randomly selected from replacement, insertion and deletion, and at the same time, k is randomly selected from [1, K], K is the upper limit of the number of editing operations; through the above method, a number of pseudo video-text pairs that meet the requirements is generated, and the specific number can be set according to actual conditions.”; Wherein the sign language words are based on units of syllables as taught by Oh, and wherein the generated pseudo training data is generated by deleting the syllables in a finger language word and aligning video segments).
Regarding claim 9, Oh in view of Lim, Xu, Li, and Wei disclose: The finger language video recognition system of claim 6, wherein the generator comprises a third module configured to add a new syllable to a finger language word, and to generate virtual training data by combining matched posture information (Li: 0032: “Given a real video-text pair, first replace, insert or delete a word in the real text, and repeat the operation multiple times; the new word inserted or replaced is randomly selected from the vocabulary in the training set; at the same time, perform corresponding operations on the real video according to the alignment label obtained in the iterative optimization stage, and the alignment label is the alignment label between the real video and the corresponding sign language word sequence obtained by decoding; then perform k editing operations, each operation is randomly selected from replacement, insertion and deletion, and at the same time, k is randomly selected from [1, K], K is the upper limit of the number of editing operations; through the above method, a number of pseudo video-text pairs that meet the requirements is generated, and the specific number can be set according to actual conditions.”; Wherein the sign language words are based on units of syllables as taught by Oh, and wherein the generated pseudo training data is generated by inserting the syllables in a finger language word and aligning video segments).
As per claim 10, arguments made in rejecting claim 1 are analogous.
As per claim 11, arguments made in rejecting claim 1 are analogous.
As per claim 12, arguments made in rejecting claim 2 are analogous.
As per claim 14, arguments made in rejecting claim 6 are analogous.
As per claim 16, arguments made in rejecting claim 8 are analogous.
As per claim 17, arguments made in rejecting claim 9 are analogous.
Claim(s) 5 & 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oh in view of Lim, Xu, Li, and Wei, and further in view of Ma et al. (Foreign Publication Number: CN112712003A) hereinafter referenced as Ma.
Regarding claim 5, Oh in view of Lim, Xu, Li, and Wei discloses: the finger language video recognition system of claim 1, wherein the one or more processors further comprises a generator configured to generate virtual training data by utilizing a finger language word in units of syllables (Li: 0032: “Given a real video-text pair, first replace, insert or delete a word in the real text, and repeat the operation multiple times; the new word inserted or replaced is randomly selected from the vocabulary in the training set; at the same time, perform corresponding operations on the real video according to the alignment label obtained in the iterative optimization stage, and the alignment label is the alignment label between the real video and the corresponding sign language word sequence obtained by decoding; then perform k editing operations, each operation is randomly selected from replacement, insertion and deletion, and at the same time, k is randomly selected from [1, K], K is the upper limit of the number of editing operations; through the above method, a number of pseudo video-text pairs that meet the requirements is generated, and the specific number can be set according to actual conditions.”; Wherein the generated pseudo training data, generated according to the generation methods taught by Li and Wei, performed on sign language words based on units of syllables as taught by Oh, is generated by modifying the syllables in a finger language word and aligning video segments).
Oh in view of Lim, Xu, Li, and Wei does not disclose expressly: wherein the one or more processors are configured to augment the posture information of the speaker.
Ma discloses: wherein one or more processors are configured to augment the posture information of the speaker (Ma: Paragraph 10: “Step S2, performing data enhancement on each sample in the skeleton action sequence training set to obtain an enhanced training set;”).
Ma is a similar system to the claimed invention, as evidenced by the fact that Ma teaches a system for training a model to recognize gestures from skeletal action sequences extracted from video, wherein the motivation of “the size of the training set can be increased…which can help the…recognition model learn a compact cluster for each category of samples without…expanding the data distribution.” (Ma: Paragraph 5) would have prompted a predictable variation of Oh in view of Lim, Xu, Li, and Wei by applying Ma’s known principle of a “processing unit is configured to augment the posture information of the speaker” (Ma: Paragraph 10: “Step S2, performing data enhancement on each sample in the skeleton action sequence training set to obtain an enhanced training set;”). When applying this known technique to Oh in view of Lim, Xu, Li, and Wei, it would have been obvious to implement the data enhancement techniques taught by Ma by performing data enhancement on the extracted posture information of Oh in view of Lim, Xu, Li, and Wei for augmentation, and processing the augmented data, aligned with finger language words in units of syllables, into training data using the processors of Oh in view of Lim, Xu, Li, and Wei for training the finger language recognition system of Oh in view of Lim, Xu, Li, and Wei.
Furthermore, one of ordinary skill in the art could have applied the known technique as claimed by known methods. One of ordinary skill in the art would have recognized that the results of the improvement would be predictable. Therefore, it would have been obvious to combine Oh in view of Lim, Xu, Li, and Wei with Ma to obtain the invention as specified in claim 5.
As per claim 13, arguments made in rejecting claim 5 are analogous.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY J RODRIGUEZ whose telephone number is (703)756-5821. The examiner can normally be reached Monday-Friday 10am-7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached at (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANTHONY J RODRIGUEZ/
Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672