DETAILED ACTION
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention.
As amended, claims 1, 9, and 17 recite training one or more neural networks by obscuring subsets of pixels and learning “correspondences” among the subsets of pixels in accordance with spatio-temporal relationships, and further recite generating pixels based on such correspondences and incorporating the generated pixels into a video stream.
The specification, however, does not provide sufficient guidance to enable the full scope of the claimed invention. While the specification provides limited examples, such as blurring portions of images (e.g., ¶¶ 162-163) and general references to spatio-temporal features (e.g., ¶ 192), it does not describe how the claimed “correspondences” among subsets of pixels are learned, represented, or utilized by the neural network. In particular, the specification does not disclose how spatio-temporal relationships are incorporated into the training process to produce the claimed correspondences.
Further, the claims broadly encompass any manner of generating pixels “in accordance with” the learned correspondences, yet the specification does not describe how such correspondences are used to generate pixel data. The disclosure relating to substituting or superimposing pixels (e.g., ¶ 162) does not provide sufficient guidance to enable this functionality across the full scope of the claims. Accordingly, the specification does not enable a person of ordinary skill in the art to make or use the claimed invention without undue experimentation, particularly in view of the breadth of the claims.
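Examiner's note (illustrative only): to underscore the breadth at issue, the following sketch shows one conventional “obscure and reconstruct” training scheme of the kind the claims could read on. Nothing in it is taken from the specification; the PyTorch-style model, the rectangular masking strategy, and the reconstruction loss are all assumptions, and the existence of many such divergent schemes is itself indicative of the experimentation left to the skilled artisan.

```python
# Illustrative only: one of many possible "obscure and reconstruct" training
# schemes. Nothing below is taken from the specification; the model, masking
# strategy, and loss are assumptions made for purposes of illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelInpainter(nn.Module):
    """Toy encoder-decoder that reconstructs obscured pixel subsets."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(model, optimizer, frames):
    # Obscure a random rectangular subset of pixels in each training frame.
    b, _, h, w = frames.shape
    y0 = torch.randint(0, h // 2, (1,)).item()
    x0 = torch.randint(0, w // 2, (1,)).item()
    masked = frames.clone()
    masked[:, :, y0:y0 + h // 4, x0:x0 + w // 4] = 0.0
    # Any "correspondences" are implicit in the learned weights; the claims do
    # not specify how spatio-temporal relationships would constrain this step.
    loss = F.mse_loss(model(masked), frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```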
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-6, 9, 11-14, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Dhua et al. (US Patent 10,049,308 B1), in view of Basu et al. (US 2004/0205482), and further in view of Reddy et al. (US 2018/0239501).
Regarding claim 1, Dhua discloses a computer-implemented method, comprising:
performing training of one or more computer-implemented neural networks using one or more training sets of content comprising a first plurality of images (col 3 lines 54-67 deep neural networks can be trained using a set of training images exhibiting different class labels for items and including information detailing those label selections. In other embodiments, generative adversarial networks (GANs) can be used that do not require the data seeding used for training deep neural networks. Various other approaches can be used as well as discussed and suggested elsewhere herein. Deep neural networks, or deep learning, can involve representing images or other content items as vectors or sets of edges or regions to simplify the learning task. These processes can allow for unsupervised learning and hierarchical feature extraction, among other such options), wherein the first plurality of images collectively comprise a first plurality of pixels (Fig. 1 & col 2 lines 46-51 a set of images 102 is obtained that can be used to train one or more neural networks 106 to recognize various types of items represented in those images; col 5 lines 13-46 discusses pixels within the images);
performing the training using processor hardware designed to perform cognitive computing, wherein the training comprises obscuring one or more subsets of the first plurality of pixels (col 12 lines 25-56 process 900 for generating a synthesized training image; An item mask can then be generated 908 based upon the locations of the background pixels, as a binary mask would discriminate between background pixel locations and non-background pixel locations, which could also be identified as item pixel locations; The item region or portion can be determined using the mask and then blended 914 into the selected background region to generate a synthesized training image that has minimal edge artifacts resulting from the item region selection);
learning automatically by the one or more computer-implemented neural networks during the training correspondences among a plurality of subsets of the first plurality of pixels (col 11 lines 35-60 For at least some of the images, such as a randomly selected subset or another such determination, text or other content associated with the images can be analyzed to determine whether one or more items represented in those images correspond to a classification for which a neural network is to be trained);
receiving an instruction from a user (col 3 lines 4-45 the neural network can be provided to a classifier 112 that is able to accept query images 114 from various sources, such as customers or end users, and generate classifications 116 for items represented in those images);
generating, in response to the instruction, a second plurality of pixels that are in accordance with the correspondences learned by the one or more computer-implemented neural networks that underwent the training (col 3 lines 4-45 the neural network can be provided to a classifier 112 that is able to accept query images 114 from various sources, such as customers or end users, and generate classifications 116 for items represented in those images; col 9 lines 39-46 Similarly, FIG. 5B illustrates another example interface 550 that can be utilized in accordance with various embodiments. In this example, a query image 552 has been provided and instead of displaying information about that item, which may or may not be available, the interface displays content for result items 554 that are of the same classification as the item represented in the query image).
Dhua fails to teach, but Basu teaches, learning automatically by the one or more computer-implemented neural networks (¶28, e.g., neural network classifiers) during the training correspondences among a plurality of subsets of the first plurality of pixels in accordance with spatio-temporal relationships of the plurality of subsets of the first plurality of pixels (¶14 build efficient and accurate models of semantic concepts using supervised training methods. Different types of relationships can be used to assist the user, such as spatio-temporal similarity, temporal proximity, and semantic proximity. Spatio-temporal similarity between regions or blobs of image sequences can be used to cluster the blobs in the videos before the annotation task begins).
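Examiner's note (illustrative only): the spatio-temporal similarity concept relied upon from Basu can be pictured with the following sketch. It is not Basu's disclosed algorithm; the descriptor contents and weightings are assumptions intended only to show how appearance, spatial position, and frame index can jointly score correspondence between image regions.

```python
# Illustrative sketch of spatio-temporal similarity between image regions, in
# the spirit of Basu ¶14. This is not Basu's disclosed algorithm; the feature
# choices and weights are assumptions made for purposes of illustration.
import numpy as np

def region_descriptor(patch, frame_index, centroid):
    """Appearance (mean RGB) concatenated with space-time position."""
    appearance = patch.reshape(-1, 3).mean(axis=0)      # mean color of region
    position = np.asarray(centroid, dtype=float)        # (row, col) centroid
    return np.concatenate([appearance, position, [float(frame_index)]])

def spatio_temporal_similarity(d1, d2, w_app=1.0, w_st=0.1):
    """Higher when regions look alike AND are near in space and time."""
    app_dist = np.linalg.norm(d1[:3] - d2[:3])          # appearance distance
    st_dist = np.linalg.norm(d1[3:] - d2[3:])           # space-time distance
    return float(np.exp(-(w_app * app_dist + w_st * st_dist)))
```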
Dhua further fails to teach, but Reddy teaches, including automatically the second plurality of pixels within a video stream comprising a sequence of a second plurality of images (¶46 the output renderers component 135 includes a video renderer 155 configured to generate video signals to cause a video output device 140 to generate a display of the query text 120 and then response options 130 for a response field 125. For example, the video renderer 155 can format the query text 120 in a particular font, size, color, and layout. The video renderer 155 may also render the response options 130 for a response field 125 for display, including to a particular font, size, color, and layout, including position relative to the query text 120; ¶51 query text and the response options can be formatted to be output to a display. The query is rendered to the output device at 230); and
providing the video stream to the user (¶51 query text and the response options can be formatted to be output to a display. The query is rendered to the output device at 230; ¶97 the rendering engine 532 includes an audio renderer 534 and a video renderer 536 (e.g., that renders content to a display, which content can be a static user interface or an interface that includes video content as a sequence of frames or images)).
Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of learning automatically by the one or more computer-implemented neural networks during the training correspondences among a plurality of subsets of the first plurality of pixels in accordance with spatio-temporal relationships of the plurality of subsets of the first plurality of pixels from Basu, and the teaching of including automatically the second plurality of pixels within a video stream comprising a sequence of a second plurality of images, and providing the video stream to the user, from Reddy, into the method as disclosed by Dhua. The motivation for doing so is to improve efficient interactive annotation or labeling of unlabeled data and to improve interaction mechanisms with devices.
Regarding claim 3, the combination of Dhua, Basu and Reddy discloses the method of claim 1, wherein the obscuring of the one or more subsets of pixels comprises performing a blurring of the one or more subsets of pixels (Dhua col 7 line 64 to col 8 line 3 An amount of blending can be performed at this region, to attempt to create a smooth transition instead of an abrupt change at the edge of the mask where the pixel values are excluded from consideration. In some embodiments the blending can be performed by blurring the binary mask with a Gaussian kernel).
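Examiner's note (illustrative only): the cited mask-blurring technique of Dhua can be pictured with the following sketch. It is not asserted to be Dhua's actual implementation; the sigma value and the blend formula are assumptions, shown only to illustrate the general technique of softening a binary mask with a Gaussian kernel before blending.

```python
# Illustrative sketch of blending via a Gaussian-blurred binary mask, in the
# spirit of Dhua col 7-8. Sigma and the blend formula are assumptions; this
# is not Dhua's code.
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_with_blurred_mask(item_img, background, binary_mask, sigma=3.0):
    """Soften mask edges so the item transitions smoothly into the background."""
    soft = gaussian_filter(binary_mask.astype(float), sigma=sigma)  # 0..1 weights
    soft = soft[..., np.newaxis]              # broadcast across color channels
    return soft * item_img + (1.0 - soft) * background
```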
Regarding claim 4, the combination of Dhua, Basu and Reddy discloses the method of claim 1, wherein the obscuring of the one or more subsets of pixels is performed with respect to one or more identified objects (Dhua col 12 lines 25-56 process 900 for generating a synthesized training image; An item mask can then be generated 908 based upon the locations of the background pixels, as a binary mask would discriminate between background pixel locations and non-background pixel locations, which could also be identified as item pixel locations; The item region or portion can be determined using the mask and then blended 914 into the selected background region to generate a synthesized training image that has minimal edge artifacts resulting from the item region selection).
Regarding claim 5, the combination of Dhua, Basu and Reddy discloses the method of claim 1, wherein the instruction from the user comprises one or more images (Dhua col 3 lines 4-45 the neural network can be provided to a classifier 112 that is able to accept query images 114 from various sources, such as customers or end users, and generate classifications 116 for items represented in those images).
Regarding claim 6, the combination of Dhua, Basu and Reddy discloses the method of claim 1, wherein the learned correspondences among the plurality of subsets of the plurality of pixels comprise learned correspondences for which the corresponding subsets of the plurality of pixels are in a plurality of the first plurality of images and the corresponding subsets are each associated with one or more training syntactical elements (Dhua col 10 lines 35-60 The classification can be determined using a trained classifier, such as may utilize a convolutional neural network 622 or other such deep network or machine learning algorithm, etc. A training component 620 can perform the training on the models and provide the resulting results and/or trained models for use in determining the appropriate classifications; col 11 lines 35-60 For at least some of the images, such as a randomly selected subset or another such determination, text or other content associated with the images can be analyzed to determine whether one or more items represented in those images correspond to a classification for which a neural network is to be trained; col 12 lines 5-14 During processing the item portion of the image can be determined and/or isolated 814 for use in generating at least one training image. A random region of one of the subset of background images can be selected 816 as a background for the synthesized image).
Regarding claims 9 and 11-13 (drawn to a system):
The proposed combination of Dhua, Basu and Reddy, explained in the rejection of method claims 1, 3-4 and 6, renders obvious the limitations of system claims 9 and 11-13 because those limitations occur in the operation of the proposed combination as discussed above. Thus, the analysis presented above for claims 1, 3-4 and 6 is equally applicable to claims 9 and 11-13.
Regarding claim 14, the combination of Dhua, Basu and Reddy discloses the system of claim 9, wherein the instruction from the user comprises a plurality of syntactical elements (Reddy ¶51 At 220, an input type is determined for a constrained user input device. The input type can be, for example, audio input (e.g., speech or other oral input), neural input (e.g., an EEG headset), motion input (e.g., a pointing device, camera, or hardware positional sensors), or other types of input). The motivation to combine the references is discussed above in the rejection of claims 1 and 9.
Regarding claim 17, Dhua discloses a mobile apparatus (Fig. 6 a computing device 602) comprising:
interpret the instruction by applying one or more trained computer-implemented neural networks (col 3 lines 4-45 the neural network can be provided to a classifier 112 that is able to accept query images 114 from various sources, such as customers or end users, and generate classifications 116 for items represented in those images);
generate, in response to the interpreted instruction, a second plurality of pixels by applying correspondences learned by one or more trained computer-implemented neural networks (col 3 lines 4-45 the neural network can be provided to a classifier 112 that is able to accept query images 114 from various sources, such as customers or end users, and generate classifications 116 for items represented in those images; col 9 lines 39-46 Similarly, FIG. 5B illustrates another example interface 550 that can be utilized in accordance with various embodiments. In this example, a query image 552 has been provided and instead of displaying information about that item, which may or may not be available, the interface displays content for result items 554 that are of the same classification as the item represented in the query image),
wherein the one or more trained computer-implemented neural networks are trained by performing training using one or more training sets of content comprising a first plurality of images that collectively comprise a first plurality of pixels (col 3 lines 54-67 deep neural networks can be trained using a set of training images exhibiting different class labels for items and including information detailing those label selections. In other embodiments, generative adversarial networks (GANs) can be used that do not require the data seeding used for training deep neural networks. Various other approaches can be used as well as discussed and suggested elsewhere herein. Deep neural networks, or deep learning, can involve representing images or other content items as vectors or sets of edges or regions to simplify the learning task. These processes can allow for unsupervised learning and hierarchical feature extraction, among other such options; Fig. 1 & col 2 lines 46-51 a set of images 102 is obtained that can be used to train one or more neural networks 106 to recognize various types of items represented in those images; col 5 lines 13-46 discusses pixels within the images), wherein the training comprises obscuring one or more subsets of the first plurality of pixels (col 12 lines 25-56 process 900 for generating a synthesized training image; An item mask can then be generated 908 based upon the locations of the background pixels, as a binary mask would discriminate between background pixel locations and non-background pixel locations, which could also be identified as item pixel locations; The item region or portion can be determined using the mask and then blended 914 into the selected background region to generate a synthesized training image that has minimal edge artifacts resulting from the item region selection),
Dhua fails to teach, but Basu teaches, wherein correspondences among a plurality of subsets of the first plurality of pixels are learned by the one or more computer-implemented neural networks (¶28, e.g., neural network classifiers) in accordance with spatio-temporal relationships of the plurality of subsets of the first plurality of pixels during the training (¶14 build efficient and accurate models of semantic concepts using supervised training methods. Different types of relationships can be used to assist the user, such as spatio-temporal similarity, temporal proximity, and semantic proximity. Spatio-temporal similarity between regions or blobs of image sequences can be used to cluster the blobs in the videos before the annotation task begins);
Dhua further fails to teach, but Reddy teaches, one or more cameras and associated circuitry (¶51 motion input (e.g., a pointing device, camera, or hardware positional sensors));
a microphone and associated circuitry (¶87 the user configuration information for the particular input device type (e.g., a microphone, regardless of the particular model, may be handled by a common device interface 528)); and
one or more hardware processors, wherein at least one of the one or more hardware processors is designed to perform cognitive computing (¶116 A processing unit can be a general-purpose central processing unit (CPU), processor in an application-specific integrated circuit (ASIC), or any other type of processor; The memory 720, 725 stores software 780 implementing one or more innovations described herein, in the form of computer-executable instructions suitable for execution by the processing unit(s) 710, 715. The memory 720, 725, may also store database data, such as data associated with the database 512 of FIG. 5), wherein the one or more processors are configured to:
receive an instruction from a user, wherein the instruction comprises information received from the microphone (¶51 At 220, an input type is determined for a constrained user input device. The input type can be, for example, audio input (e.g., speech or other oral input), neural input (e.g., an EEG headset), motion input (e.g., a pointing device, camera, or hardware positional sensors), or other types of input. A query of the one or more queries is converted to an output format for the output type, and optionally the input type, at 225; ¶53 a query 312 can be a question, but can be a more general request or indication for user input);
interpret the instruction by applying one or more trained computer-implemented neural networks (¶51 A query of the one or more queries is converted to an output format for the output type, and optionally the input type, at 225; ¶84 Heuristics, machine learning, and other feedback or training techniques can be used to analyze user behavior and adjust thresholds for a user action.);
include automatically the second plurality of pixels within a video stream comprising a sequence of a second plurality of images (¶46 the output renderers component 135 includes a video renderer 155 configured to generate video signals to cause a video output device 140 to generate a display of the query text 120 and then response options 130 for a response field 125. For example, the video renderer 155 can format the query text 120 in a particular font, size, color, and layout. The video renderer 155 may also render the response options 130 for a response field 125 for display, including to a particular font, size, color, and layout, including position relative to the query text 120; ¶51 query text and the response options can be formatted to be output to a display. The query is rendered to the output device at 230); and
provide the video stream to the user (¶51 query text and the response options can be formatted to be output to a display. The query is rendered to the output device at 230; ¶97 the rendering engine 532 includes an audio renderer 534 and a video renderer 536 (e.g., that renders content to a display, which content can be a static user interface or an interface that includes video content as a sequence of frames or images)).
Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein correspondences among a plurality of subsets of the first plurality of pixels are learned by the one or more computer-implemented neural networks in accordance with spatio-temporal relationships of the plurality of subsets of the first plurality of pixels during the training from Basu, and the teaching of one or more cameras and associated circuitry; a microphone and associated circuitry; and one or more hardware processors, wherein at least one of the one or more hardware processors is designed to perform cognitive computing, wherein the one or more processors are configured to: receive an instruction from a user, wherein the instruction comprises information received from the microphone, interpret the instruction by applying one or more trained computer-implemented neural networks, include automatically the second plurality of pixels within a video stream comprising a sequence of a second plurality of images, and provide the video stream to the user from Reddy, into the apparatus as disclosed by Dhua. The motivation for doing so is to improve efficient interactive annotation or labeling of unlabeled data and to improve interaction mechanisms with devices.
Regarding claim 18, the combination of Dhua, Basu and Reddy discloses the apparatus of claim 17, wherein the apparatus comprises a wearable device (Reddy ¶38 Examples of output devices 140 include audio output devices (e.g., speakers, headphones, virtual reality headsets) and video output devices (e.g., a monitor, television, virtual reality headset, touchscreen, laptop display, projector)). The motivation to combine the references is discussed above in the rejection of claim 17.
Regarding claim 19, the combination of Dhua, Basu and Reddy discloses the apparatus of claim 18, wherein the video stream comprises an augmented reality that uses information provided by the one or more cameras (Reddy ¶38 Examples of output devices 140 include audio output devices (e.g., speakers, headphones, virtual reality headsets) and video output devices (e.g., a monitor, television, virtual reality headset, touchscreen, laptop display, projector)). The motivation to combine the references is discussed above in the rejection of claim 17.
Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Dhua, Basu and Reddy as applied to claims 1 and 9 above, and further in view of Kwatra et al. (US Patent 8,923,607 B1).
Regarding claim 2, the combination of Dhua, Basu and Reddy discloses the method of claim 1, but fails to teach, where Kwatra teaches, wherein the first plurality of images are sequentially arranged in one or more videos (col 9 lines 30-44 As a preliminary step it is assumed that the video clip features 500 have been extracted from the video clip. The video clip features 500 are the sequentially ordered frames in the video).
Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein the first plurality of images are sequentially arranged in one or more videos from Kwatra into the method as disclosed by the combination of Dhua, Basu and Reddy. The motivation for doing so is to improve learning techniques to detect and identify objects in videos.
Regarding claim 10 (drawn to a system):
The proposed combination of Dhua, Basu, Reddy, and Kwatra, explained in the rejection of method claim 2, renders obvious the limitations of system claim 10 because those limitations occur in the operation of the proposed combination as discussed above. Thus, the analysis presented above for claim 2 is equally applicable to claim 10.
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Dhua, Basu and Reddy as applied to claims 1 and 9 above, and further in view of Chester et al. (US 2017/0262433).
Regarding claim 7, the combination of Dhua, Basu and Reddy discloses the method of claim 1, but fails to teach, where Chester teaches, wherein generating the second plurality of pixels is in accordance with one or more automatically determined probabilities (¶41 The processor 236 of the server 130, upon receiving the search query for the image search engine 256, is configured to submit the search request for the search query to the image search engine 256; configured to provide a listing of the plurality of images with a ranking (or prioritization) according to user interaction probabilities of the corresponding visual words (e.g., using the visual-word-interaction-probability data 244). The listing of the plurality of images that is prioritized (or ranked) according to the user interaction probabilities is provided, for example, by the processor 236 of the server 130 being configured to submit the plurality of images to the convolutional neural network 234 prior to the search query being received, and the convolutional neural network 234 identifying the language terms associated with each of the plurality of images).
Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein generating the second plurality of pixels is in accordance with one or more automatically determined probabilities from Chester into the method as disclosed by the combination of Dhua, Basu and Reddy. The motivation for doing so is to improve methods and systems for providing images mapped to spoken-language terms.
Regarding claim 15 (drawn to a system):
The proposed combination of Dhua, Basu, Reddy, and Chester, explained in the rejection of method claim 7, renders obvious the limitations of system claim 15 because those limitations occur in the operation of the proposed combination as discussed above. Thus, the analysis presented above for claim 7 is equally applicable to claim 15.
Claims 8, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Dhua, Basu and Reddy as applied to claims 1, 9, and 17 above, and further in view of Bull et al. (US 2010/0064053).
Regarding claim 8, the combination of Dhua, Basu and Reddy discloses the method of claim 1, but fails to teach, where Bull teaches, wherein the video stream is further generated in accordance with an inference of a preference of the user that is based on a plurality of usage behaviors that occur before the instruction from the user is received (¶53 the playback of personalized content to be integrated with the streaming content; ¶75 the personalized content may include one or more still images or motion picture sequences dynamically generated from text, images, audio data, and/or video data; the personalized content may be generated based on usage history, user preferences, device configurations, or playback rules from a third-party (e.g., remote media provider)).
Therefore, it would have been obvious to one with ordinary skill in the art before the effective filing date of the invention to have implemented the teaching of wherein the video stream is further generated in accordance with an inference of a preference of the user that is based on a plurality of usage behaviors that occur before the instruction from the user is received from Bull into the method as disclosed by the combination of Dhua, Basu and Reddy. The motivation for doing so is to provide enhanced playback of personalized or synthesized content in addition to streaming content.
Regarding claim 16 (drawn to a system):
The proposed combination of Dhua, Basu, Reddy, and Bull, explained in the rejection of method claim 8, renders obvious the limitations of system claim 16 because those limitations occur in the operation of the proposed combination as discussed above. Thus, the analysis presented above for claim 8 is equally applicable to claim 16.
Regarding claim 20 (drawn to an apparatus):
The proposed combination of Dhua, Basu, Reddy, and Bull, explained in the rejection of method claim 8, renders obvious the limitations of apparatus claim 20 because those limitations occur in the operation of the proposed combination as discussed above. Thus, the analysis presented above for claim 8 is equally applicable to claim 20.
Response to Arguments
Applicant's arguments filed 3/11/2026 have been fully considered but they are not persuasive.
Applicant argues that the claims as amended have full support in the specification and thus are enabled. The examiner respectfully disagrees. The claims, as amended, recite training one or more neural networks by obscuring subsets of pixels and learning “correspondences” among the subsets of pixels in accordance with spatio-temporal relationships, and further recite generating pixels based on such correspondences and incorporating the generated pixels into a video stream. The specification, however, does not provide sufficient guidance to enable the full scope of the claimed invention. While the specification provides limited examples, such as blurring portions of images (e.g., ¶ 163) and general references to spatio-temporal features (e.g., ¶ 192), it does not describe how the claimed “correspondences” among subsets of pixels are learned, represented, or utilized by the neural network. In particular, the specification does not disclose how spatio-temporal relationships are incorporated into the training process to produce the claimed correspondences.
Further, the claims broadly encompass any manner of generating pixels “in accordance with” the learned correspondences, yet the specification does not describe how such correspondences are used to generate pixel data. The disclosure relating to substituting or superimposing pixels (e.g., ¶ 162) does not provide sufficient guidance to enable this functionality across the full scope of the claims. Accordingly, the specification does not enable a person of ordinary skill in the art to make or use the claimed invention without undue experimentation, particularly in view of the breadth of the claims. See the rejection under 35 U.S.C. 112(a) above.
Applicant’s additional arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN KY whose telephone number is (571) 272-7648. The examiner can normally be reached Monday-Friday, 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEVIN KY/Primary Examiner, Art Unit 2671