DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 01/26/2026 has been entered. Applicant’s amendments to the claims have overcome each and every objection previously set forth in the Non-Final Office Action dated 10/27/2025. Claims 1, 4-5, 8, 11-12, 15, 18-19, and 21 remain pending in the application, with claims 6, 13, and 20 having been cancelled and claim 21 being newly added.
Response to Arguments
Applicant’s arguments with respect to the independent claims (pg. 12-14 of Remarks of 01/26/2026) have been considered but are moot because the new ground of rejection does not rely on any combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
In the last paragraph on pg. 14 through pg. 16 of the Remarks of 01/26/2026, Applicant argues that “Applicant does not find a legitimate reason for a person of ordinary skill in the art to combine the Cohen reference with the Powles reference” due to the specific field of the Powles application, which focuses on generating building designs. Examiner respectfully disagrees. An aspect of both the claimed invention and Cohen’s invention is generating replacement objects in images. Powles utilizes a generative adversarial network (GAN), a well-known method in the art of synthetic image generation, to generate image elements (para 477: “use a StackGAN to synthesise an image from the text, and the resulting image analysed by one or more GANs to determine whether it makes sense in the context of the plan”). The combination relies on choosing and utilizing a GAN model, an overall process that is unaffected by the type of images produced (elements of a building plan, in the case of Powles). Furthermore, the Cohen reference does not limit the type of images/objects that may be generated, and could arguably include building construction plans if so implemented by the user. Thus, both references are within the field of generative image models, their use, and object detection. A person having ordinary skill in the art would have looked to the Powles reference for solutions for generating image elements. In para 476, cited as the motivation for the combination, Powles describes how multiple GANs may be used and compared to determine the one with the best confidence accuracy for a particular object. A person having ordinary skill in the art would thus have looked to Powles for ways to improve the accuracy of object generation.
MPEP 2141.01(a)(I) states: “In order for a reference to be proper for use in an obviousness rejection under 35 U.S.C. 103, the reference must be analogous art to the claimed invention. In re Bigio, 381 F.3d 1320, 1325, 72 USPQ2d 1209, 1212 (Fed. Cir. 2004). A reference is analogous art to the claimed invention if: (1) the reference is from the same field of endeavor as the claimed invention (even if it addresses a different problem); or (2) the reference is reasonably pertinent to the problem faced by the inventor (even if it is not in the same field of endeavor as the claimed invention).” Although Powles and the claimed invention are in the same field of endeavor (image object detection and generation), MPEP 2141.01(a)(I) further provides: “A reference outside of the field of endeavor is reasonably pertinent if a person of ordinary skill would have consulted it and applied its teachings when faced with the problem that the inventor was trying to solve.” As described above, the Powles reference is reasonably pertinent to the problem of improving GAN accuracy while generating a specific object. The rejection is maintained.
On pg. 16 of the Remarks of 01/26/2026, Applicant argues that Powles does not teach the requirements of claims 4, 11, and 18. Examiner respectfully disagrees. The list of replacement object keywords is taught by Cohen (see the claim 1 rejection). These keywords describe the type of object to be generated. Powles describes the relied-upon “GAN selection module” in para 475-481, reproduced below. Based on keywords describing the object to be generated, the GAN selection module selects an appropriate GAN to synthesize the image. Cited para 477 is further supported by the example given in para 479: “For example, the doors on the architectural plan may be generated using a GAN dedicated to the task of generating doors” and the summary in para 481: “Accordingly, the present technology provides systems and methods of selecting between generative adversarial networks to optimize the generation of architectural plans using at lease one GAN selection module.” Thus, Powles teaches a method for selecting a GAN based on at least one keyword. In the example described by Powles, “doors” is the replacement object keyword list utilized by the GAN selection module to select a GAN dedicated to the task of generating doors. In view of the foregoing, Powles teaches the relevant limitations of claims 4, 11, and 18, and the rejection is maintained.
[media_image1.png and media_image2.png (greyscale): reproductions of Powles, paragraphs 475-481]
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 8, 15, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen et al. (U.S. Patent Application Publication No. 2019/0196698 A1), hereinafter Cohen, in view of Lievens et al. (U.S. Patent Application Publication No. 2016/0323627 A1), hereinafter Lievens, Pathak et al. (Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., & Efros, A. A., “Context Encoders: Feature Learning by Inpainting,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2536-2544), hereinafter Pathak, Jin et al. (Chinese Patent Application Publication No. CN 107993724 A), hereinafter Jin, and Yang et al. (Chinese Patent Application Publication No. CN 113158245 A), hereinafter Yang.
Regarding claim 1, Cohen teaches a system (Cohen, para 37: “image enhancement system 110”; Figure 1) comprising:
a memory (Cohen, para 43: “Storage 126 stores and provides access to and from memory included in storage 126 for any suitable type of data”); and
a processor in communication with the memory (Cohen, para 42: “executing instructions stored on storage 126 on processors 124”), the processor being configured to perform processes (Cohen, para 42: “image enhancement system 110 may be implemented at least partially by executing instructions stored on storage 126 on processors 124”) comprising:
receiving inputs including a media (Cohen, para 41: “image to be edited 106”; received by Image Enhancement System 110), a targeted object keyword list (Cohen, para 66: “objects ‘car’ and ‘truck’ and adjective ‘blue’ as keywords”; see example user input, para 52: “USER: The boring sky”), and a command (Cohen, para 66: “remove request or a replace request”; see example user input, para 50: “USER: Yes, replace”; keyword list and command are received by Image Enhancement System, para 57: “stored in conversation data 128 of storage 126”), wherein the command is selected from the group consisting of a remove command and a replace command (Cohen, para 66: “remove request or a replace request”);
to generate an expanded targeted object keyword list (Cohen, synonyms of words in the editing query, para 71: “language module 148 can generate synonyms of words in an editing query, and the synonyms can be used to construct a search query”);
feeding the expanded targeted object keyword list into an object detector selector (Cohen, fed to vision module 146, para 71: “may generate the word “reflection” as a synonym to “ghost”, that can be passed to vision module 146 as an object to be segmented”);
selecting, by the object detector selector, one or more object detector models relevant for each keyword from the expanded targeted object keyword list (Cohen, para 58: “In one example, vision module 146 includes one or more neural networks that have been trained to identify a specific object, such as a sky, fire hydrant, background, car, person, face, and the like. Hence, vision module 146 can use a neural network trained to identify a specific object indicated by an editing query when ascertaining pixels of the object in the image to be edited. For instance, if an editing query includes a remove request “Remove the fire hydrant”, vision module 146 ascertains pixels in the image that correspond to a fire hydrant using a neural network trained to identify fire hydrants with training images including variations of fire hydrants”; see para 71 wherein synonyms are also sent to the vision module, therefore the object detector selector utilizes the expanded keyword list);
determining, by the object detector selector (Cohen, para 71: vision module 146) and using the one or more object detector models (Cohen, para 58: “one or more neural networks that have been trained to identify a specific object”), a target object area in the media (Cohen, para 58: “pixels of an image to be edited corresponding to an object to be removed”);
feeding both the media (Cohen, para 80: “image”) and the target object area (Cohen, para 80: “object to be removed”) into an area filler module (Cohen, fed to compositing module 152, para 80: “compositing module 152 removes content from pixels of an image to be edited (e.g., corresponding to an object to be removed)”);
based on the remove command, removing a targeted object from the target object area (Cohen, para 80: “Responsive to an editing query including a remove request, compositing module 152 removes content from pixels of an image to be edited (e.g., corresponding to an object to be removed)”) and generating, by the area filler module, a background in the target object area of the media (Cohen, para 80: “Compositing module 152 is representative of functionality configured to enhance an image to be edited by compositing fill material, replacement material”; para 19: “In one example, fill material is recognized as similar to different pixels of the image than the pixels of the image corresponding to an object to be removed. For instance, when removing a fire hydrant from a lawn, the fill material may be similar to pixels of the lawn”; see an example in Figure 3 and para 97-103 wherein a woman is removed from an image); and
based on the replace command, receiving a replacement object keyword list (Cohen, para 75: editing query for replacement material; see “semi-truck, freightliner” in para 76) and generating a replacement object for the targeted object based on the replacement object keyword list (Cohen, para 81: “responsive to an editing query including a replace request, compositing module 152 replaces content of pixels of an image to be edited (e.g., corresponding to an object to be replaced) with replacement material to form one or more composite images”; replacement object, para 77: “image search module 150 obtains images other than image to be edited 106 that can be used to enhance image to be edited 106, such as by adding fill material or replacement material from an image obtained by image search module 150”; see an example in Figure 2 and para 93-96 wherein the replacement object is a cloudy sky), wherein generating the replacement object for the targeted object based on the replacement object keyword list further comprises creating additional entries in the replacement object keyword list (Cohen, para 76: “a search query can include forming a query string including combinations of words from a user conversation with synonyms of other words from the user conversation, such as by forming a search string from the combination of “semi-truck, freightliner” for the editing query including the replace request “Replace the Peterbilt lorry with a Freightliner”, where semi-truck is a synonym for lorry”), wherein the additional entries comprise one or more specific objects (Cohen, semi-truck) based on the keywords (Cohen, lorry) used to generate the replacement object (Cohen, para 76: “receives any suitable information and instruction to obtain images including fill material or replacement material”).
Cohen fails to teach 1) feeding the targeted object keyword list into a semantic graph expansion module to generate an expanded targeted object keyword list; 2) in response to determining that no suitable object detector model is available, generating an object detector model for the expanded targeted object keyword list; 3) wherein generating the background in the target object area further comprises generating the background based on a latent representation; and 4) wherein generating the replacement object for the targeted object based on the replacement object keyword list further comprises, in response to identifying that no suitable keywords are provided from the replacement object keyword list, feeding the replacement object keyword list into the semantic graph expansion module to create additional entries in the replacement object keyword list, wherein the additional entries comprise one or more specific objects to generate the replacement object (emphasis added).
However, Lievens teaches a similar object detection method (Lievens, abstract: “selecting based on the category of the object an appropriate object detector model from at least one object detector model associated with said category. Thereafter a location of the object in the image of the multimedia asset is determined based on the object detector model that is selected”), including in response to determining that no suitable object detector model is available, generating an object detector model for the targeted object keyword list (Lievens, para 108: “If there is no object detector model available for a certain object, a new object detector model is created by selecting a region of the first image of the multimedia asset MA including at least a fragment of the object X, where the object first is associated with a category, i.e. people; female, actress and subsequently an Object Detector model is extracting based on image information in the region of the first image including at least the fragment of the object, i.e. object X, said image in said region being obtained from said object detection means, a new object detector model is created which may be applied for in the annotating of next similar objects”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the generation of an object detector model when none is suitable, as taught by Lievens, with the expanded targeted object keyword list and object detector selector in the system of Cohen in order to decrease the chance of classification errors by using object detectors specific to the type of object (Lievens, para 112: “creating a completely new object detector in order to deal with possible classification errors (i.e. that the classifier did not correctly detect the object)”).
Additionally, Pathak teaches a model that generates a background in the target object area of an image (Pathak, see Figures 1 and 2 attached below) and wherein generating the background in the target object area further comprises generating the background based on a latent representation (Pathak, pg. 2538, section 3.1: “The overall architecture is a simple encoder-decoder pipeline. The encoder takes an input image with missing regions and produces a latent feature representation of that image. The decoder takes this feature representation and produces the missing image content.”). Cohen discloses a system for generating the background in an image area, but does not disclose use of a latent representation to do so, and instead teaches use of a fill/replacement material from an image (Cohen, use of another image, para 20, or the image being edited, para 75). A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that the encoder/decoder method of Pathak, utilizing a latent representation, could have been substituted for the background generation method of Cohen because both serve the purpose of generating the background in an image area. Furthermore, the use of an encoder/decoder architecture to reconstruct an image is well known in the art, and a person of ordinary skill in the art would have been able to carry out the substitution. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute the image content generated from a latent representation, as taught by Pathak, for the fill material of Cohen according to known methods to yield the predictable result of filling the image area with background pixels based on features from the original image.
[media_image3.png (greyscale): reproduction of Pathak, Figures 1 and 2]
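For illustration only (not part of the record): a minimal sketch of the encoder/decoder pipeline Pathak describes, in which a masked image is encoded to a latent feature representation and the decoder generates the missing content from that representation. The framework, layer sizes, and names below are assumptions, not taken from Pathak.

```python
# Illustrative sketch of a context-encoder pipeline (assumed sizes/names).
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: masked image -> latent feature representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: latent representation -> generated missing content.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, masked_image):
        latent = self.encoder(masked_image)  # latent representation
        return self.decoder(latent)          # generated background content

# Usage: zero out the target object area, then generate fill content
# conditioned on the surrounding pixels via the latent representation.
image = torch.rand(1, 3, 128, 128)
mask = torch.ones_like(image)
mask[:, :, 32:96, 32:96] = 0                 # target object area
fill = ContextEncoder()(image * mask)        # shape (1, 3, 128, 128)
```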
Additionally, Jin teaches a search query system (Jin, para 106-115: “Obtain user query question data, and extract question keywords from the user query question data…Perform a matching search”) wherein in response to identifying that no suitable keywords are provided (Jin, see the para 111 citation that follows: if the matching search fails, then no suitable keywords were provided to produce a successful result), feeding the keyword list into a semantic expansion module to create additional entries in the keyword list (Jin, para 111: “If the matching search fails, semantic expansion processing is performed on the question keyword to obtain a synonym group of the hyponyms of the question keyword and its synonyms”), wherein the additional entries comprise one or more specific objects to generate a search result (Jin, para 111: “based on the synonym group of the hyponyms of the question keyword and its synonyms, a matching search is continued in the pre-set question-answer pair knowledge base and rule knowledge base to generate a search result list containing the search results”).
Cohen teaches that synonyms are generated for a query and that the synonyms are further used as input to generate a replacement object. Cohen does not specify exactly how the synonyms are generated for either the targeted object keyword list or the replacement object keyword list (Cohen, para 74 suggests that a table of synonyms is stored, but not how the synonyms are obtained for the editing query), nor that the additional entries are created in response to a case where no keywords are suitable for the generating task (in the case of the replacement object keyword list). Similar to Cohen, Jin teaches wherein synonyms are generated for a query and used as input for a subsequent task. A person having ordinary skill in the art would be able to utilize the query method of Jin with the editing query of Cohen (Cohen, para 80 citation above) to improve the object generation task in the same way as the search-matching task of Jin.
Jin teaches a known technique of utilizing semantic expansion to generate synonyms for keywords. A person having ordinary skill in the art, before the effective filing date of the claimed invention, could have applied the known technique, as taught by Jin, in the same way to the system of Cohen and achieved predictable results of improving the efficiency and accuracy of the subsequently performed task by fully expressing the query concept with synonyms. See further paragraph 100 of Jin (Jin, para 100: “words with synonymous relationships can be understood as the same concept. When searching, the search scope can be expanded or narrowed through the upper and lower semantic relationships in the ontology to improve the search efficiency. The question-answering results are more comprehensive and accurate than those of the first existing technology”).
Further, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the response to no suitable keywords, as taught by Jin, with the system of Cohen, in order to further improve the accuracy of the subsequently performed task with the use of semantically related synonyms (Jin, para 115: “perform semantic expansion processing on question keywords that fail to match and search again”). Instead of producing no object generation results or an inaccurate result when there are no suitable keywords, the combination with the teachings of Jin utilizes the benefits of keyword expansion to improve the task until it is successful.
Jin teaches performing semantic expansion on keywords, but fails to explicitly teach a semantic graph expansion module (emphasis added). Yang teaches feeding a keyword list into a semantic graph expansion module to create additional entries in the keyword list (Yang, para 114: “Semantically expand the query keywords according to the preset semantic relationship graph to obtain an expanded set of query keywords”). A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that the semantic relationship graph of Yang could have been substituted for the semantic expansion method of Jin because both serve the purpose of semantic expansion of a keyword list. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of an expanded list of query keywords to improve the performance of a subsequent task. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute the semantic relationship graph of Yang for the semantic expansion method of Jin, in the combination of Cohen and Jin, according to known methods to yield the predictable result of improving the precision of the subsequent task to generate a replacement object that reflects the expanded keyword list using semantic relationships between keywords and synonyms (Yang, para 118: “Constructing a semantic relationship graph based on mutual information to semantically expand the query can effectively improve the precision and recall rate during retrieval”).
Cohen in view of Lievens, Pathak, Jin, and Yang teaches wherein the process comprises: feeding the targeted object keyword list into a semantic graph expansion module to generate an expanded targeted object keyword list (taught by the combination of Cohen with Jin and Yang, see the description provided above and as follows). It would have been obvious to one of ordinary skill in the art to utilize the semantic graph expansion module, taught in combination by Jin and Yang, to also create additional entries in the targeted object keyword list of Cohen in order to improve the accuracy and precision of the object detector selector (para 100 of Jin and para 118 of Yang, cited above). Additionally, one of ordinary skill in the art would have been able to apply the semantic graph expansion module to the targeted object keyword list of the remove-request editing query in the same way that the query teachings of Jin are applied to the replace-request editing query above (refer to the motivation to combine described above).
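For illustration only (not part of the record): a minimal sketch of keyword expansion over a preset semantic relationship graph, in the manner described by Jin (para 111) and Yang (para 114). The graph contents and function names are hypothetical, not drawn from either reference.

```python
# Hypothetical semantic relationship graph: keyword -> synonyms/hyponyms
# (cf. Jin para 111; Yang para 114). Contents are illustrative only.
SEMANTIC_GRAPH = {
    "lorry": ["semi-truck", "freightliner"],
    "semi-truck": ["tractor-trailer"],
}

def expand_keywords(keywords, graph, depth=2):
    """Return the keyword list plus entries reachable in the graph."""
    expanded, frontier = set(keywords), list(keywords)
    for _ in range(depth):
        frontier = [term for key in frontier
                    for term in graph.get(key, []) if term not in expanded]
        expanded.update(frontier)
    return sorted(expanded)

# expand_keywords(["lorry"], SEMANTIC_GRAPH)
# -> ['freightliner', 'lorry', 'semi-truck', 'tractor-trailer']
```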
Regarding claim 8, all claim limitations are met and rendered obvious by Cohen in view of Lievens, Pathak, Jin, and Yang because the method steps of claim 8 are the same as the processes performed in claim 1. The Examiner notes that although both remove and replace commands are taught by the aforementioned prior art, the teaching of only one command is required for claim 8 because it is a method claim (see MPEP 2111.04(II) regarding contingent limitations).
Regarding claim 15, Cohen teaches a computer program product comprising a computer readable storage medium (Cohen, para 184: “Computer-readable storage media 806 is illustrated as including memory/storage 812. Storage 126 in FIG. 1 is an example of memory/storage included in memory/storage 812”; para 43: “storage 126”) having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method (Cohen, processors 124, para 42: “image enhancement system 110 may be implemented at least partially by executing instructions stored on storage 126 on processors 124”). All further claim limitations are met and rendered obvious by Cohen in view of Lievens, Pathak, Jin, and Yang because the method steps performed in claim 15 are the same as the processes performed in claim 1.
Regarding claim 21 (dependent on claim 1), Cohen in view of Lievens, Pathak, Jin, and Yang teaches wherein generating the background in the target object area further comprises generating the background based on the latent representation such that the target object is removed leaving a latent space (Pathak, entire image, including the removed object area, is encoded to a latent representation, see FIG. 2) and a background generator samples a surrounding background of the latent space and fills in the latent space based on the surrounding background (Pathak, abstract: “trained to generate the contents of an arbitrary image region conditioned on its surroundings”; section 3 on pg. 2538: “We now introduce context encoders: CNNs that predict missing parts of a scene from their surroundings”).
Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen in view of Lievens, Pathak, Jin, Yang, and Powles et al. (U.S. Patent Application Publication No. 2024/0265159 A1), hereinafter Powles.
Regarding claim 4 (dependent on claim 1), Cohen in view of Lievens, Pathak, Jin, and Yang fails to teach a deepfake generator (Cohen uses an image gallery module with image libraries to generate replacement material, see para 45), and therefore fails to teach wherein the process further comprises: feeding the replacement object keyword list into a deepfake object generator selector; identifying a deepfake object generator corresponding to the replacement object keyword list; and generating the replacement object.
However, Powles teaches feeding the object keyword list (Powles, para 477: “text”; input parameters of the GAN aggregator in Fig. 5a) into a deepfake object generator selector (Powles, para 477: “GAN selection module”, part of the GAN aggregator);
identifying a deepfake object generator corresponding to the object keyword list (Powles, para 477: “The GAN aggregator is configured to determine which GAN is best suited to the task at hand”); and generating an object (Powles, para 477: “synthesise an image from the text”).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the GAN selection module of Powles with the replacement object keyword list in the system of Cohen in view of Lievens, Pathak, Jin, and Yang in order to improve the accuracy of the generated image (Powles, para 476: “multiple GAN modules are configured to generate the same features using different techniques, to get improved confidence of the generation accuracy”).
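For illustration only (not part of the record): a minimal sketch of keyword-based generator selection in the manner of the Powles GAN selection module (para 477, 479). The registry contents and names are hypothetical.

```python
# Hypothetical registry mapping object keywords to dedicated generators
# (cf. Powles para 479: "a GAN dedicated to the task of generating
# doors"). Names and contents are illustrative only.
GAN_REGISTRY = {
    "door": "door_gan",
    "sky": "sky_gan",
}

def select_gan(replacement_keywords, registry, default="general_gan"):
    """Select the generator best suited to the requested object keywords."""
    for keyword in replacement_keywords:
        if keyword in registry:
            return registry[keyword]  # dedicated, task-specific generator
    return default                    # fall back to a general generator

# select_gan(["door"], GAN_REGISTRY) -> "door_gan"
```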
Regarding claim 11 (dependent on claim 8), all claim limitations are met and rendered obvious by Cohen in view of Lievens, Pathak, Jin, Yang, and Powles because the method steps of claim 11 are the same as the processes performed in claim 4.
Regarding claim 18 (dependent on claim 15), all claim limitations are met and rendered obvious by Cohen in view of Lievens, Pathak, Jin, Yang, and Powles because the method steps performed in claim 18 are the same as the processes performed in claim 4.
Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen in view of Lievens, Pathak, Jin, Yang, Powles, and Banerjee (U.S. Patent Application Publication No. 2021/0081719 A1).
Regarding claim 5 (dependent on claim 4), Cohen in view of Lievens, Pathak, Jin, Yang, and Powles fails to teach wherein the identifying further comprises: determining that the deepfake object generator is unavailable; and generating, using a deepfake object generator factory, the deepfake object generator.
However, Banerjee teaches wherein the identifying further comprises:
determining that a deepfake object generator is unavailable (Banerjee, para 16: “the generative adversarial network may not be trained to generate images of every possible keyword or combination of keywords”); and
generating, using a deepfake object generator factory, the deepfake object generator (Banerjee, a refined generator is generated via training, para 16: “train the generator neural network of the generative adversarial network at runtime. The generative adversarial network may then generate images corresponding to any keyword or combination of keywords on demand”).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the deepfake object generator factory of Banerjee with the system of Cohen in view of Lievens, Pathak, Jin, Yang, and Powles in order to generate images of objects corresponding to any keyword (Banerjee, para 16, cited above).
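For illustration only (not part of the record): a minimal sketch of an on-demand generator factory in the manner of Banerjee (para 16), in which a generator that is unavailable for a given keyword is trained at runtime and cached for reuse. All names are hypothetical, and the training step is a stand-in.

```python
# Hypothetical generator factory (cf. Banerjee para 16): when no generator
# is trained for a keyword, train one at runtime and cache it for reuse.
_registry = {}

def train_generator(keyword):
    """Stand-in for runtime training of a generator on `keyword` images."""
    return f"generator<{keyword}>"

def get_or_create_generator(keyword):
    if keyword not in _registry:                       # generator unavailable
        _registry[keyword] = train_generator(keyword)  # train on demand
    return _registry[keyword]

# get_or_create_generator("fire hydrant") trains once, then reuses the model.
```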
Regarding claim 12 (dependent on claim 11), all claim limitations are met and rendered obvious by Cohen in view of Lievens, Pathak, Jin, Yang, Powles, and Banerjee because the method steps of claim 12 are the same as the processes performed in claim 5.
Regarding claim 19 (dependent on claim 18), all claim limitations are met and rendered obvious by Cohen in view of Lievens, Pathak, Jin, Yang, Powles, and Banerjee because the method steps performed in claim 19 are the same as the processes performed in claim 5.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Bormann et al. (Bormann, R., Wang, X., Völk, M., Kleeberger, K., & Lindermayr, J. (2021, May). Real-time instance detection with fast incremental learning. In 2021 IEEE International Conference on Robotics and Automation (ICRA) (pp. 13056-13063). IEEE.) teaches a method where new object detector models are generated for new classes (abstract: “This paper introduces InstanceNet, an ensemble of efficient single-class instance detectors capable of fast and incremental adaptation to new object sets. Due to a dynamic sampling-based training strategy, accurate detection models for new objects can be obtained within less than 40 minutes on a consumer GPU”).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMA E DRYDEN whose telephone number is (571)272-1179. The examiner can normally be reached M-F 9-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW BEE can be reached on (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EMMA E DRYDEN/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677