DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the Applicant’s Amendment/Remarks filed on November 13, 2025. Claims 3, 5, 9, 12, 14 and 18 have been amended; claims 1-20 are currently pending in the instant application.
Response to Arguments
Applicant's arguments filed November 13, 2025 have been fully considered but they are not persuasive.
35 U.S.C. § 103 Rejections
Regarding claim 1.
Applicant’s argument: Applicant argues on page 12 of the remark that “Claim 1 further recites, "accessing, by the computing device, selection criteria that is configured to select a portion of the training data." On page 3, the Office points to Banerjee's FIG. 4, which shows a block diagram for distilling n-element vectors from output values or elements of convolutional neural network (CNN) 410. The Office appears to equate Applicant's claimed "selection criteria" with values associated with facial features encoded in the n-elements vectors, selected for training GAN 460. However, these do not seem to be criteria configured to select a portion of the training dataset that Banerjee is trained on initially. That is, claim 1 recites that a portion of the training data is selected by applying the selection criteria to the training data previously introduced, but Banerjee feeds back to GAN 460 the n-element vectors derived from the CNN 410 output as new training data. Banerjee does this to "improve the detection of features and accuracy of inferences made by the CNN" (Banerjee, paragraph [0049]), but not to select a portion of the initial training dataset.
Rapowitz does not supply the teachings missing from Banerjee. Therefore, their combination, however motivated, fails to render obvious claim 1”.
Examiner’s response: Examiner respectfully disagrees with the argument because the combination of Banerjee and Rapowitz fairly discloses the highlighted claim limitations (Banerjee, see pars. [0037], [0049] and [0053]). Banerjee discloses that a training dataset can include facial expression data, facial data, image data, audio data, physiological data, and so on. Weights can be trained on a set of layers for deep learning by applying a known good or “training” data set (Banerjee, see par. [0075]).
Examiner notes “The Office appears to equate Applicant's claimed "selection criteria" with values associated with facial features encoded in the n-elements vectors, selected for training GAN 460” because the claimed terms “selection criteria” and “portion of the training data” are broad. Applicant does not define what the selection criteria are or how the portion of the training data is selected. If Applicant further defines or clarifies these terms, the current art rejections may be overcome. Therefore, the argument is not persuasive.
Regarding claims 11 and 20: these claims are similar to claim 1 (please see the response to the argument regarding claim 1 above).
Regarding claims 2-4, 6-8, 10-13, 15-17 and 19: these claims depend from either independent claim 1 or independent claim 11. Therefore, for the reasons stated above and in the detailed action presented below, the rejections from the first Office Action are maintained.
Claims 5, 9, 14 and 18 contain allowable subject matter (please see the detailed Office Action below).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 10-11, 13 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Banerjee et al. (US 20210201003 A1) in view of Rapowitz et al. (US 20240386138 A1).
Regarding claim 1. Banerjee discloses a computer-implemented method, comprising:
accessing, by a computing device, training data that includes attributes (Banerjee, see at least par. [0035], A generative adversarial network can be based on two neural networks which can compete with one another as part of a machine learning technique. The two neural networks can include a generator network and a discriminator network. The GAN can be presented with a training dataset. The GAN learns from the training dataset in order to generate new data or “synthetic data” candidates based on the training dataset. The synthetic data candidates include similar characteristics found within the training dataset. In a usage example, a training dataset includes facial element data such as facial expressions, intensities of expressions, positions of facial elements, etc., in images which can be used to train the GAN to generate synthetic data);
accessing, by the computing device, selection criteria that is configured to select a portion of the training data (Banerjee, see at least par. [0049] The weights of the CNN can be updated or trained in order to improve accuracy of inferences made by the CNN. The training of the CNN can include training for a given facial element or feature such as the smile, the eyebrow furrow, and so on. The block diagram 400 includes a feature selector 450. The feature selector can be used to select which of the plurality of values associated with the plurality of facial features encoded in the n-element vector can be used for training a component such as a GAN 460);
selecting, by the computing device, the portion of the training data by applying the selection criteria to the training data (Banerjee, see at least par. [0037] The facial images can include facial elements such as a facial expressions and intensities within the training dataset. The block diagram 302 can include a synthetic vector generator 342, which, for the purposes of updating discriminator weights, can be disabled or locked 344. The block diagram 302 includes a vector sampler 350 which can select a vector from the plurality of vector representations within the training dataset. The vector sampler 352 is disabled while the synthetic vector generator is locked 344. The block diagram 302 includes a discriminator 360.);
training, by the computing device, using machine learning, and using the portion of the training data, a model that is configured to generate given synthetic attributes of given synthetic personas (Banerjee, see at least par. [0076] Training data for a neural network is obtained, where the training data is processed on a machine learning system. The training data can include facial expression data, facial data, voice data, physiological data, and so on. Various components can be used for collecting the data, such as imaging components, microphones, sensors, and so on. The imaging components can include cameras, where the cameras can include a video camera, a still camera, a camera array, a plenoptic camera, a web-enabled camera, a visible light camera, a near-infrared (NIR) camera, a heat camera, and so on. The images and/or other data are used to train the neural network. The neural network can be trained for various types of analysis including image analysis, audio analysis, physiological analysis, and the like. The analysis can be performed on the neural network, where the neural network has been trained using machine learning. A deep learning neural network comprises layers, where each layer within the neural network includes nodes. The operation of the deep learning neural network can be modified or adapted by changing the values of weights associated with the nodes within each layer of the neural network. The changing of the weights associated with the nodes and layers within the neural network comprises retraining of the neural network. The retraining can be performed to improve the efficacy of the analysis for facial expressions, cognitive states, etc. The weights that are trained are deployed onto deep learning nodes of a device, such as a user device or a computing device, and the weights can be retrained over time or as necessary. The retraining can result from using further training data.);
receiving, by the computing device and from the model, the synthetic attributes of the synthetic personas (Banerjee, see at least par. [0032] FIG. 2 is a flow diagram for back-propagation. The flow 200, or portions thereof, can be implemented using one or more computers, processors, personal electronic devices, and so on. The flow 200 can be implemented using one or more neural networks. The flow 200 describes further training a GAN by generating synthetic vectors, evaluating an accuracy of the generated synthetic vectors, and improving the efficacy of the generation of the synthetic vectors by back-propagating an error function. The training the GAN can be based on facial elements of one or more people. In embodiments, the one or more people can be within one or more vehicles. The facial elements can comprise human drowsiness features. The back-propagating is based on synthetic data for neural network training using vectors. In the flow 200, the training a machine learning neural network further comprises using the one or more synthetic vectors 210. Discussed throughout, the synthetic vectors that can be generated can be based on classifying facial elements within facial images. The facial elements can include facial expressions such as smiles, frowns, smirks, or neutral expressions. The facial elements can include eyebrow furrows, head tilt, gaze direction, etc.).
Banerjee does not disclose providing, by the computing device and to the model, a request for synthetic attributes of synthetic personas. However,
Rapowitz discloses:
providing, by the computing device and to the model, a request for synthetic attributes of synthetic personas (Rapowitz, see at least par. [0056] For example, if user personal data was discovered at a user's blog, the computing device may post user synthetic personal data to the user's blog and/or to additional public sites, like webpages, friend's social media accounts, and/or the like. Further in this example, since the user personal data was found on user's own personal website, the computing device may request permission from the user to delete the blog post or part of the blog post where the user personal data was located. Additionally or alternatively, the generated synthetic personal data may be disseminated to a Tor, or dark web, network. Additionally or alternatively, the generated synthetic personal data may be disseminated to an intranet page.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and apparatus of Banerjee with providing, by the computing device and to the model, a request for synthetic attributes of synthetic personas, as provided by Rapowitz. The modification provides an improved system and method for creating synthetic personas that appear to correspond to actual people. To build a synthetic persona, a user may need to create the various synthetic attributes that make up a synthetic persona, thereby improving a user's privacy and security by monitoring an individual's or entity's publicly available personal data and then implementing measures to either remove or obfuscate the information (Rapowitz, see par. [0003]).
Regarding claim 2. Banerjee in view of Rapowitz discloses the method of claim 1, and Banerjee in view of Rapowitz further discloses wherein the attributes of the personas are facial images of the personas (Banerjee, see at least par. [0068] Layers of a deep neural network can include a bottleneck layer 700. A bottleneck layer can be used for a variety of applications such as identification of a facial portion, identification of an upper torso, facial recognition, voice recognition, emotional state recognition, and so on. The deep neural network in which the bottleneck layer is located can include a plurality of layers. The plurality of layers can include an original feature layer 710. A feature such as an image feature can include points, edges, objects, boundaries between and among regions, properties, and so on. The deep neural network can include one or more hidden layers 720. The one or more hidden layers can include nodes, where the nodes can include nonlinear activation functions and other techniques. The bottleneck layer can be a layer that learns translation vectors to transform a neutral face to an emotional or expressive face. In some embodiments, the translation vectors can transform a neutral sounding voice to an emotional or expressive voice. Specifically, activations of the bottleneck layer determine how the transformation occurs. A single bottleneck layer can be trained to transform a neutral face or voice to a different emotional face or voice. In some cases, an individual bottleneck layer can be trained for a transformation pair. At runtime, once the user's emotion has been identified and an appropriate response to it can be determined (mirrored or complementary), the trained bottleneck layer can be used to perform the needed transformation.).
Regarding claim 4. Banerjee in view of Rapowitz discloses the method of claim 1, and Banerjee in view of Rapowitz further discloses wherein the attributes of the persona are voices of the personas (Banerjee, see at least par. [0050] FIG. 5 is a system diagram for an interior of a vehicle 500. Vehicle manipulation can be accomplished based on training a machine learning system. The machine learning system can include a neural network, where the neural network can be trained using one or more training data sets. The datasets for a person in a vehicle can be obtained. The collected datasets can include video data, facial data such as facial element data, audio data, voice data, physiological data, and so on. Collected data and other data can be augmented with synthetic data for neural network training using vectors. Facial images are obtained for a neural network training dataset. Facial elements from the facial images are encoded into one or more vector representations of the facial elements. A generative adversarial network (GAN) generator is trained to provide one or more synthetic vectors based on the one or more vector representations).
Regarding claim 10. Banerjee in view of Rapowitz discloses the method of claim 1, and Banerjee in view of Rapowitz further discloses comprising: providing, by the computing device and to the model, a selection of a synthetic attribute of the synthetic attributes and a request to adjust a characteristic of the synthetic attribute; and based on the selection of the synthetic attribute and the request to adjust the characteristic of the synthetic attribute, receiving, by the computing device and from the model, an updated synthetic attribute (Rapowitz, see at least par. [0051] At step 340, if the computing device determines that the similarity value does satisfy the second threshold, the computing device may present the generated synthetic personal data to the user for approval. The computing device may use a GUI to display the synthetic personal data on the user's device. The GUI may present options for the user to approve, disapprove, adjust, or cancel the synthetic personal data. If the user disapproves the generated synthetic personal data, the computing device may revert to step 315 and generate different synthetic personal data. Additionally or alternatively, when the difference value and/or the similarity value do not satisfy their respective thresholds, the model may be retrained using the techniques described above with respect to FIG. 2. Additionally or alternatively, the computing device may modify the first and/or second thresholds according to user feedback, as described above. [0052] If the user approves the generated synthetic personal data at step 340, the computing device may disseminate the generated synthetic personal data at step 345. The generated synthetic personal data may be disseminated by publishing the generated synthetic personal data to the one or more public sites where the user's personal data was discovered and/or one or more public sites where the information was not originally published. 
The computing device may publish the synthetic personal data by submitting a comment, replying to a post, submitting a post, emailing or communicating with the publisher to request publication, or the like. The public sites may be located on the Internet, e.g., a webpage, a social media account, a news website, a blog post, or the like. Additionally or alternatively, the generated synthetic personal data may be disseminated to a Tor, or dark web, network, or an intranet page. Further, the user may select, via a GUI, which sites to post the synthetic personal data to).
Regarding claim 11. Banerjee discloses a system, comprising: one or more processors; and memory including a plurality of computer-executable components that are executable by the one or more processors (Banerjee, see FIG. 10 and par. [0084]), to perform the same steps as the computer-implemented method of claim 1. Therefore, claim 11 is rejected based on the same rationale as claim 1 set forth above and incorporated herein.
Regarding claim 13. The system of claim 13 performs the same steps as the method of claim 4. Therefore, claim 13 is rejected based on the same rationale as claim 4 set forth above and incorporated herein.
Regarding claim 19. The system of claim 19 performs the same steps as the method of claim 10. Therefore, claim 19 is rejected based on the same rationale as claim 10 set forth above and incorporated herein.
Regarding claim 20. Banerjee discloses one or more non-transitory computer-readable media of a computing device storing computer-executable instructions that upon execution cause one or more computers (Banerjee, see FIG. 10 and par. [0089]) to perform acts comprising the same steps as the computer-implemented method of claim 1. Therefore, claim 20 is rejected based on the same rationale as claim 1 set forth above and incorporated herein.
Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Banerjee et al. (US 20210201003 A1) in view of Rapowitz et al. (US 20240386138 A1), as applied to claims 2 and 11 above, and further in view of JOSEPH et al. (US 20230342487 A1).
Regarding claim 3. Banerjee in view of Rapowitz discloses the method of claim 2, and Banerjee in view of Rapowitz does not disclose wherein the selection criteria comprises color ranges of skin tone, color ranges of hair, color ranges of eyes, minimum frontal breadth, upper face height, height of forehead, face breadth, bigonia breadth, height of lower face, and total face height. However, JOSEPH discloses:
wherein the selection criteria comprises color ranges of skin tone, color ranges of hair, color ranges of eyes, minimum frontal breadth, upper face height, height of forehead, face breadth, bigonia breadth, height of lower face, and total face height (JOSEPH, see at least par. [0072] The features and/or attributes can include, for example, eye shape, eye color, eyelash length, nose shape, nose width, nose length, nose height, mount shape, mouth size, lip shape, lip size, lip color, arrangement of teeth, color of teeth, fillings, cheek shape, eyebrow shape, eyebrow color, eyebrow length, eyebrow width, hairstyle, hair color, hair thickness, hair type (e.g., straight or curly), baldness, facial hair style, facial hair color, facial hair thickness, skin tone (e.g., skin color), forehead size, face shape, head shape, jaw shape, or a combination thereof. In some examples, a synthesized face generated by the trained machine learning model(s) can look like a real human face, with correct features and proportions, but be unique from any of the human faces in the training data, and not be representative of any real person).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and apparatus of Banerjee with wherein the selection criteria comprises color ranges of skin tone, color ranges of hair, color ranges of eyes, minimum frontal breadth, upper face height, height of forehead, face breadth, bigonia breadth, height of lower face, and total face height, as provided by JOSEPH. The modification provides an improved system and method for creating synthetic personas that appear to correspond to actual people. To build a synthetic persona, a user may need to create the various synthetic attributes that make up a synthetic persona, thereby providing increased privacy and security for network-based interactive systems and/or other imaging systems, at least in part by providing alternate faces (e.g., synthesized faces) for persons depicted in image data. The imaging systems and techniques described herein provide increased immersion, and/or do not detract from immersion, compared to other privacy-enhancing techniques such as face blurring, face pixelization, covering faces with black boxes, covering faces with cartoon avatar faces, inpainting, or combinations thereof (JOSEPH, see par. [0047]).
Regarding claim 12. The system of claim 12 performs the same steps as the methods of claims 2 and 3. Therefore, claim 12 is rejected based on the same rationale as claims 2 and 3 set forth above and incorporated herein.
Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Banerjee et al. (US 20210201003 A1) in view of Rapowitz et al. (US 20240386138 A1), as applied to claim 1 above, and further in view of HELMINGER et al. (US 20210327038 A1).
Regarding claim 6. Banerjee in view of Rapowitz discloses the method of claim 1, but Banerjee in view of Rapowitz does not disclose additional selection criteria configured to select an additional portion of the training data. However,
HELMINGER discloses:
additional selection criteria that is configured to select an additional portion of the training data (HELMINGER, see pars. [0057]-[0066], if the model trainer 116 determines that there are no additional facial identities, then the method 500 continues to step 508, where the model trainer 116 determines if additional training iterations are to be performed for the portion of the ML model 210. For example, the model trainer 116 could train the portion of the ML model 210 for a given number of iterations. [0066] On the other hand, if the model trainer 116 determines that there are no additional facial identities, then at step 608, the model trainer 116 determines if additional training iterations are to be performed for the portion of the ML model 310. For example, the model trainer 116 could train the portion of the ML model 310 for a given number of iterations).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and apparatus of Banerjee with additional selection criteria that is configured to select an additional portion of the training data, as provided by HELMINGER. The modification provides an improved system and method for creating synthetic personas that appear to correspond to actual people. To build a synthetic persona, a user may need to create the various synthetic attributes that make up a synthetic persona, and the modification can be effectively utilized to change or modify facial identities in high-resolution (e.g., megapixel) images or frames of a video. The disclosed techniques also enable multiple facial identities to be interpolated to produce novel combinations of facial identities, without requiring separate training of a machine learning model for each desired combination of facial identities. Further, using the disclosed techniques, models can be learned without requiring labeled data, which permits the disclosed techniques to be applied to a broader class of images and videos relative to prior art techniques. These technical advantages represent one or more technological improvements over prior art approaches (HELMINGER, see par. [0009]).
Regarding claim 15. The system of claim 15 performs the same steps as the method of claim 6. Therefore, claim 15 is rejected based on the same rationale as claim 6 set forth above and incorporated herein.
Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Banerjee et al. (US 20210201003 A1) in view of Rapowitz et al. (US 20240386138 A1), further in view of HELMINGER et al. (US 20210327038 A1), as applied to claim 6 above, and further in view of ICKIN et al. (US 20230088561 A1).
Regarding claim 7. Banerjee in view of Rapowitz, and further in view of HELMINGER, discloses the method of claim 6, but does not disclose wherein the portion of the training data and the additional portion of the training data do not share any of the training data. However,
ICKIN discloses:
wherein the portion of the training data and additional portion of the training data do not share any of the training data (ICKIN, see at least par. [0073] In non-FL cases, i.e., cases where neither sharing of data nor sharing of neural weights is possible, some embodiments can nevertheless provide additional training data based on a known data distribution. The accuracy of an isolated NN model accuracy may thereby be improved.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and apparatus of Banerjee with wherein the portion of the training data and the additional portion of the training data do not share any of the training data, as provided by ICKIN. The modification provides an improved system and method for creating synthetic personas that appear to correspond to actual people. To build a synthetic persona, a user may need to create the various synthetic attributes that make up a synthetic persona; increasing the initial model accuracy and/or decreasing the required number of iterations of communication between collaborators can improve model accuracy, decrease training time, and/or reduce the network footprint needed for model training (ICKIN, see par. [0065]).
Regarding claim 16. The system of claim 16 performs the same steps as the method of claim 7. Therefore, claim 16 is rejected based on the same rationale as claim 7 set forth above and incorporated herein.
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Banerjee et al. (US 20210201003 A1) in view of Rapowitz et al. (US 20240386138 A1), further in view of HELMINGER et al. (US 20210327038 A1), as applied to claim 6 above, and further in view of Yang et al. (US 20180285839 A1).
Regarding claim 8. Banerjee in view of Rapowitz, and further in view of HELMINGER, discloses the method of claim 6, but does not disclose wherein the portion of the training data and the additional portion of the training data share some of the training data. However,
Yang discloses:
wherein the portion of the training data and additional portion of the training data share some of the training data (Yang, see at least par. [0003] The world of “Big Data” is full of many entities that do not particularly trust one another and compete directly but still benefit from mutual sharing of data. One such example of mutual benefit through data sharing is in the training of machine learning or AI modules. Machine learning applications improve with additional training data; thus, sharing of training data between parties improves the overall function of these modules. Despite the clear mutual benefit, where the parties do not have reason to trust one another, precautions must be taken.).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method and apparatus of Banerjee with wherein the portion of the training data and the additional portion of the training data share some of the training data, as provided by Yang. The modification provides an improved system and method for creating synthetic personas that appear to correspond to actual people. To build a synthetic persona, a user may need to create the various synthetic attributes that make up a synthetic persona, thereby providing a systematic way to allow different parties to share information and train AI models using the right data over the entire world. The proposed data management system utilizes blockchain technology to provide a public environment that engages different parties to share data and train AI models (Yang, see par. [0053]).
Regarding claim 17. The system of claim 17 performs the same steps as the method of claim 8. Therefore, claim 17 is rejected based on the same rationale as claim 8 set forth above and incorporated herein.
Allowable Subject Matter
Regarding claims 5, 9, 14 and 18. The prior art of record, Banerjee et al. (US 20210201003 A1) in view of Rapowitz et al. (US 20240386138 A1), as applied to claims 1 and 11 above, and further in view of Lin et al. (US 20220122306 A1), discloses the following limitations of claims 9 and 18:
for each attribute in the training data: determining a value of the characteristic of the attribute (Banerjee, see par. [0055]); based on the selection criteria, determining a range or threshold for a characteristic of the attributes (Lin, see at least par. [0063]); comparing the value of the characteristic to the range or threshold; and, based on comparing the value of the characteristic to the range or threshold, determining whether to select the attribute for inclusion in the portion of the training data (Lin, see at least par. [0058]).
However, the limitations:
wherein the training data includes images and selecting the portion of the training data by applying the selection criteria to the training data comprises: analyzing, by a training data selector, the images in the training data for characteristics of the attributes, one or more of the images each having an attribute in common; and determining, by the training data selector, the value of the characteristic of the attribute in each of the one or more images,
taken as a whole, render the claims patentably distinct over the prior art.
Claims 5 and 14 each distinguish over the prior art at least due to their dependencies.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIM THANH THI TRAN whose telephone number is (571)270-1408. The examiner can normally be reached Monday-Friday 8:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALICIA HARRINGTON, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KIM THANH T TRAN/Examiner, Art Unit 2615
/JAMES A THOMPSON/Primary Examiner, Art Unit 2615