DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to amendments filed November 19, 2025. The status of the claims is as follows: Claims 1, 3, 5, 7, 9, and 11 are amended; Claims 2 and 8 are cancelled; Claims 1, 3-7, and 9-12 are currently pending.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 5 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claim 5,
Claim 5 recites the limitation “the personal identification processing unit” in line 2. There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 7, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Krogmann (US5247584A) in view of Jänen et al. (“Multi-object tracking using feed-forward neural networks” [2010], hereinafter “Jänen”) and further in view of Liu et al. (CN113221671A, hereinafter “Liu”).
Regarding Claim 1,
Krogmann discloses a personal identification method using a computer comprising a plurality of deep neural network models, the method comprising: receiving a plurality of wireless signals including spatial information and identification information of an object to be identified, by means of a plurality of receivers in different positions, by a wireless signal collecting unit; (Krogmann [Column 3 Line 12]; “Numerals 10,12, and 14 designate three sensors which operate with radar, laser beams and infrared radiation. As indicated, further sensors different from the sensors 10,12 and 14 can be provided. The sensors 10,12, and 14 are image-detecting sensors. In the respective wavelength range they supply an image of the field of view with the object to be recognized. In this way, the sensors detect not only the contours of the object but different characteristics … Also the movement of the object in the field of view is detected: For example, it is determined whether the object moves quickly or slowly or not at all, rectilinearly or on a curved path. All of these characteristics can be used for identifying and classifying the object.”
Krogmann [Figure 1];
wherein the plurality of sensors transmitting spatially related signals to the neural network processing system thus reads on receiving a plurality of signals from multiple receivers located in different positions)
generating a manipulation signal from the plurality of wireless signals by processing the spatial information by means of a first deep neural network model of the plurality of deep neural network models, wherein the first deep neural network model is trained in advance; (Krogmann [Column 2 Line 19]; “According to the invention, the signal processing unit comprises sensor signal input means for sensors from a plurality of sensors responding to objects to be classified, and pairs of first neural networks, each of said pairs being associated with one of said signal sensor input means and arranged to receive information from said one of said sensor input means. A first neural network of each pair is trained to process a predetermined characteristic of said object to be classified. A second neural network of each pair is trained to process movement or special data. Said first neural networks provide feature vectors composed of detection and identification information specific for the respective ones of said sensors. Said second neural networks provide feature vectors composed of movement information specific for the respective ones of said sensors.” wherein the first neural networks generating feature vectors composed of detection, movement, and identification information reads on generation of a manipulation signal by processing the spatial information by means of a trained first neural network model; wherein the existence of pairs of neural networks reads on the first deep neural network model being of the plurality of deep neural network models)
and identifying the object to be identified in a specific space with identification information of the object to be identified of the manipulation signal as an input of a second deep neural network model of the plurality of deep neural network models. (Krogmann [Column 2 Line 35]; “Said feature vectors are applied to a third neural network. Said third neural network is adapted and trained to determine the associations of said feature vectors of said detection, identification and movement information to provide association information. Furthermore there are identifying and classifying means for identifying and classifying said object. Said identifying and classifying means are arranged to receive said association information and to provide final identification information” wherein the third neural network using the feature vectors comprising identification information to identify the object in the image space reads on identifying the object with a second deep neural network model, of the plurality of deep neural network models, that receives identification information of the object as an input
Krogmann [Column 3 Line 12]; “Numerals 10,12, and 14 designate three sensors which operate with radar, laser beams and infrared radiation. As indicated, further sensors different from the sensors 10,12 and 14 can be provided. The sensors 10,12, and 14 are image-detecting sensors. In the respective wavelength range they supply an image of the field of view with the object to be recognized. In this way, the sensors detect not only the contours of the object but different characteristics such as the temperature distribution over the surface of the object, the reflection factor for radar, radiation and its distribution or the distribution of the reflection factor for the laser beams. Also the movement of the object in the field of view is detected: For example, it is determined whether the object moves quickly or slowly or not at all, rectilinearly or on a curved path. All of these characteristics can be used for identifying and classifying the object” wherein the signals transmitted between sensors for utilization in a neural network read on such signals being wirelessly transmitted to a neural network)
Krogmann fails to explicitly disclose but Jänen discloses to compare different spatial information in order to maintain a common parameter having the same value (Jänen [Page 177 Section III Subsection A]; “A. Architecture of Object Labeling The object detector is assumed to detect the objects present in the actual frames. The features describing the objects are extracted afterwards. The object labeling consists of a pre-processing element, an ANN and a collator, see Fig. 2. The pre-processing element prepares the actual object features and the features of labeled objects so that they can be handled by the ANN. The ANN compares the object features of each ol and odt and evaluates the affinity of actual objects odt to tracked objects ol. The Collator conducts the results of the ANN.
B. Feature Extraction The label distributor uses low level features, because we want to design a generic label distributor. It can be assumed that the following features are retrievable by almost every detector type:
position p⃗ t in frame coordinate system (fcs)
object dimensions dt (width, height) in fcs
… Known objects ol have a history of features which can be considered as a First In - First Out Memory. Each time an object has been recognized, the actual features are stored. The history has to be set to a reasonable size N. Pre-Evaluations showed that a history of 10 slots per object provides good results. Additionally, known objects ol already have a label which will be reassigned and a time-to-live (TTL).” wherein the comparison of low level features including position and object dimensions of different objects with objects of the same label having the same features reads on a comparison of different spatial information to maintain a common parameter having the same value)
and remove an individual parameter having different values to generate the manipulation signal (Jänen [Page 178 Section III Subsection B]; “Additionally, known objects ol already have a label which will be reassigned and a time-to-live (TTL). The TTL is used to cleanup objects which have not been detected for a defined time. If the object ol has been detected in the actual frame the TTL is set to a maximum value TTLmax. Every time an object ol cannot be assigned to a detected object odt, the TTL will be decremented. When reaching zero, the object will be removed from the list of known objects Ol.” wherein the removal of objects with different features to produce the list of objects and their identified labels is performed)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Krogmann’s method of processing, with a neural network, spatial information of received wireless signals from receivers for identification of an object in a specific space, by comparing and adjusting the received spatial information depending on their values, as taught by Jänen. One would have been motivated to do so because “The features of actual detected objects and the features of already known respectively tracked objects have to be pre-processed in such way that they can be handled by the ANN. It makes sense to operate on comparison metrics of actual and known feature values … to smooth the feature values” (Jänen [Page 178 Section III Subsection C])
The combination of Krogmann/Jänen fails to explicitly disclose but Liu discloses wherein the wireless signals are each able to pass through a wall (Liu [Abstract]; “The invention discloses an environment-independent action recognition method and system based on gradients and wireless signals. The wireless signals are used to collect user action signal samples including multiple environments, an environment recognition deep neural network is trained, and the trained environment recognition network is used. The gradient of the input signal sample is calculated, and the gradient is multiplied by a weight and added to the original signal sample to reduce the influence of environmental interference. The processed signal sample can be used to train an environment-independent action classifier to realize action recognition. The invention also provides an action recognition system, which includes a signal acquisition module, an environment recognizer, a data processing module, an action recognizer, and the like.”
Liu [Detailed Description Step 1]; “The signal samples of the multi-action collected in a plurality of environments are collected, and each action is used for collecting data in each environment. For example, in a WiFi signal-based motion recognition system deployed in a laboratory, home, and classroom environment, if three motions "walking", "standing", and "falling" need to be recognized, a user needs to collect samples of walking, standing, and falling signals in the laboratory, home, and classroom environments, respectively.” wherein the wireless signal being a WiFi signal thus implicitly reads on the wireless signals being passable through a wall)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Krogmann/Jänen’s method of processing, with a neural network, spatial information of received wireless signals from receivers for identification of an object in a specific space, by ensuring that the wireless signals are WiFi signals capable of passing through walls, as taught by Liu. One would have been motivated to do so because “Conventional motion recognition methods, such as camera-based, device-carried, and sonar radar-based methods, have their own drawbacks, such as leakage of visual privacy, inconvenience of wearing the device, and narrow sensing range. To address the deficiencies of conventional motion recognition methods, wireless signals (e.g., WiFi signals) are used” (Liu [Background Paragraph 1])
Regarding Claim 3,
The combination of Krogmann/Jänen/Liu teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated). The combination already discloses determining whether the individual parameter is removed from the manipulation signal by means of a third deep neural network model of the plurality of deep neural network models (Jänen [Page 177 Figure 2];
Jänen [Page 177 Section III Subsection B]; “B. Feature Extraction
The label distributor uses low level features, because we want to design a generic label distributor. It can be assumed that the following features are retrievable by almost every detector type:
position p⃗ t in frame coordinate system (fcs)
object dimensions dt (width, height) in fcs
… Known objects ol have a history of features which can be considered as a First In - First Out Memory. Each time an object has been recognized, the actual features are stored. The history has to be set to a reasonable size N. Pre-Evaluations showed that a history of 10 slots per object provides good results. Additionally, known objects ol already have a label which will be reassigned and a time-to-live (TTL).”
Jänen [Page 178 Section III Subsection C]; “Position: Using the object history, the object position of ol is estimated by linear interpolation (timebase is framecount). The prediction is useful, because if an object has not been detected for some frames, the position will be interpolated in this way.” wherein the determination of removing the parameter based upon the position feature estimated through an ANN is performed)
Claims 7 and 9 recite an apparatus to perform the processes of Claims 1 and 3, respectively. Thus, Claims 7 and 9 are rejected for the reasons set forth in the rejections of Claims 1 and 3.
Claims 4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Krogmann (US5247584A) in view of Jänen et al. (“Multi-object tracking using feed-forward neural networks” [2010], hereinafter “Jänen”) in view of Liu et al. (CN113221671A, hereinafter “Liu”) and further in view of Cricrì et al. (US20190122072, hereinafter “Cricrì”).
Regarding Claim 4,
The combination of Krogmann/Jänen/Liu teaches the method of Claim 3 (and thus the rejection of Claim 3 is incorporated). The combination fails to explicitly disclose but Cricrì discloses wherein the first deep neural network model and the third deep neural network model are mutually trained by means of a generative adversarial network (GAN) (Cricrì [0018]; “Object detection comprises automatic analyzing of an image to determine whether an object of a certain object class such as chair, car, computer, etc. is present in the image. In many cases, that is not enough and there is a further need to re-detect i.e. re-identify an object i.e. a so-called target object from a second image after detecting it from a first image. This operation may be called as object re-identification. Object re-identification is a task of assigning unique identifying information (ID) to different views of the same object in at least two different images … Artificial neural networks may be used for first extracting features and second for classifying the extracted features to object classes and third perform object re-identifying. One approach for the analysis of data is deep learning. Deep learning is an area of machine learning which involves artificial neural networks. Deep learning typically involves learning of multiple layers of nonlinear processing units, either in supervised or in unsupervised manner. These layers form a hierarchy of layers, which represents the artificial neural network (also referred to just as neural network). Each learned layer extracts feature representations from the input data, where features from lower layers represent low-level semantics, and features from higher layers represent high-level semantics (i.e. more abstract concepts)”
Cricrì [0022]; “The training according to an embodiment may be performed by using a Generative Adversarial Network (GAN). A GAN is a special kind of deep neural network that allows generating consistent content samples. A GAN includes at least two neural networks: a generator and a discriminator that compete against each other, where the generator generates content and the discriminator distinguishes between generated and real content. In GANs, the discriminator neural network is used for providing the loss to the generator network, in a minimax-like game. Other variants of GANs exist, where, for example, the discriminator neural network is considered a critic, which does not classify between real and generated samples, but provides an estimate of the distance between the probability distributions of the real training data and the generated data” wherein object identification is performed using deep neural network models mutually trained as a discriminator and generator models by a generative adversarial network)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Krogmann/Jänen/Liu’s method of comparing and adjusting neural-network-processed spatial information of received wireless signals from receivers for identification of an object by performing the training with, specifically, a generative adversarial network (GAN), as taught by Cricrì. One would have been motivated to do so because “By providing generated object samples which are very similar to target object samples it is possible to make the task of the re-identification model difficult and thus train it more effectively” (Cricrì [0030])
Claim 10 recites an apparatus to perform the process of Claim 4. Thus, Claim 10 is rejected for reasons set forth in the rejection of Claim 4.
Claims 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Krogmann (US5247584A) in view of Jänen et al. (“Multi-object tracking using feed-forward neural networks” [2010], hereinafter “Jänen”) in view of Liu et al. (CN113221671A, hereinafter “Liu”) and further in view of Ribalta et al. (US20210397943, hereinafter “Ribalta”).
Regarding Claim 5,
The combination of Krogmann/Jänen/Liu teaches the method of Claim 3 (and thus the rejection of Claim 3 is incorporated). The combination does not explicitly disclose but Jänen further discloses wherein the personal identification processing unit feeds back a personal identification accuracy which is an identification result of the object to be identified to the manipulation signal generating unit according to a predetermined period (Jänen [Page 180 Section VI]; “In this chapter the robustness of the developed label distributor will be examined with regard to the effect of the detection rate in several videos. We used two benchmark videos [10] and [11]. The first one displays a soccer game. The second video is a scene of a mall, see Fig. 6 and Fig. 7. To test the video with different detection rates it is necessary to use the ground truth of the video data. The ground truth data also contains objects which cannot be seen by the camera, e.g. they are occluded. Therefore we wanted to evaluate the videos at real constraints, the objects which cannot be detected by a HOG-Detector [1] were not used. A failure of the label distributor is defined as a change of the label of an object while it is present in the scene. For the soccer game and the mall video we used the parameters listed in Tab. II and Tab. III. A. Exemplary Result To demonstrate the capacity of this algorithm we selected a scene of the mall video [11] exemplarily. During the first 300 frames two persons revolve around each other, see Fig. 8, and the labels were assigned correctly. B. Dependency to the Detection Rate In this sub-section we evaluate the dependency of detection rate on the success of the label distributor. Because we used the ground truth data, we are able to evaluate several detection rates. We evaluated a probabilistic detection rate of 100% decreasing to 10% in decrements of 10% on the mall [11] and the soccer [10] video. 
At a frame rate of 25fps and a detection probability of 20%, the object will be detected in every fifth frame by average. To investigate the dependence, the ratio between all correct assignments (through the whole video) and the sum of all detected objects is considered. It is shown in Fig. 9 that the labeling rates are very high with more than 99%, even at a detection rate of 40%.” wherein the probabilistic detection rate for each object, based on the labels distributed through the Feature Extraction label distribution ANN, reads on a personal identification accuracy, an identification result of the object to be identified, being fed back to the generated manipulation signal.
Jänen [Page 179 Section IV]; “There are many training algorithms for disposal, e.g. Backpropagation, Resilient Propagation, Quick Propagation etc. These algorithms do not have a significant effect on the quality of the results of the training, rather on the time required for training. Training neuronal networks is a trial and error process based on experience. We selected Backpropagation as online variant, which means the network will be adapted after each training datum. The training is done on three self-made videos. Two of them displaying a floor with two persons.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Krogmann/Jänen/Liu’s method of comparing and adjusting neural-network-processed spatial information of received wireless signals from receivers for identification of an object by feeding back the personal identification accuracy of each identified object. One would have been motivated to do so because, by receiving the personal identification accuracy of each identified object, one is able to “evaluate the dependency of detection rate on the success of the label distributor. Because we used the ground truth data, we are able to evaluate several detection rates” (Jänen [Page 181 Section VI Subsection B])
The combination of Krogmann/Jänen/Liu does not explicitly disclose but Ribalta discloses the first deep neural network model is trained so as not to process the identification information of the object to be identified when the personal identification accuracy is continuously reduced during a predetermined reference period (Ribalta [0075]; “ In at least one embodiment, neural network model was trained using an Adam optimizer employing an initial learning rate of 10.sup.−3, with β0=0.9 and β1=0.999 … using early stopping, terminating training after five iterations without decrease of validation loss”
Ribalta [0143]; “In at least one embodiment, DLA(s) may quickly and efficiently execute neural networks, especially CNNs, on processed or unprocessed data for any of a variety of functions, including, for example and without limitation: a CNN for object identification and detection using data from camera sensors” wherein the neural network trained with the early stopping regularization technique reads on training a neural network such that the object identification information ceases to be processed once the validation loss stops decreasing, i.e., once the personal identification accuracy is no longer improving)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Krogmann/Jänen/Liu’s method of comparing and adjusting neural-network-processed spatial information of received wireless signals from receivers for identification of an object to incorporate Ribalta’s “early stopping” regularization technique so as not to process object identification information when the personal identification accuracy is continuously reduced. One would have been motivated to do so to ensure that, at some point during Krogmann/Jänen/Liu’s model training, “convergence was detected” (Ribalta [0075]) and the model does not continue training past the point where further iterations no longer significantly impact parameter updates.
Claim 11 recites an apparatus to perform the process of Claim 5. Thus, Claim 11 is rejected for reasons set forth in the rejection of Claim 5.
Claims 6 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Krogmann (US5247584A) in view of Jänen et al. (“Multi-object tracking using feed-forward neural networks” [2010], hereinafter “Jänen”) in view of Liu et al. (CN113221671A, hereinafter “Liu”) and further in view of Marks et al. (US20220405317A1, hereinafter “Marks”).
Regarding Claim 6,
The combination of Krogmann/Jänen/Liu teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated). Krogmann/Jänen/Liu does not explicitly disclose but Marks discloses wherein the wireless signal is an ultra-wideband (UWB) signal which is an ultra-wideband wireless signal (Marks [0186]; “ At 804, the handheld remote control device can determine position information descriptive of the environment. The position information can include depth information and/or distance information describing a depth of objects and/or features relative to a fixed position and/or distances between objects or features in the environment. In example embodiments, the position information can be determined using image recognition processing and/or image segmentation. In some implementations, the system may use ultra-wideband technology to determine the position information. The position information can be descriptive of the position of objects or features within the environment. By way of example, depth and/or spatial information associated with objects or features within the environment may be determined based on image data. The depths may be determined using image segmentation and/or other image analysis techniques. The depths can be determined using additional inertial data generated by one or more inertial sensors in some examples. As another example, UWB data may be determined at 804 that represents depth and/or spatial information of the environment” wherein determination of position information of identified objects or features within an environment through ultra-wideband sensor data reads on the wireless signal being an ultra-wideband signal)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Krogmann/Jänen/Liu’s method of using a neural network to process spatial information of received wireless signals from receivers for identification of an object to use Marks’ ultra-wideband wireless signals for the receiver transmissions. One would have been motivated to do so because “UWB data can be descriptive of a distance between a handheld remote control device and each of the plurality of controllable devices” (Marks [0058]).
Claim 12 recites an apparatus to perform the process of Claim 6. Thus, Claim 12 is rejected for reasons set forth in the rejection of Claim 6.
Response to Arguments
The Examiner acknowledges the Applicant’s amendments, in which Claims 1, 3, 5, 7, 9, and 11 are amended.
Applicant’s arguments filed November 19, 2025, traversing the rejection of claims 1, 3-7, and 9-12 under 35 U.S.C. § 112(a) have been fully considered and are persuasive.
Applicant’s arguments filed November 19, 2025, traversing the rejection of claims 1, 3-7, and 9-12 under 35 U.S.C. § 112(b) have been fully considered and are persuasive. However, the amendments have raised a new issue under 35 U.S.C. § 112(b) due to lack of antecedent basis in Claim 5, specifically regarding “the personal identification processing unit”.
Applicant’s arguments filed November 19, 2025, traversing the rejection of claims 1, 3-7, and 9-12 under 35 U.S.C. § 101 have been fully considered and are persuasive.
Applicant’s arguments regarding the rejections of claims 1, 3-7, and 9-12 under 35 U.S.C. § 102(a)(1) and 35 U.S.C. § 103 in the previous Office action have been fully considered but are not persuasive.
Applicant argues on page 7 of the remarks that the disclosure of Jänen in view of Krogmann cannot teach or suggest processing signals from multiple receivers each located in a different position, as now recited in the claims.
Examiner respectfully disagrees. Examiner determined that primary reference Krogmann already discloses processing signals from multiple receivers each located in a different position (Krogmann [Figure 1];
wherein the plurality of sensors transmitting spatially related signals to the neural network processing system thus reads on receiving a plurality of signals from multiple receivers located in different positions)
Thus, the combination of Krogmann/Jänen/Liu discloses a neural network trained to compare spatial information from a plurality of wireless signals respectively received by a plurality of receivers each having a different position, in order to maintain a common parameter having the same value in each wireless signal and to remove an individual parameter having different respective values in the wireless signals. As such, Claim 1 is no longer rejected under 35 U.S.C. § 102(a)(1) but is instead rejected under 35 U.S.C. § 103.
The rejection of Claim 1 under 35 U.S.C. § 103 has been maintained. Similarly, the rejection of Claim 7 under 35 U.S.C. § 103 has been maintained.
The rejections under 35 U.S.C. § 103 of Claims 3-6, which depend directly or indirectly from Claim 1, have been maintained.
The rejections under 35 U.S.C. § 103 of Claims 9-12, which depend directly or indirectly from Claim 7, have been maintained.
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN J KIM whose telephone number is (571)272-0523. The examiner can normally be reached 8-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Ell, can be reached on (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN J KIM/Examiner, Art Unit 2141
/MATTHEW ELL/Supervisory Patent Examiner, Art Unit 2141