DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of the claim of priority to international application No. PCT/US23/63969, filed 03/08/2023. Copies of the certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 365 and 37 CFR 1.78.
Information Disclosure Statement
The information disclosure statements (IDSs) dated 07/08/2025, 08/11/2025, and 10/10/2025 have been considered and placed in the application file.
Response to Arguments
The rejection under 35 U.S.C. 112(b) has been withdrawn in light of the Applicant’s amended claims.
Applicant’s arguments with respect to claim(s) 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Interpretation
Claims 3 and 16 recite the limitation “the remote device is operated by a remote fleet operator”. For examination purposes, the examiner will interpret “remote fleet operator” as a fleet manager, i.e., a person who is monitoring the stream.
Claims 7, 11, and 13 recite “viewport”. For examination purposes, the examiner will interpret “viewport” as a display device.
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“wherein the method is performed by at least one of a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for 3D assets;
a system for performing deep learning operations;
a system for performing real-time streaming;
a system for presenting one or more of augmented reality content, virtual reality content, or mixed reality content;
a system for performing conversational AI operations;
a system for generating synthetic data” in claim 10.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4, 6-7, 10-11, 13, 15-17, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ueda (US 20190361436 A1).
Regarding claim 1, Ueda discloses a method comprising: issuing a remote communication to an ego-object in a physical environment from a remote device in a remote location outside the physical environment (Ueda, paragraph [0066], “Furthermore, wireless LAN router 2a receives a signal transmitted from remote control device 50 to autonomous vehicle control device 10 via router 2d of remote monitoring center 5 and the Internet 2c, and transmits the signal to autonomous vehicle control device 10”),
receiving, from the ego-object, using the remote device, a stream of data representative of the physical environment (Ueda, paragraph [0097], "Remote control device 50 receives the sensed data (S20), generates a monitoring picture based on the sensed data, and displays the monitoring picture on display 54 (S21)."), the stream of data being generated using sensor data that was captured by one or more sensors of the ego-object based at least on the remote communication (Ueda, paragraph [0070], “Visible light cameras 21 are placed in at least four positions in front, rear, right and left of the vehicle. The front picture, the rear picture, the left picture, and the right picture shot by these four visible light cameras 21 are combined to generate a bird's eye picture.”),
and causing a display visible to a remote operator of the remote device to present a visualization of the data stream representative of the physical environment (Ueda, paragraph [0097], "Remote control device 50 receives the sensed data (S20), generates a monitoring picture based on the sensed data, and displays the monitoring picture on display 54 (S21).").
Regarding claim 2, Ueda discloses the method of claim 1, wherein the stream of data representative of the physical environment visualized on the display visible to the remote operator of the remote device comprises, for at least one time slice of one or more time slices: (i) a projection or a stitched image of the physical environment, (ii) a three-dimensional (3D) model of the physical environment, or (iii) a surround view visualization generated using the 3D model (Ueda, paragraph [0071], "Furthermore, a three-dimensional modeling picture of the surrounding of the vehicle can be generated.").
Regarding claim 4, Ueda discloses the method of claim 1, wherein the remote communication from the remote device designates a view, and the data stream representative of the physical environment comprises, for at least one time slice of one or more time slices: a surround view visualization of the physical environment generated by the ego-object (Ueda, paragraph [0071], "By placing a plurality of LIDARs 22 or mobile LIDAR 22, the moving speed of the object can also be measured. Furthermore, a three-dimensional modeling picture of the surrounding of the vehicle can be generated.") using the view designated by the remote communication (Ueda, paragraph [0124], "Remote control device 50 receives the sensed data (S20), generates a monitoring picture based on the sensed data, and displays the monitoring picture on display 54 (S21)").
Regarding claim 6, Ueda discloses the method of claim 1, wherein the data stream representative of the physical environment received by the remote device comprises a view, rendered by the ego-object, of a projection of LiDAR or RADAR data captured by the ego-object (Ueda, paragraph [0071], "By placing a plurality of LIDARs 22 or mobile LIDAR 22, the moving speed of the object can also be measured. Furthermore, a three-dimensional modeling picture of the surrounding of the vehicle can be generated.").
Regarding claim 7, Ueda discloses the method of claim 1, wherein the data stream representative of the physical environment comprises a visualization of the environment rendered through a viewport directed to view a salient event detected in the physical environment by the ego-object (Ueda, paragraph [0098], "When an event requiring an emergency stop occurs (Y in S11), autonomous vehicle control device 10 stops autonomous vehicle 1 (S12), and transmits an emergency stop signal to remote control device 50 via network 2 (S13). Also after the emergency stop, autonomous vehicle control device 10 continues to transmit the sensed data sensed by sensing unit 20 to remote control device 50 (S14).").
Regarding claim 10, Ueda discloses the method of claim 1, wherein the method is performed by at least one of a control system for an autonomous or semi-autonomous machine;
a perception system for an autonomous or semi-autonomous machine;
a system for performing simulation operations;
a system for performing digital twin operations;
a system for performing light transport simulation;
a system for performing collaborative content creation for 3D assets;
a system for performing deep learning operations;
a system for performing real-time streaming;
a system for presenting one or more of augmented reality content, virtual reality content, or mixed reality content;
a system implemented using an edge device;
a system implemented using a robot;
a system for performing conversational AI operations;
a system for generating synthetic data;
a system incorporating one or more virtual machines (VMs);
a system implemented at least partially in a data center;
or a system implemented at least partially using cloud computing resources (Ueda, paragraph [0097], "Autonomous vehicle control device 10 transmits sensed data sensed by sensing unit 20 to remote control device 50 via network 2 (S10). Remote control device 50 receives the sensed data (S20), generates a monitoring picture based on the sensed data, and displays the monitoring picture on display 54 (S21).", the remote control device is a control system for an autonomous machine/vehicle).
Regarding claim 11, Ueda discloses one or more processors (Ueda, paragraph [0074], “As the hardware resource, a processor, ROM (Read-Only Memory), RAM (Random-Access Memory), and other LSI (Large-Scale Integration) can be employed. As the processor, CPU (Central Processing Unit), GPU (Graphics Processing Unit), DSP (Digital Signal Processor), and the like, can be employed”), comprising: one or more circuits (Ueda, paragraph [0358], “The vehicle (1) includes an imaging circuit (21) configured to shoot a surrounding in at least a traveling direction of the vehicle (1), and a wireless communication circuit (131a) configured to transmit an image shot by the imaging circuit (21)”) to:
issue, to an ego-object in a physical environment, from a remote device in a remote location outside the physical environment, a remote communication defining a viewport (Ueda, paragraph [0066], “Furthermore, wireless LAN router 2a receives a signal transmitted from remote control device 50 to autonomous vehicle control device 10 via router 2d of remote monitoring center 5 and the Internet 2c, and transmits the signal to autonomous vehicle control device 10”),
receive, from the ego-object, by the remote device, a stream of surround view visualizations rendered by the ego-object through the viewport defined by the remote communication (Ueda, paragraph [0071], "By placing a plurality of LIDARs 22 or mobile LIDAR 22, the moving speed of the object can also be measured. Furthermore, a three-dimensional modeling picture of the surrounding of the vehicle can be generated."),
and cause a display visible to an operator of the remote device to present the stream of surround view visualizations of the physical environment (Ueda, paragraph [0097], "Autonomous vehicle control device 10 transmits sensed data sensed by sensing unit 20 to remote control device 50 via network 2 (S10). Remote control device 50 receives the sensed data (S20), generates a monitoring picture based on the sensed data, and displays the monitoring picture on display 54 (S21).").
Regarding claim 13, Ueda discloses the one or more processors of claim 11, the one or more circuits further to: detect, by the remote device, a salient event in the physical environment using sensor data received in the stream from the ego-object during a prior time slice (Ueda, paragraph [0090], "When an event requiring the emergency stop occurs, transmission data amount adjuster 114 controls such that all types of sensed data sensed by sensing unit 20 are transmitted to remote control device 50. Therefore, picture data are also included in the transmission target. In addition, when an event requiring the emergency stop occurs, transmission data amount adjuster 114 controls such that picture data with the highest picture quality is transmitted to remote control device 50"),
orient the viewport toward the salient event detected in the physical environment by the remote device; and issue the remote communication directing the ego-object to render the surround view visualizations through the viewport (Ueda, paragraph [0094], Fig. 6b below, "Picture generator 511 generates a picture to be displayed on display 54, based on sensed data received from autonomous vehicle control device 10, and two-dimensional or three-dimensional map data", the camera faces the salient event).
[Reproduction of Ueda, Fig. 6B]
Claim 15 corresponds to claim 10, additionally reciting one or more processors (Ueda, paragraph [0074], “As the hardware resource, a processor, ROM (Read-Only Memory), RAM (Random-Access Memory), and other LSI (Large-Scale Integration) can be employed. As the processor, CPU (Central Processing Unit), GPU (Graphics Processing Unit), DSP (Digital Signal Processor), and the like, can be employed”). Thus, claim 15 is rejected for the same reasons of anticipation as claim 10.
Claims 16-17 and 20 correspond to claims 1-2 and 10, respectively, additionally reciting a system (Ueda, paragraph [0063], “FIG. 1 is a diagram showing an entire configuration of a remote self-driving system according to the first exemplary embodiment of the present disclosure”), comprising:
one or more processors (Ueda, paragraph [0074], “As the hardware resource, a processor, ROM (Read-Only Memory), RAM (Random-Access Memory), and other LSI (Large-Scale Integration) can be employed. As the processor, CPU (Central Processing Unit), GPU (Graphics Processing Unit), DSP (Digital Signal Processor), and the like, can be employed”). Thus, claims 16-17 and 20 are rejected for the same reasons of anticipation as claims 1-2 and 10 respectively.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 3 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Ueda (US 20190361436 A1) in view of Maestas (US 9315152 B1).
Regarding claim 3, Ueda discloses the method of claim 1, wherein the ego-object is a vehicle, the remote operator is a remote fleet operator (Ueda, paragraph [0095], “In the remote self-driving system according to this exemplary embodiment, it is supposed that a user (hereinafter, referred to as a monitor or a monitoring person) of remote control device 50 determines an action of restarting driving after autonomous vehicle 1 makes an emergency stop by, and autonomous vehicle control device 10 determines the other actions of autonomous vehicle 1 autonomously in principle.”).
While Ueda discloses the data stream representative of the physical environment visualized on the display visible to the remote fleet operator comprises a first video feed from outside the vehicle (Ueda, paragraph [0174], “Furthermore, visible light cameras 21 may be placed in four places, i.e., a front part, a rear part, a left part and a right part of the vehicle. In this case, a front picture, a rear picture, a left picture, and a right picture shot by visible light cameras 21 are combined, and thereby a bird's eye picture/omnidirectional picture can be generated”), Ueda does not teach “a second video feed from inside a cabin of the vehicle”.
However, Maestas teaches a second video feed (Maestas, Col. 7, Line 10-12, “The remote receiver 66, under control of the remote processor 62, is configured to receive the collected data transmitted by the cabin transmitter 42 described above”) from inside a cabin of the vehicle (Maestas, Col. 5, Line 24-29, “The cabin camera 28 and cabin microphone 32 are in data communication with the cabin processor 26 such that the cabin processor 26, when executing respective programming instructions, selectively causes the collection of video and audio data from inside the vehicle cabin”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to implement a remote video feed that captures activity from inside the cabin of Ueda’s vehicle, as taught by Maestas.
The suggestion/motivation for doing so would have been to record incidents that may happen within a car, such as theft or vandalism.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Ueda in view of Maestas to obtain the invention as specified in claim 3.
Claim 18 corresponds to claim 3, additionally reciting the system of claim 16 (Ueda, paragraph [0063], “FIG. 1 is a diagram showing an entire configuration of a remote self-driving system according to the first exemplary embodiment of the present disclosure”). Thus, claim 18 is rejected for the same reasons of obviousness as claim 3.
Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Ueda (US 20190361436 A1) in view of Macros HIDO on YouTube (https://www.youtube.com/watch?v=bPe0_DHlj4A).
Regarding claim 5, Ueda discloses the method of claim 1.
Ueda does not teach “wherein the data stream representative of the physical environment comprises directional audio steered by the ego-object toward a direction: i) associated with a salient event detected in the physical environment by the ego-object, or ii) designated by the remote communication”.
However, Macros HIDO on YouTube teaches wherein the data stream representative of the physical environment comprises directional audio steered by the ego-object toward a direction: i) associated with a salient event detected in the physical environment by the ego-object, or ii) designated by the remote communication (Macros HIDO, 0:04, screenshot below; the audio in the video plays when the car is closing in on a nearby car).
[Screenshot of the Macros HIDO video at 0:04]
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to implement a brake warning sound when Ueda’s vehicle is closing in on a nearby car, as taught by Macros HIDO.
The suggestion/motivation for doing so would have been to provide greater alertness and better safety for the driver.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Ueda in view of Macros HIDO to obtain the invention as specified in claim 5.
Claim 12 corresponds to claim 5, additionally reciting the one or more processors of claim 11 (Ueda, paragraph [0074], “As the hardware resource, a processor, ROM (Read-Only Memory), RAM (Random-Access Memory), and other LSI (Large-Scale Integration) can be employed. As the processor, CPU (Central Processing Unit), GPU (Graphics Processing Unit), DSP (Digital Signal Processor), and the like, can be employed”). Thus, claim 12 is rejected for the same reasons of obviousness as claim 5.
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Ueda (US 20190361436 A1) in view of Wang (US 20170330034 A1).
Regarding claim 8, Ueda discloses the method of claim 1.
Ueda does not teach “wherein the visualization of the data stream representative of the physical environment comprises a three-dimensional augmented reality or virtual reality representation of the physical environment rendered on the remote device”.
However, Wang teaches wherein the visualization of the data stream representative of the physical environment comprises a three-dimensional augmented reality or virtual reality representation of the physical environment rendered on the remote device (Wang, paragraph [0016], "The augmented image is transmitted to the first autonomous vehicle, where the augmented image is to be displayed on a display device within the autonomous vehicle in a virtual reality manner").
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to implement a virtual reality display unit into Ueda’s invention that represents its captured environment in a virtual setting, as taught by Wang.
The suggestion/motivation for doing so would have been to simplify the live stream video footage of what the cameras are capturing, resulting in easier understanding of the vehicle’s surroundings.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Ueda in view of Wang to obtain the invention as specified in claim 8.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Ueda (US 20190361436 A1) in view of Kim (US 20200005649 A1).
Regarding claim 9, Ueda discloses the method of claim 1.
Ueda does not teach “wherein the remote communication is a command summoning the ego-object or instructing the ego-object to self-park, and the data stream representative of the physical environment comprises a video feed generated by the ego-object while self-maneuvering in response to the remote communication”.
However, Kim teaches wherein the remote communication is a command summoning the ego-object or instructing the ego-object to self-park, and the data stream representative of the physical environment comprises a video feed generated by the ego-object while self-maneuvering in response to the remote communication (Kim, paragraph [0110], Fig. 3 below, "The PAPS may start automated parking maneuver in response to the selection of the slot. Driver/operator information may be “system is controlling obstacles along the path.” For example, the display inside the vehicle may graphically display a video which shows the vehicle, the parking lines and obstacles").
[Reproduction of Kim, Fig. 3]
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to implement self-parking software into Ueda’s vehicle and to display its surroundings when parking, as taught by Kim.
The suggestion/motivation for doing so would have been to reduce the stress of drivers who struggle with parking and to allow those who are physically unable or weak to park more easily.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Ueda in view of Kim to obtain the invention as specified in claim 9.
Claim 19 corresponds to claim 9, additionally reciting the system of claim 16 (Ueda, paragraph [0063], “FIG. 1 is a diagram showing an entire configuration of a remote self-driving system according to the first exemplary embodiment of the present disclosure”). Thus, claim 19 is rejected for the same reasons of obviousness as claim 9.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Ueda (US 20190361436 A1) in view of Myhre (US 20190377181 A1).
Regarding claim 14, Ueda discloses the one or more processors of claim 11.
Ueda does not teach “wherein the display visible to the operator of the remote device is an augmented or virtual reality (AR/VR) headset, the one or more circuits further to cause the AR/VR headset to present a three-dimensional representation of the surround view visualizations of the physical environment”.
However, Myhre teaches wherein the display visible to the operator of the remote device is an augmented or virtual reality (AR/VR) headset, the one or more circuits further to cause the AR/VR headset to present a three-dimensional representation of the surround view visualizations of the physical environment (Myhre, paragraph [0005], “An electronic device such as a head-mounted device may have one or more near-eye displays that produce images for a user. The head-mounted device may be a pair of virtual reality glasses or may be an augmented reality headset that allows a viewer to view both computer-generated images and real-world objects in the viewer's surrounding environment”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to use an augmented reality headset to view the surroundings of Ueda’s vehicle, as taught by Myhre.
The suggestion/motivation for doing so would have been to allow the remote operator to experience the drive in real time and gain a better understanding of the driving environment.
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.
Therefore, it would have been obvious to combine Ueda in view of Myhre to obtain the invention as specified in claim 14.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WAYNE ZHANG whose telephone number is (571) 272-0245. The examiner can normally be reached Monday-Friday 10:00-6:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Sumati Lefkowitz can be reached on (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WAYNE ZHANG/Examiner, Art Unit 2672
/SUMATI LEFKOWITZ/Supervisory Patent Examiner, Art Unit 2672