DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on December 19, 2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Status of the Claims
This Office Action is in response to the claims filed on December 19, 2024.
Claims 1-11 have been presented for examination.
Claims 1-11 are currently rejected.
Claims 1-2, 5, and 7-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cherney et al. (U.S. Patent Publication Number 2019/0198015).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Cherney et al. (U.S. Patent Publication Number 2019/0198015) in view of Liu et al. (U.S. Patent Publication Number 2021/0151050).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Cherney et al. (U.S. Patent Publication Number 2019/0198015) in view of Liu et al. (U.S. Patent Publication Number 2021/0151050), further in view of Su et al. (U.S. Patent Publication Number 2024/0046931).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Cherney et al. (U.S. Patent Publication Number 2019/0198015) in view of Su et al. (U.S. Patent Publication Number 2024/0046931).
Claim Interpretation
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“an environment information acquiring part” in at least claim 1
“a verbalization part” in at least claims 1, 9, and 10
“a command acquiring part” in at least claims 1, 9, and 10
“a control part” in at least claims 1, 9, and 10
“a selection part” in at least claim 5
“a classifying part” in at least claim 7
“an indication part” in at least claim 8
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
Structure for “an environment information acquiring part,” which may be, for example, the image capturing device 40, which may be a monocular camera, is provided at page 25, lines 28-29.
Structure for “a verbalization part” is provided in at least Fig. 6, depicting the verbalization part to be part of controller 30.
Structure for “a command acquiring part” is provided in at least Fig. 6, depicting the command acquiring part to be part of controller 30.
Structure for “a control part” is provided in at least Fig. 6, depicting the control part to be part of controller 30.
Structure for “a selection part” is provided in at least Fig. 13, depicting the selection part to be part of controller 30.
Structure for “a classifying part” is provided in at least Fig. 15, depicting the classifying part to be part of controller 30.
Structure for “an indication part” is provided in at least Fig. 17, depicting the indication part to be part of controller 30.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The terms “relatively large” and “relatively small” in claims 5 and 7 are relative terms which render the claims indefinite. The terms “relatively large” and “relatively small” are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Specifically, neither the claims nor the instant specification provides parameters for quantifying the size of a model or defines ranges for measuring a model to be “relatively large” or “relatively small.”
Similarly, the term “rarity of a word” in claim 6 is a relative term which renders the claim indefinite. The term “rarity” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Specifically, the degree of “rarity” of a word may vary and neither the claims nor the instant specification provides a metric for measuring a word to be “rare.” Claim 6 further inherits the deficiencies of claim 5 from which it depends.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 5, and 7-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cherney et al. (U.S. Patent Publication Number 2019/0198015).
Regarding claim 1, Cherney discloses a work machine (Cherney Fig. 2 mobile construction machine 102) comprising:
an environment information acquiring part (Cherney Fig. 2 sensors 138) configured to acquire information about an environment surrounding the work machine; (Cherney ¶ 29 discloses “one or more sensors 138” including “sensors that sense environmental characteristic,” also see Fig. 2)
a verbalization part (Cherney ¶ 44 “Speech processing system 120”) configured to verbalize the information acquired by the environment information acquiring part in natural language; (Cherney ¶ 44 discloses “If sensor 138 is a geographic position sensor, and it senses that machine 102 is approaching that boundary, or has crossed the boundary, it again may trigger speech processing system 120 to play a synthesized message [i.e., verbalize the information] to operator 116 indicating that machine 102 is approaching, or has crossed, the geographic boundary [i.e., information acquired by the environment information acquiring part],” wherein the speech processing system “provides that to natural language understanding logic 178 which can generate a natural language understanding result,” see ¶ 32)
a command acquiring part (Cherney Fig. 1 speech processing system 126) configured to acquire a command from an operator of the work machine in natural language; and (Cherney ¶ 18 discloses that “operator 116 provides a voice command through a microphone in interface 114, machine 102 may send information representative of the received voice command to remote server computing system 108 which performs speech recognition and natural understanding on the voice input” by the speech processing system 126)
a control part (Cherney in at least Fig. 2 control system 152) configured to control movement of the work machine based on interpretation of the command acquired by the command acquiring part and the information verbalized by the verbalization part, (Cherney ¶ 7 discloses detecting a speech processing trigger and performing speech processing “such as speech recognition and natural language understanding,” such that “Control logic 181 can thus implement a control algorithm that is used to control machine 102 (or parts of machine 102) or other parts of the architecture 100 shown in FIG. 1, based upon a speech input by operator 116 or from another operator,” see ¶ 32, wherein the speech input generates “a natural language understanding result indicative of a semantic meaning [i.e., interpretation] of the speech input,” see ¶ 32.)
the interpretation being given by a predetermined language model. (Cherney ¶ 32 discloses providing speech input to “natural language understanding logic 178 [i.e., a predetermined language model] which can generate a natural language understanding result indicative of a semantic meaning of the speech input.” Also see at least ¶ 35 “Speech recognition logic and natural language understanding logic 176 and 178, respectively, generate outputs that can be provided to control logic 181.” Also see ¶ 37.)
Regarding claim 2, Cherney discloses the work machine according to claim 1, wherein:
the verbalization part is further configured to verbalize information of a working drawing in natural language, in addition to the information acquired by the environment information acquiring part. (Cherney ¶ 44 discloses “sensor signal may trigger speech processing system 120 to play a synthesized message for operator 116 alerting operator 116 his or her machine's proximity to the other machine or object.” For example, if the machine 102 “is only to operate within a certain geographic boundary” and the sensor 138 “senses that machine 102 is approaching that boundary, or has crossed the boundary [i.e., information of a working drawing], it again may trigger speech processing system 120 to play a synthesized message to operator 116.” Also see ¶ 54. One having ordinary skill in the art would recognize that the area within the geographic boundary constitutes a working drawing in accordance with Page 36 Lines 5-6 of the instant specification describing a “working drawing” to show a target object that the work machine is to work on.)
Regarding claim 5, Cherney discloses the work machine according to claim 1, further comprising:
a selection part configured to select (Cherney Fig. 2 speech processing trigger detector 168 “detects one or more [i.e., selecting] triggers that indicate that speech processing is to be performed”) the predetermined language model by selecting a first language model or a second language model, based on details of the command acquired by the command acquiring part, (Cherney ¶ 32 discloses “Speech recognition logic 176 illustratively performs speech recognition on a speech input received by control system 152,” wherein the “operator 116 provides a voice command through a microphone in interface 114, machine 102 may send information representative of the received voice command to remote server computing system 108 [i.e., selecting a predetermined language logic, i.e., model] which performs speech recognition,” see ¶ 18. One having ordinary skill in the art would recognize that sending information to be processed in speech processing system 126 of remote server computing system 108 instead of speech processing system 123 of remote user computing system 106 involves selecting one predetermined language model over another.)
wherein the first language model is relatively large and provided outside the work machine such that the first language model and the work machine are able to communicate with each other, and (Cherney Fig. 1 depicts that the speech processing system 126 is remote from [i.e., outside of] the mobile construction machine 102 and is accessed “whenever speech recognition (or another speech service) needs to be performed,” see ¶ 40)
the second language model is relatively small and incorporated in the work machine, and (Cherney Fig. 2 depicts on-board speech processing system 170, also see ¶ 46)
wherein the control part is further configured to control the movement of the work machine based on interpretation, given by the predetermined language model selected by the selection part, of the command acquired by the command acquiring part and the information verbalized by the verbalization part. (Cherney ¶ 7 discloses detecting a speech processing trigger and performing speech processing “such as speech recognition and natural language understanding,” such that “Control logic 181 can thus implement a control algorithm that is used to control machine 102 (or parts of machine 102) or other parts of the architecture 100 shown in FIG. 1, based upon a speech input by operator 116 or from another operator,” see ¶ 32)
Regarding claim 7, Cherney discloses the work machine according to claim 1, further comprising:
a classifying part (Cherney ¶ 31 “Speech processing trigger detector 168”) configured to classify the command acquired by the command acquiring part (Cherney ¶ 31 “Speech processing trigger detector 168 illustratively detects one or more triggers that indicate that speech processing is to be performed. For instance, trigger detector 168 may detect a voice command input by operator 116”) into: a first command for controlling the work machine to perform an urgent movement; a second command for controlling the work machine based on interpretation of the command given by a first language model; or a third command for controlling the work machine based on interpretation of the command given by a second language model, (Cherney ¶ 7 discloses detecting a speech processing trigger and performing speech processing “such as speech recognition and natural language understanding,” such that “Control logic 181 can thus implement a control algorithm that is used to control machine 102 (or parts of machine 102) or other parts of the architecture 100 shown in FIG. 1, based upon a speech input by operator 116 or from another operator,” see ¶ 32, wherein the speech input generates “a natural language understanding result indicative of a semantic meaning [i.e., interpretation] of the speech input,” see ¶ 32.)
wherein the first language model is relatively large and provided outside the work machine such that the first language model and the work machine are able to communicate with each other, and (Cherney Fig. 1 depicts that the speech processing system 126 is remote from [i.e., outside of] the mobile construction machine 102 and is accessed “whenever speech recognition (or another speech service) needs to be performed,” see ¶ 40)
the second language model is relatively small and incorporated in the work machine, and (Cherney Fig. 2 depicts on-board speech processing system 170, also see ¶ 46)
wherein the control part is further configured to control the movement of the work machine based on the command acquired by the command acquiring part based on a result of classification by the classifying part. (Cherney ¶ 7 discloses detecting a speech processing trigger and performing speech processing “such as speech recognition and natural language understanding,” such that “Control logic 181 can thus implement a control algorithm that is used to control machine 102 (or parts of machine 102) or other parts of the architecture 100 shown in FIG. 1, based upon a speech input by operator 116 or from another operator,” see ¶ 32)
Regarding claim 8, Cherney discloses the work machine according to claim 1, further comprising:
an indication part configured to send an indication in advance, to the operator of the work machine, about the movement of the work machine under control of the control part, when the command is acquired by the command acquiring part, (Cherney ¶ 105 discloses that “the control logic is configured to control the speech synthesis logic to generate, as the speech synthesis signal, a warning message” by using sensor input, which includes a voice command input by the operator, see ¶ 31, to “generate an audible, verbal warning or alert for operator 116.” One having ordinary skill in the art would recognize that a warning is an indication sent in advance.)
wherein, when permission is given by the operator of the work machine upon the sending of the indication, the control part controls the movement of the work machine based on the interpretation, given by the predetermined language model, of the command acquired by the command acquiring part and the information verbalized by the verbalization part. (Cherney ¶ 7 discloses detecting a speech processing trigger and performing speech processing “such as speech recognition and natural language understanding,” such that “Control logic 181 can thus implement a control algorithm that is used to control machine 102 (or parts of machine 102) or other parts of the architecture 100 shown in FIG. 1, based upon a speech input by operator 116 or from another operator,” see ¶ 32)
Regarding claim 9, Cherney discloses an operation assisting system comprising:
an environment information acquiring part (Cherney Fig. 2 sensors 138) configured to acquire information about an environment surrounding the work machine; (Cherney ¶ 29 discloses “one or more sensors 138” including “sensors that sense environmental characteristic,” also see Fig. 2)
a verbalization part (Cherney ¶ 44 “Speech processing system 120”) configured to verbalize the information acquired by the environment information acquiring part in natural language; (Cherney ¶ 44 discloses “If sensor 138 is a geographic position sensor, and it senses that machine 102 is approaching that boundary, or has crossed the boundary, it again may trigger speech processing system 120 to play a synthesized message [i.e., verbalize the information] to operator 116 indicating that machine 102 is approaching, or has crossed, the geographic boundary [i.e., information acquired by the environment information acquiring part]”)
a command acquiring part (Cherney Fig. 1 speech processing system 126) configured to acquire a command from an operator of the work machine in natural language; and (Cherney ¶ 18 discloses that “operator 116 provides a voice command through a microphone in interface 114, machine 102 may send information representative of the received voice command to remote server computing system 108 which performs speech recognition and natural understanding on the voice input” by the speech processing system 126)
a control part (Cherney in at least Fig. 2 control system 152) configured to control movement of the work machine based on interpretation of the command acquired by the command acquiring part and the information verbalized by the verbalization part, by a predetermined language model. (Cherney ¶ 7 discloses detecting a speech processing trigger and performing speech processing “such as speech recognition and natural language understanding,” such that “Control logic 181 can thus implement a control algorithm that is used to control machine 102 (or parts of machine 102) or other parts of the architecture 100 shown in FIG. 1, based upon a speech input by operator 116 or from another operator,” see ¶ 32, wherein the speech input is provided to “natural language understanding logic 178 [i.e., a predetermined language model] which can generate a natural language understanding result indicative of a semantic meaning [i.e., interpretation] of the speech input,” see ¶ 32.)
Regarding claim 10, Cherney discloses an information processing device comprising:
a verbalization part (Cherney Fig. 1 speech processing system 126) configured to verbalize information about an environment surrounding a work machine in natural language; (Cherney ¶ 18 discloses that “operator 116 provides a voice command through a microphone in interface 114, machine 102 may send information representative of the received voice command to remote server computing system 108 which performs speech recognition and natural understanding on the voice input” by the speech processing system 126)
a command acquiring part (Cherney Fig. 1 speech processing system 126) configured to acquire a command from an operator of the work machine in natural language; and (Cherney ¶ 18 discloses that “operator 116 provides a voice command through a microphone in interface 114, machine 102 may send information representative of the received voice command to remote server computing system 108 which performs speech recognition and natural understanding on the voice input” by the speech processing system 126)
a control part (Cherney in at least Fig. 2 control system 152) configured to control movement of the work machine based on interpretation of the command acquired by the command acquiring part and the information verbalized by the verbalization part, (Cherney ¶ 7 discloses detecting a speech processing trigger and performing speech processing “such as speech recognition and natural language understanding,” such that “Control logic 181 can thus implement a control algorithm that is used to control machine 102 (or parts of machine 102) or other parts of the architecture 100 shown in FIG. 1, based upon a speech input by operator 116 or from another operator,” see ¶ 32)
the interpretation being given by a predetermined language model. (Cherney ¶ 32 discloses providing speech input to “natural language understanding logic 178 [i.e., a predetermined language model] which can generate a natural language understanding result indicative of a semantic meaning of the speech input.” Also see at least ¶ 35 “Speech recognition logic and natural language understanding logic 176 and 178, respectively, generate outputs that can be provided to control logic 181.” Also see ¶ 37.)
Regarding claim 11, Cherney discloses a non-transitory computer-readable recording medium storing instructions that, when executed by a computer, cause the computer to operate as the work machine of claim 1. (Cherney ¶ 79)
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Cherney et al. (U.S. Patent Publication Number 2019/0198015) in view of Liu et al. (U.S. Patent Publication Number 2021/0151050).
Regarding claim 3, Cherney discloses the work machine according to claim 1, but does not expressly disclose:
a determining part configured to determine whether an urgency of the command acquired by the command acquiring part is high or low,
wherein the control part is further configured to: when the determining part determines that the command is one of a high urgency, control the work machine to make a predetermined movement that conforms to the command; and
when the determining part determines that the command is one of a low urgency, control the movement of the work machine based on the interpretation, given by the predetermined language model, of the command acquired by the command acquiring part and the information verbalized by the verbalization part.
However, Liu discloses:
a determining part configured to determine whether an urgency of the command acquired by the command acquiring part is high or low, (Liu ¶ 52 discloses that processing unit 32 has a “dynamic priority order that determines the category of the input used to process the machine basic control commands” and provides for adjustments to importance and communication prioritization, wherein the input includes “input of a voice command of a human being,” see ¶ 51)
wherein the control part is further configured to: when the determining part determines that the command is one of a high urgency, control the work machine to make a predetermined movement that conforms to the command; and (Liu ¶ 52 discloses that “Units with a higher priority order will process the input of the machine basic control command or the machine motion control command earlier than the units with a lower priority order,” wherein the machine basic control command and the machine motion control command includes a motion path instruction and a steering angle, see ¶ 47)
when the determining part determines that the command is one of a low urgency, control the movement of the work machine based on the interpretation, given by the predetermined language model, of the command acquired by the command acquiring part and the information verbalized by the verbalization part. (Liu ¶ 53 discloses that when a “control command operation cannot be executed by the program of the second operation, then, its priority will be lowered [i.e., determining that a command is of low urgency],” and that the “processing unit 32 will continue to take other operations,” including “executing a conditional request operation (conditional request) of one or more machining tasks of a specific industrial machinery 4” based on receiving an input of a voice command 11, see ¶ 36. Also see Fig. 1.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the commands of Cherney with determining whether an urgency of the command acquired by the command acquiring part is high or low, as disclosed by Liu, with reasonable expectation of success, to simulate human beings for specific scenario inputs of the machine basic control command or the machine motion control command to try to optimize the propensity of the first operation (Liu ¶ 53), rendering the limitation to be an obvious modification.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Cherney et al. (U.S. Patent Publication Number 2019/0198015) in view of Liu et al. (U.S. Patent Publication Number 2021/0151050), further in view of Su et al. (U.S. Patent Publication Number 2024/0046931).
Regarding claim 4, Cherney in combination with Liu discloses the work machine according to claim 3, but does not expressly disclose:
the determining part is further configured to determine that the command is one of the high urgency when a predetermined word that indicates the high urgency is included in text of the command acquired by the command acquiring part.
However, Su discloses:
the determining part is further configured to determine that the command is one of the high urgency when a predetermined word that indicates the high urgency is included in text of the command acquired by the command acquiring part. (Su ¶ 626 discloses “the user may end current multi-round voice interaction by using a high-priority voice instruction. In a possible example, for example, the first apparatus may record some high-priority instructions,” wherein the voice instruction may include a “hot word”)
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have modified the combination of Cherney and Liu to determine that the command is one of the high urgency when a predetermined word that indicates the high urgency is included in text of the command acquired by the command acquiring part, as disclosed by Su, with reasonable expectation of success, to optimize applicability of an operation of responding to the voice instruction of the user (Su ¶ 14) and to help balance accuracy and efficiency of voice recognition (Su ¶ 18), rendering the limitation to be an obvious modification.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Cherney et al. (U.S. Patent Publication Number 2019/0198015) in view of Su et al. (U.S. Patent Publication Number 2024/0046931).
Regarding claim 6, Cherney discloses the work machine according to claim 5, but does not expressly disclose:
the selection part is further configured to select the predetermined language model between the first language model and the second language model based on at least one of a length of the command acquired by the command acquiring part or a rarity of a word included in the command.
However, Su discloses:
the selection part is further configured to select the predetermined language model between the first language model and the second language model based on at least one of a length of the command acquired by the command acquiring part or a rarity of a word included in the command. (Su ¶ 626 discloses “the user may end current multi-round voice interaction by using a high-priority voice instruction. In a possible example, for example, the first apparatus may record some high-priority instructions,” wherein the voice instruction may include a “hot word.” One having ordinary skill in the art would recognize that a hot word is a unique word with a “rarity” to be recognized as a keyword to implement the instruction.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have combined the commands of Cherney with selecting a predetermined language model based on a rarity of a word included in the command, as disclosed by Su, with reasonable expectation of success, to optimize applicability of an operation of responding to the voice instruction of the user (Su ¶ 14) and to shorten duration within which the first apparatus responds to a user instruction (Su ¶ 14), rendering the limitation to be an obvious modification.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Kim et al. (U.S. Patent Publication Number 2023/0206922) discloses a dialogue system provided in a vehicle including: a speech recognizer module configured to convert a speech of a user into a plurality of candidate texts, and prioritize the plurality of candidate texts; an understanding module configured to determine a first action corresponding to a first candidate text with a highest priority among the plurality of candidate texts; and a controller configured to attempt to perform the determined first action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHANIE T SU whose telephone number is (571)272-5326. The examiner can normally be reached Monday to Friday, 9:30AM - 5:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANISS CHAD can be reached at (571)270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STEPHANIE T SU/Patent Examiner, Art Unit 3662