DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-5, 7-10, 12-18 and 20-23 have been examined.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/27/26 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1, 13, 14 and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-5, 7-10, 12-18 and 20-23 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 10,721,195 and claims 1-26 of U.S. Patent No. 11,595,336. Although the claims at issue are not identical, they are not patentably distinct from each other because the present application and the related patents all disclose a system/method for detection of email compromise by identifying a first party as a trusted sender based on specified criteria; determining whether a third party is the same as or similar to the first party; and performing security and/or reporting steps based on specific determinations. Although the criteria for determining a trusted sender and the security steps taken are slightly different, it is well known in the art to apply different identification parameters and security measures in response to detection of email compromise. Please see the comparison of exemplary claims below, with emphasis added on the minor distinctions.
Instant Application
U.S. Patent No. 10,721,195
1. A system for detection of email risk, comprising:
a processor configured to:
automatically determine that a first party is considered by the system to be trusted by a second party, based on at least one of determining that the first party is on a whitelist and that the first party is in an address book associated with the second party;
receive a message addressed to the second party from a third party;
determine that the message poses a risk in response to determining that a display name of the first party matches a display name of the third party, but an email address of the third party and an email address of the first party are different;
responsive to determining that the message poses a risk, automatically perform a security action comprising at least one of marking the message up with a warning and quarantining the message; and
responsive to determining that the message comprises a hyperlink, cause a proxying of loading of content associated with the hyperlink so that a request from the second party for the content associated with the hyperlink is received by a proxy;
and a memory coupled to the processor and configured to provide the processor with instructions.
1. A system for detection of business email compromise, comprising:
a processor configured to:
automatically determine that a first party is trusted by a second party, based on at least one of determining that the first party and second party belong to the same organization and that at least a threshold number of messages have been transmitted between the second party and the first party during a period of time that exceeds a threshold time;
receive a message addressed to the second party from a third party, the third party distinct from the first party;
perform a risk determination of the received message to determine if the received message poses a risk by determining that a display name of the first party and a display name of third party are the same or that a domain name of the first party and a domain name of the third party are similar, wherein similarity is determined based on having a string distance below a first threshold, or being conceptually similar based on a list of conceptually similar character strings;
responsive to the first party being trusted by the second party, and the received message is determined to pose a risk, automatically perform a security action and a report generation action without having received any user input from a user associated with the second party in response to the message, wherein the security action comprises marking the message up with a warning or quarantining the message, wherein the report generating action comprises including information about the received message in a report accessible to an admin of the system; and
a memory coupled to the processor and configured to provide the processor with instructions.
13. A system for determining whether an electronic message is deceptive, comprising:
a processor configured to:
automatically determine whether a first party is considered trusted by a second party, based on at least on one of determining that the first party is on a whitelist and that the first party is in an address book associated with the second party;
receive a message addressed to the second party from a third party;
determine if the received message poses a risk by determining that a display name of the first party and a display name of third party are the same, but an email address of the third party and an email address of the first party are different;
responsive to the first party is considered trusted by the second party, and the received message is determined to pose a risk, determine that the message is deceptive;
responsive to a determination that the first party is not considered trusted by the second party, determine that the message is not deceptive;
responsive to the message being found deceptive, automatically perform a security action comprising at least one of marking the message up with a warning or quarantining the message; and
responsive to determining that the message comprises a hyperlink, cause a proxying of loading of content associated with the hyperlink so that a request from the second party for the content associated with the hyperlink is received by a proxy;
responsive to the message being found not deceptive and not comprising a hyperlink, deliver the message to the second party;
and a memory coupled to the processor and configured to provide the processor with instructions.
10. A non-monotonic system for determining whether an electronic message is deceptive, comprising:
a processor configured to:
automatically determine whether a first party is trusted by a second party, based on at least one of determining that the first party and second party belong to the same organization and that at least a threshold number of messages have been transmitted between the second party and the first party during a period of time that exceeds a threshold time;
receive a message addressed to the second party from a third party, the third party distinct from the first party;
perform a risk determination of the received message to determine if the received message poses a risk by determining that a display name of the first party and a display name of third party are the same or that a domain name of the first party and a domain name of the third party are similar, wherein similarity is determined based on having a string distance below a first threshold, or being conceptually similar based on a list of conceptually similar character strings;
responsive to the first party being trusted by the second party, and the received message is determined to pose a risk, determine that the message is deceptive;
responsive to a determination that the first party is not trusted by the second party, determine that the message is not deceptive;
responsive to the message being found deceptive, automatically perform a security action and a report generation action without having received any user input from a user associated with the second party in response to the message, wherein the security action comprises marking the message up with a warning or quarantining the message, wherein the report generating action comprises including information about the received message in a report accessible to an admin of the system; and
responsive to the message being found not deceptive, deliver the message to the second party; and
a memory coupled to the processor and configured to provide the processor with instructions.
Instant Application
U.S. Patent No. 11,595,336
1. A system for detection of email risk, comprising:
a processor configured to:
automatically determine that a first party is considered by the system to be trusted by a second party, based on at least one of determining that the first party is on a whitelist and that the first party is in an address book associated with the second party;
receive a message addressed to the second party from a third party;
determine that the message poses a risk in response to determining that a display name of the first party matches a display name of the third party, but an email address of the third party and an email address of the first party are different;
responsive to determining that the message poses a risk, automatically perform a security action comprising at least one of marking the message up with a warning and quarantining the message; and
responsive to determining that the message comprises a hyperlink, cause a proxying of loading of content associated with the hyperlink so that a request from the second party for the content associated with the hyperlink is received by a proxy;
and a memory coupled to the processor and configured to provide the processor with instructions.
1. A system for detection of email risk, comprising:
a processor configured to:
automatically determine that a first party is considered by the system to be trusted by a second party, based on at least one of determining that the first party is on a whitelist and that the first party is in an address book associated with the second party;
receive a message addressed to the second party from a third party, the third party distinct from the first party;
perform a risk determination of the message by determining whether the message comprises a hyperlink and by determining whether a display name of the first party and a display name of third party are the same or that a domain name of the first party and a domain name of the third party are similar, wherein similarity is determined based on having a string distance below a first threshold or being conceptually similar based on a list of conceptually similar character strings;
responsive to the first party being trusted by the second party, and that the message is determined to pose a risk, automatically perform a security action and a report generation action without having received any user input from a user associated the second party in response to the message, wherein the security action comprises replacing the hyperlink in the message with a proxy hyperlink, wherein the report generating action comprises including information about the received message in a report accessible to an admin of the system; and
a memory coupled to the processor and configured to provide the processor with instructions.
13. A system for determining whether an electronic message is deceptive, comprising:
a processor configured to:
automatically determine whether a first party is considered trusted by a second party, based on at least on one of determining that the first party is on a whitelist and that the first party is in an address book associated with the second party;
receive a message addressed to the second party from a third party;
determine if the received message poses a risk by determining that a display name of the first party and a display name of third party are the same, but an email address of the third party and an email address of the first party are different;
responsive to the first party is considered trusted by the second party, and the received message is determined to pose a risk, determine that the message is deceptive;
responsive to a determination that the first party is not considered trusted by the second party, determine that the message is not deceptive;
responsive to the message being found deceptive, automatically perform a security action comprising at least one of marking the message up with a warning or quarantining the message; and
responsive to determining that the message comprises a hyperlink, cause a proxying of loading of content associated with the hyperlink so that a request from the second party for the content associated with the hyperlink is received by a proxy;
responsive to the message being found not deceptive and not comprising a hyperlink, deliver the message to the second party;
and a memory coupled to the processor and configured to provide the processor with instructions.
13. A system for determining whether an electronic message is deceptive, comprising:
a processor configured to:
automatically determine whether a first party is considered trusted by a second party, based on at least on one of determining that the first party is on a whitelist and that the first party is in an address book associated with the second party;
receive a message addressed to the second party from a third party, the third party distinct from the first party;
determine if the received message poses a risk by determining whether the message comprises a hyperlink and by determining that a display name of the first party and a display name of third party are the same or that a domain name of the first party and a domain name of the third party are similar, wherein similarity is determined based on having a string distance below a first threshold or being conceptually similar based on a list of conceptually similar character strings;
responsive to the first party is considered trusted by the second party, and the received message is determined to pose a risk, determine that the message is deceptive;
responsive to a determination that the first party is not considered trusted by the second party, determine that the message is not deceptive;
responsive to the message being found deceptive, automatically perform a security action and a report generation action without having received any user input from a user associated with the second party in response to the message, wherein the security action comprises replacing the hyperlink in the message with a proxy hyperlink, wherein the report generating action comprises including information about the received message in a report accessible to an admin of the system; and
responsive to the message being found not deceptive, deliver the message to the second party; and
a memory coupled to the processor and configured to provide the processor with instructions.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7, 10, 13-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Osipkov U.S. 2012/0246725 (hereinafter Osipkov) in view of Dreller et al. U.S. 2014/0082726 (hereinafter Dreller) and further in view of Starink U.S. 2015/0381653 (hereinafter Starink).
As per claims 1 and 14, Osipkov discloses a system/method for detection of email risk, comprising:
a processor configured to:
automatically determine that a first party is considered by the system to be trusted by a second party, based on at least one of determining that the first party is on a whitelist and that the first party is in an address book associated with the second party (Osipkov: [0022]: automated process to determine whether the message is desired or unwanted, identify properties of the message including name of the sender; [0024]: message sent from verified sender who are identified in an address book of the user);
receive a message addressed to the second party from a third party (Osipkov: [0019]: receive message addressed to the user);
determine that the message poses a risk in response to determining that a display name of the first party matches a display name of the third party (Osipkov: [0022]-[0024]: determine if name of the sender is in the address book);
responsive to determining that the message poses a risk, automatically perform a security action comprising at least one of marking the message up with a warning and quarantining the message (Osipkov: [0007]: exclude the message; [0021]: quarantining the message); and
a memory coupled to the processor and configured to provide the processor with instructions.
Osipkov discloses determining the name of the sender to determine whether the sender is a verified sender (Osipkov: [0024]). Osipkov does not explicitly disclose determining that the display name of the sender is the same as that of a trusted sender, but that an email address of the third party and an email address of the first party are different. However, Dreller discloses identifying phishing emails that purport to be from a trusted domain but are actually from another domain not owned by the legitimate domain (Dreller: [0051]-[0052]: search display name and determine non-domain phishing to identify legitimate display name with phishing address). It would have been obvious to one having ordinary skill in the art to identify suspicious email with a legitimate display name but a suspicious domain address because Osipkov and Dreller are analogous art. The motivation to combine would be to filter out seemingly legitimate email based on additional analysis of metadata.
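For illustration only, the display-name risk test recited in claim 1 (a display name matching that of a trusted sender, paired with a different email address) can be sketched as follows. The helper name and the sample addresses are hypothetical assumptions and are not drawn from Osipkov or Dreller:

```python
from email.utils import parseaddr

def poses_display_name_risk(trusted_from: str, incoming_from: str) -> bool:
    """Flag a message whose sender shows a trusted party's display name
    but uses a different email address (the claimed risk determination)."""
    trusted_name, trusted_addr = parseaddr(trusted_from)
    sender_name, sender_addr = parseaddr(incoming_from)
    # Same display name, different underlying address -> risky
    return (trusted_name == sender_name
            and trusted_addr.lower() != sender_addr.lower())

# Hypothetical example: lookalike sender impersonating a trusted contact
print(poses_display_name_risk('"Alice Smith" <alice@corp.com>',
                              '"Alice Smith" <alice@evil.example>'))  # True
```

In practice a system of this kind would compare the incoming sender against every entry on the whitelist or in the recipient's address book, not a single trusted address.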
Osipkov discloses identifying a link contained in an email communication (Osipkov: [0021]: evaluate content of messages in order to differentiate unwanted messages from desirable messages… examine content associated with a message, such as attached files and hyperlinks). Osipkov does not explicitly disclose, responsive to determining that the message comprises a hyperlink, causing a proxying of loading of content associated with the hyperlink so that a request from the second party for the content associated with the hyperlink is received by a proxy. However, Starink discloses replacing a suspicious URL with an alternate link to a trusted resource (Starink: [0031]-[0032]; [0047]: replace the URL with an alternate link to a trusted resource). It would have been obvious to one having ordinary skill in the art to replace a suspicious link contained in an email with a proxy link to a trusted resource because the references are analogous art. The motivation would be to protect the user from accessing malicious content.
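The claimed proxying of hyperlink loading can likewise be sketched, again purely for illustration. The proxy endpoint URL and the regex-based rewrite below are assumptions for the sketch, not the mechanism actually disclosed by Starink:

```python
import re
from urllib.parse import quote

# Hypothetical proxy endpoint that vets a URL before serving its content
PROXY_BASE = "https://proxy.example.com/fetch?url="

def rewrite_links(html_body: str) -> str:
    """Replace each href target with a proxy link so that the recipient's
    click is first received by the proxy rather than the original site."""
    def to_proxy(match: re.Match) -> str:
        original = match.group(1)
        # Percent-encode the original URL so it survives as a query value
        return 'href="%s%s"' % (PROXY_BASE, quote(original, safe=""))
    return re.sub(r'href="([^"]+)"', to_proxy, html_body)

body = '<a href="http://unknown.example/login">Reset password</a>'
print(rewrite_links(body))
```

When the recipient clicks the rewritten link, the proxy receives the request and can display a warning or redirect, matching the post-delivery checks recited in dependent claims 2-5.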
As per claims 2 and 15, Osipkov as modified discloses the limitations of claims 1 and 14 respectively. Osipkov as modified further discloses wherein a request associated with the hyperlink causes the system to: determine whether a site associated with the hyperlink is associated with risk; and based on the determination whether the site associated with the hyperlink is associated with risk, cause a warning to be displayed or a redirection to be made (Osipkov: [0021]; Starink: [0031]-[0032]). The same rationale applies here as above in rejecting claims 1 and 14.
As per claims 3 and 16, Osipkov as modified discloses the limitations of claims 2 and 15 respectively. Osipkov as modified further discloses determining whether the site associated with the hyperlink is associated with risk before the request associated with the hyperlink is received (Osipkov: [0021]: examine content before user accesses the link; Starink: [0031]-[0032]; [0047]: once a link has been found to be associated with a potentially malicious resource, the modifier module may be executed to replace the link with an alternate link/proxy link). The same rationale applies here as above in rejecting claim 1.
As per claims 4 and 17, Osipkov as modified discloses the limitations of claims 2 and 15 respectively. Osipkov as modified further discloses determining whether the site associated with the hyperlink is associated with risk in response to receiving the request associated with the hyperlink (Starink: [0060]; [0077]). The same rationale applies here as above in rejecting claim 1.
As per claims 5 and 18, Osipkov as modified discloses the limitations of claims 1 and 14 respectively. Osipkov as modified further discloses, in response to receiving a request associated with the hyperlink, and based on a result of the verification, causing a warning to be displayed or a redirection to be made (Starink: [0086]-[0094]). It would have been obvious to one having ordinary skill in the art to redirect the user to an intermediary node for additional security analysis because the references are analogous art involving detection of malicious email communications. The motivation to combine would be to track and monitor communication behaviors associated with unknown resources prior to determining whether they are malicious or safe.
As per claim 7, Osipkov as modified discloses the system of claim 1. Osipkov as modified further discloses wherein the security action comprises at least one of: initiating a multi-factor authentication verification, modifying the display name of the message, transmitting a notification or a warning to an address associated with the second party, collecting information comprising at least one of an IP address, a cookie, and browser version information, and transmitting a confirmation request to an address associated with the first party, the confirmation request comprising at least a portion of the message (Osipkov: [0021]-[0022]; Dreller: [0054]; Starink: [0057]). It would have been obvious to one having ordinary skill in the art to take various security measures in response to detection of suspicious/malicious communication, as is well known in the art.
As per claim 10, Osipkov as modified discloses the system of claim 1. Osipkov as modified further discloses wherein the risk determination is further based at least in part on at least one of: an indication of spoofing, an indication of account takeover, a presence of a reply-to address, a geographic inconsistency, detection of a new signature file, detection of a new display name, detection of high-risk email content, detection of an abnormal delivery path, and based on analysis of attachments (Osipkov: [0016]-[0018]).
As per claims 13 and 20, Osipkov discloses a system/method for determining whether an electronic message is deceptive, comprising:
a processor configured to:
automatically determine whether a first party is considered trusted by a second party, based on at least on one of determining that the first party is on a whitelist and that the first party is in an address book associated with the second party (Osipkov: [0022]: automated process to determine whether the message is desired or unwanted, identify properties of the message including name of the sender; [0024]: message sent from verified sender who are identified in an address book of the user);
receive a message addressed to the second party from a third party (Osipkov: [0019]: receive message addressed to the user);
determine if the received message poses a risk by determining that a display name of the first party and a display name of third party are the same (Osipkov: [0022]-[0024]: determine if name of the sender is in the address book);
responsive to a determination that the first party is not considered trusted by the second party, determine that the message is not deceptive (Osipkov: [0021]);
responsive to the message being found deceptive, automatically perform a security action comprising at least one of marking the message up with a warning or quarantining the message (Osipkov: [0007]: exclude the message; [0021]: quarantining the message); and
responsive to the message being found not deceptive and not comprising a hyperlink, deliver the message to the second party (Osipkov: [0024]: deliver the message to the user by placing the messages in the “trusted mail” folder); and
a memory coupled to the processor and configured to provide the processor with instructions.
Osipkov discloses determining the name of the sender to determine whether the sender is a verified sender (Osipkov: [0024]). Osipkov does not explicitly disclose determining that the display name of the sender is the same as that of a trusted sender, but that an email address of the third party and an email address of the first party are different; and, responsive to the first party being considered trusted by the second party and the received message being determined to pose a risk, determining that the message is deceptive. However, Dreller discloses identifying phishing emails that purport to be from a trusted domain but are actually from another domain not owned by the legitimate domain (Dreller: [0051]-[0052]: search display name and determine non-domain phishing to identify legitimate display name with phishing address). It would have been obvious to one having ordinary skill in the art to identify suspicious email with a legitimate display name but a suspicious domain address because Osipkov and Dreller are analogous art. The motivation to combine would be to filter out seemingly legitimate email based on additional analysis of metadata.
Osipkov discloses identifying a link contained in an email communication (Osipkov: [0021]: evaluate content of messages in order to differentiate unwanted messages from desirable messages… examine content associated with a message, such as attached files and hyperlinks). Osipkov does not explicitly disclose, responsive to determining that the message comprises a hyperlink, causing a proxying of loading of content associated with the hyperlink so that a request from the second party for the content associated with the hyperlink is received by a proxy. However, Starink discloses replacing a suspicious URL with an alternate link to a trusted resource (Starink: [0031]-[0032]; [0047]: replace the URL with an alternate link to a trusted resource). It would have been obvious to one having ordinary skill in the art to replace a suspicious link contained in an email with a proxy link to a trusted resource because the references are analogous art. The motivation would be to protect the user from accessing malicious content.
Claims 8, 9 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Osipkov in view of Dreller and further in view of Starink and further in view of Gupta et al. U.S. 2017/0206545 (hereinafter Gupta).
As per claim 8, Osipkov as modified discloses the system of claim 7. Osipkov as modified does not explicitly disclose wherein a confirmation received in response to the confirmation request comprises at least one of an entered code or a clicked link, wherein the link is included in the confirmation request. However, Gupta discloses sending a confirmation request to the sender to confirm that the message is from a valid e-mail address (Gupta: [0079]; [0145]). It would have been obvious to one having ordinary skill in the art to request the sender to confirm the validity of the e-mail because the references are analogous art involving e-mail communication systems where the legitimacy of the sender is verified. The motivation to combine would be to ensure that communication from the sender is from a trusted party instead of from auto-generated spam message systems.
As per claim 9, Osipkov as modified discloses the system of claim 8. Osipkov as modified further discloses wherein information associated with the clicked link is collected, wherein the information comprises at least one of the IP address, the cookie, and the browser version information (Gupta: [0145]: sender verification sends an activation link to the sender’s email). The same rationale applies here as above in rejecting claim 8.
As per claim 12, Osipkov as modified discloses the system of claim 1. Osipkov as modified does not explicitly disclose wherein the security action further comprises transmitting a confirmation request to an address associated with the first party, the confirmation request comprising at least a portion of the message, wherein the message is delivered to the second party based on verification of information received in response to the confirmation request. However, Gupta discloses sending a confirmation request to the sender to confirm that the message is from a valid e-mail address prior to sending it to the recipient (Gupta: [0079]; [0145]). It would have been obvious to one having ordinary skill in the art to request the sender to confirm the validity of the e-mail because the references are analogous art involving e-mail communication systems where the legitimacy of the sender is verified. The motivation to combine would be to ensure that communication from the sender is from a trusted party instead of from auto-generated spam message systems.
Claims 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Osipkov in view of Dreller and further in view of Starink and further in view of Goodman et al. U.S. 2007/0039038 (hereinafter Goodman).
As per claims 21-23, Osipkov as modified discloses the limitations of claim 1. Osipkov as modified does not explicitly disclose wherein the display name of the first party matches the display name of the third party if the display name of the first party is the same as, conceptually similar to, or has a string distance below a threshold from, the display name of the third party. However, Goodman discloses comparing the display name of a sender with that of a legitimate sender by string comparison or visual similarity (Goodman: [0047]-[0049]). It would have been obvious to one having ordinary skill in the art to evaluate the display name of the sender by string comparison or visual similarity to detect a suspicious display name because such anti-spoofing techniques are well known in the art.
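For illustration, the claimed "string distance below a threshold" comparison of display names can be sketched with a standard Levenshtein edit distance. The threshold value and function names below are illustrative assumptions, not taken from Goodman:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def names_match(name_a: str, name_b: str, threshold: int = 2) -> bool:
    """Treat display names as matching when identical or within a small
    edit distance (the threshold of 2 here is an illustrative choice)."""
    return name_a == name_b or levenshtein(name_a, name_b) < threshold

# A one-character lookalike substitution (lowercase l -> uppercase I)
print(names_match("Alice Smith", "AIice Smith"))  # True
```

A "conceptually similar" comparison, as recited in the claims, would instead consult a curated list of lookalike character strings (e.g., visually confusable glyph pairs) rather than a raw edit distance.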
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bruno et al. U.S. 8,271,588 discloses method for filtering fraudulent email messages.
Coomer U.S. 8,255,572 discloses method to detect and prevent e-mail scams.
Laudanski et al. U.S. 2012/0166458 discloses spam tracking analysis reporting system.
Cunningham U.S. 2011/0307567 discloses method for detecting and filtering unsolicited and undesired electronic messages.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIN HON (ERIC) CHEN whose telephone number is (571) 272-3789. The examiner can normally be reached Monday to Thursday, 9am-7pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lynn Feild can be reached at 571-272-2092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHIN-HON (ERIC) CHEN/Primary Examiner, Art Unit 2431