DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 7/29/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant’s arguments, see page 16, filed 12/19/2025, with respect to objections to the abstract have been fully considered and are persuasive. The objection to the abstract has been withdrawn.
Applicant’s arguments, see page 17, filed 12/19/2025, with respect to objections to the drawings have been fully considered and are persuasive. The objections to the drawings have been withdrawn.
Applicant’s arguments, see page 17, filed 12/19/2025, with respect to objections to claims 1-7 have been fully considered and are persuasive. The objections to the claims have been withdrawn.
Applicant’s arguments, see pages 17-23, filed 12/19/2025, with respect to the rejection of claims 1-20 under 35 USC 112(a) have been fully considered and are persuasive. These rejections have been withdrawn.
Applicant’s arguments, see pages 23-25, filed 12/19/2025, with respect to the rejection of claims 1-20 under 35 USC 112(b) have been fully considered and are persuasive. These rejections have been withdrawn.
Applicant’s arguments, see pages 25-32, filed 12/19/2025, with respect to the rejection of claims 1-20 under 35 USC 101 have been fully considered and are persuasive. These rejections have been withdrawn.
Applicant’s arguments, see pages 33 and 34, filed 12/19/2025, with respect to the rejection of claims 1-20 under 35 USC 102(a)(2) and 103 have been fully considered. As the scope of the claims has been altered by substantial amendments, these rejections have been withdrawn, and new ground(s) of rejection is made in view of WELDEMARIAM et al (Doc ID US 20200387603 A1), BAO (Doc ID US 10715604 B1), DESHPANDE et al (Doc ID US 11348601 B1), RODRIGUEZ (Doc ID US 20250200173 A1), ZHAO et al (Doc ID US 20250208000 A1), QUY (Doc ID US 20210118323 A1), HA et al (Doc ID US 20210117717 A1), KHANZADA (Doc ID US 20220037022 A1), ASAF (Doc ID US 20170293874 A1), and THAMPY (Doc ID US 20190068627 A1).
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.
Regarding claims 1, 8, and 15:
Claim 1 recites, “synchronizing … communication data associated with each of the multiple communication modalities to determine a relational context of a conversation flow …”. Claims 8 and 15 recite similar language. This limitation is not supported by the original disclosure, and thus constitutes new matter. Paragraph 0079, inter alia, of the specification teaches, “Each modality’s data stream is timestamped and synchronized to preserve the relational context of the conversation flow …” (emphasis added). The specification is explicit that “relational context” is not determined by the invention. It is merely “preserved” as a byproduct of the synchronization recited in this limitation and the timestamps of another. This rejection can be overcome by amending the claims such that they recite only that subject matter which is explicitly supported by the specification.
Regarding claims 2-7, 9-14, and 16-20:
They are dependent on one or more rejected claims, and thus inherit those rejections. This rejection could be overcome by overcoming the rejection(s) to any claims upon which these claims depend, or by amending the claims such that they are no longer dependent on any rejected claim.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Regarding claims 1, 8, and 15:
Claim 1 recites, “… correlate the preprocessed data across the multiple communication modalities for temporal alignment of communication events in the communication data and identity verification of the user …”. Claims 8 and 15 recite similar language. The claims are indefinite because the recited “identity verification of the user” lacks antecedent basis. It is unclear to which identity verification the claim refers, as no step of verification has been previously recited in the claim. This rejection can be overcome by amending the claims such that an identity verification of the user is positively claimed.
Regarding claims 5, 12, and 19:
Claim 5 recites, “… analysis of physical characteristic data …”. Claims 12 and 19 recite similar language. The claims are indefinite because the term “physical characteristics” lacks antecedent basis. It is unclear to which “physical characteristics” the claim refers. While one of ordinary skill in the art may infer that the term refers to speech, facial, or typing characteristics, the claims have not previously provided any basis by which to determine whether one, some, or all of these characteristics are to be analyzed as part of the identity verification. This rejection can be overcome by amending the claims such that the characteristics to be considered “physical characteristics” are positively claimed.
Regarding claims 2-4, 6, 7, 9-11, 13, 14, 16-18, and 20:
They are dependent on one or more rejected claims, and thus inherit those rejections. This rejection could be overcome by overcoming the rejection(s) to any claims upon which these claims depend, or by amending the claims such that they are no longer dependent on any rejected claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 8, 11, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over WELDEMARIAM et al (Doc ID US 20200387603 A1), and further in view of ZHAO et al (Doc ID US 20250208000 A1), DESHPANDE et al (Doc ID US 11348601 B1), and RODRIGUEZ (Doc ID US 20250200173 A1).
Regarding claim 1:
WELDEMARIAM teaches:
A system for integrative analysis of multimodal communication features for misappropriation detection, wherein the system is further configured for alignment of user data across multiple communication modalities for reinforcing security of electronic communications, the system comprising: a processing device; a non-transitory storage device containing instructions when executed by the processing device, causes the processing device to perform the steps of ([0003] "A system … may include a hardware processor." and [0005] "A computer readable storage medium storing a program of instructions executable by a machine to perform …"):
receive, via application programming interfaces (APIs) associated with a plurality of communication platforms, communication data associated with a user during a user process from multiple communication modalities ([0102] "Examples of a protocol for sharing information between devices may include, … Application Programming Interface (API) …" and [0026] "... inputs received from a ... image ...; analyzing ... text messages, notifications, and emails received), joint analysis of phone calls received ..."),
wherein the communication modalities comprise data associated with spoken communication, written text communication, and non-verbal communication cues, wherein the communication data comprises a plurality of data formats ([0026] "... analyzing real-time interaction, engagement pattern or sequence, facial expression ...; analyzing … conversations (e.g., text messages, notifications, and emails received), joint analysis of phone calls received …"),
wherein receiving the communication data further comprises capturing real time data via multi-core processors and high-bandwidth network interfaces ([0101] "... detecting situational context of the user and user cognitive context may involve further analyzing data received from a plurality of data sources such as ... data a communication device (e.g., one or more of Wi-Fi gateway, Beacon, network device, etc.) can capture (e.g., via a camera, microphone), etc.");
preprocess the received communication data in the plurality of data formats to a standardized format to generate preprocessed data, wherein preprocessing the communication data further comprises standardizing data structures in the communication data ([0082] "… Raw data 302 can be received and in a data preprocessing step 304, the received data can be labeled, for example, for preparation for supervised learning."); and
Examiner notes it is well-known in the art that, in the context of machine learning (ML), pre-processing input data is a necessary step, including transforming the input data into a structured format which can be processed by the model. One of ordinary skill in the art, where the prior art recites the act of pre-processing, would consider this structured format implicit and inherent.
extracting, via a graphics processing unit (GPU), one or more communication patterns associated with the user from the communication data, wherein the one or more communication patterns comprise speech intonation, typing speed, and facial expressions ([0103] "FIG. 5 is a diagram showing … hardware processors 502 such as … a graphic process unit (GPU) …", [0037] "... The device monitors ... way of typing, speed ...", and [0086] "… The user's cognitive state can be estimated based on reading or detecting user facial expression ..., detecting voice volume, tone or another speech attribute ... and/or another characteristic.");
analyze the preprocessed data associated with each of the multiple communication modalities, via a machine learning model, to identify one or more behavioral deviations in the one or more communication patterns in comparison with historical behavior patterns associated with the user ([0079] "The device risk analyzer 236 may determine one or more possible risks ... based on a user state and past history of user actions on the device. For example, a … machine learning model trained using past history can be run.");
performing … multi-tiered analysis of the preprocessed data comprising (i) parsing video data for facial micro-expressions ([0086] "… The user's cognitive state can be estimated based on reading or detecting user facial expression ..."),
Examiner notes that the broadest reasonable interpretation of “multi-tiered analysis” encompasses the combination of analyses of the limitations labelled (i)-(iii).
ZHAO teaches the following limitation(s) not taught by WELDEMARIAM:
correlate the preprocessed data across the multiple communication modalities for temporal alignment of communication events in the communication data and identity verification of the user, wherein correlating the preprocessed data further comprises: constructing timestamps for communication data associated with each of the multiple communication modalities ([0137] "... timestamps and calibration algorithms are used to achieve clock synchronization to ensure that clocks of multiple devices or systems remain consistent.");
synchronizing, via a predetermined clock synchronization protocol, communication data associated with each of the multiple communication modalities to determine a relational context of a conversation flow across the multiple communication modalities ([0137] "… As shown in FIG. 6, by adopting network time protocol (NTP), a clock of a computer can be synchronized to Universal Time Coordinated (UTC) ..." and [0146] "According to the embodiments of the present disclosure, the plurality of pieces of eye movement data, physiological data, behavior data, and voice data of the driver can be synchronized."); and
Collecting user data including spoken, written, and non-verbal data, pre-processing the data so as to standardize it for use in training an ML model, extracting user behavioral features based on their manner of speech, typing speed, and facial expressions, and identifying deviations from historical user behavior are known techniques in the art, as demonstrated by WELDEMARIAM. Further, synchronizing various types of user data through the use of timestamps and a standard clock protocol is a known technique in the art, as demonstrated by ZHAO. It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to modify the detection of deviations in user behavior of WELDEMARIAM with the data synchronization of ZHAO with the motivation to ensure that data collected through different means at the same time are correctly aligned based on the time each data point was collected.
WELDEMARIAM also teaches:
performing, in parallel, … analysis of the preprocessed data (Fig. 3 illustrates the parallel processing of preprocessed data, and [0142] "... each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). ... two blocks shown in succession may, in fact, be accomplished ... in a partially or wholly temporally overlapping manner ...").
Performing analysis steps through the use of parallel processing is a well-known technique in the art, as demonstrated by WELDEMARIAM. The prior art shows, though not necessarily in the same embodiment as that utilized in the primary rejection above, the performance of parallel processing in the citations provided and implicitly in the disclosed use of a GPU, whose parallelization abilities are one of the primary reasons GPUs are used in ML applications. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the detection of deviations in user behavior of WELDEMARIAM and ZHAO with the parallel processing of WELDEMARIAM with the motivation to take advantage of the processing abilities of a multiprocessor architecture such as a GPU to perform the required analyses as quickly as possible.
DESHPANDE teaches the following limitation(s) not taught by the combination of WELDEMARIAM and ZHAO:
(ii) performing spectral analysis of audio data to identify predetermined stress markers (Col 19 lines 2-4 "The voice characteristics data 605 may also indicate other data 708 such as whether fear was detected from the audio data …"), and
(iii) analyzing content of a conversation associated with the user process to detect predetermined anomalies in user speech patterns comprising user hesitation in the conversation (Col 15 lines 41-46 "the user may … have provided permission to the system(s) 120 to record and analyze his or her voice/conversations to determine voice characteristics …", col 15 lines 57-58 "The frame feature vector(s) 514 may be derived by spectral analysis of the audio data 211.", and col 16 lines 20-22 "... process audio frame features and/or utterance level features to determine an uncertainty level."); and
Analyzing a spectrogram of a user’s speech to identify anomalies such as hesitation in speech is a known technique in the art, as demonstrated by DESHPANDE. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the detection of deviations in user behavior of WELDEMARIAM and ZHAO with the anomalous speech data detection of DESHPANDE with the motivation to utilize features of a user’s speech data through more than only language used, to better indicate intent and emotional state of the user.
RODRIGUEZ teaches the following limitation(s) not taught by the combination of WELDEMARIAM, ZHAO, and DESHPANDE:
in response to detecting that the preprocessed data is associated with a potential misappropriation event, insert one or more additional verification steps within the user process for additional identity verification of the user, without impeding the user process ([0086] "... When multimedia data ... does not pass the behavioral biometric analysis, the electronic device 10 may require additional authentication challenges before the user is permitted to conduct a desired transaction.").
Adding steps to the verification of a user’s identity based on anomalous findings in their behavior is a known technique in the art, as demonstrated by RODRIGUEZ. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the detection of deviations in user behavior of WELDEMARIAM, ZHAO, and DESHPANDE with the increased security measures of RODRIGUEZ with the motivation to make verification of a user more stringent in the event they are found to be suspicious in some way.
Regarding claim 4:
The combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ teaches:
The system of claim 1, wherein determining the relational context further comprises chronologically aligning facial expressions, vocal stresses and conversational content in the communication data associated with the user (ZHAO [0137] "… As shown in FIG. 6, by adopting network time protocol (NTP), a clock of a computer can be synchronized to Universal Time Coordinated (UTC) ..." and [0146] “According to the embodiments of the present disclosure, the plurality of pieces of eye movement data, physiological data, behavior data, and voice data of the driver can be synchronized.”).
Synchronizing captured images of facial features and voice data is a known technique in the art, as demonstrated by ZHAO. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the detection of deviations in user behavior of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ with the data synchronization of ZHAO with the motivation to ensure that data collected through different means at the same time are correctly aligned based on the time each data point was collected.
Regarding claims 8, 11, 15, and 18:
These claims are rejected with the same justification, mutatis mutandis, as their counterpart claims 1 and 4 above.
Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over WELDEMARIAM et al (Doc ID US 20200387603 A1), ZHAO et al (Doc ID US 20250208000 A1), DESHPANDE et al (Doc ID US 11348601 B1), and RODRIGUEZ (Doc ID US 20250200173 A1) as applied to claims 1, 8, and 15 above, and further in view of QUY (Doc ID US 20210118323 A1).
Regarding claim 2:
The combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ teaches:
The system of claim 1,
QUY teaches the following limitation(s) not taught by the combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ:
wherein receiving the communication data further comprises: capturing, via digital signal processing hardware, audio and video data ([0037] "... the biosensors may be in the form of a camera to monitor facial expressions, eye movements .... Similarly, the biosensor may be in the form of a microphone to monitor voice features or speech data …"); and
performing noise reduction and signal enhancement of the captured data ([0046] "… The biosensor sends the signal to a SPU (step 106) which amplifies the signal and reduces artifact and noise in the signal (step 108).").
Performing noise reduction and signal enhancement of captured audio and video data is a known technique in the art, as demonstrated by QUY. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the detection of deviations in user behavior of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ with the data cleaning of QUY with the motivation to utilize the highest quality data both in training the ML model and in detection to prevent both false positives and false negatives due to noisy data.
Regarding claims 9 and 16:
These claims are rejected with the same justification, mutatis mutandis, as their counterpart claim 2 above.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over WELDEMARIAM et al (Doc ID US 20200387603 A1), ZHAO et al (Doc ID US 20250208000 A1), DESHPANDE et al (Doc ID US 11348601 B1), and RODRIGUEZ (Doc ID US 20250200173 A1) as applied to claims 1, 8, and 15 above, and further in view of HA et al (Doc ID US 20210117717 A1), KHANZADA (Doc ID US 20220037022 A1), and ASAF (Doc ID US 20170293874 A1).
Regarding claim 3:
The combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ teaches:
The system of claim 1,
HA teaches the following limitation(s) not taught by the combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ:
wherein preprocessing the received data further comprises resizing video frames ([0061] "During preprocessing, unstructured image raw data is transformed before it is fed to the image processing models. ... Exemplary preprocessing techniques include aspect ratio standardizing …"),
KHANZADA teaches the following limitation(s) not taught by the combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ:
normalizing bitrate associated with audio streams ([0042] "In some embodiments, multiple features from the crowdsourced datasets may be used to train the models. ... Each voice audio file may be resampled to 22.5 kHz …"), and
ASAF teaches the following limitation(s) not taught by the combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ:
removing redundant information in the communication data ([0071] "... Text input 157 is pre-processed 158 by performing tokenization, ...; after which duplicates are removed, repeated letters and alphanumeric inputs are normalized 159 …").
Standardizing the aspect ratio of video data, standardizing the bitrate of audio data, and removing redundant data from a dataset used in ML analysis are well-known techniques in the art, as demonstrated by HA, KHANZADA, and ASAF, respectively. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the detection of deviations in user behavior of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ with the data standardization of each of HA, KHANZADA, and ASAF with the motivation to use a standardized format for training and using the ML model. Normalization or standardization of data in this context is well-known in the art and considered an inherent step in training of an ML model.
Regarding claims 10 and 17:
These claims are rejected with the same justification, mutatis mutandis, as their counterpart claim 3 above.
Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over WELDEMARIAM et al (Doc ID US 20200387603 A1), ZHAO et al (Doc ID US 20250208000 A1), DESHPANDE et al (Doc ID US 11348601 B1), and RODRIGUEZ (Doc ID US 20250200173 A1) as applied to claims 1, 8, and 15 above, and further in view of BAO (Doc ID US 10715604 B1).
Regarding claim 5:
The combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ teaches:
The system of claim 1,
BAO teaches the following limitation(s) not taught by the combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ:
wherein correlating findings across communication modalities includes analysis of physical characteristic data to determine the identity verification of the user (Col 8 lines 61-63 "The vision component 408 can perform facial recognition or image analysis to determine an identity of a user ...").
Identifying a user through their physical characteristics is a known technique in the art, as demonstrated by BAO. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the detection of deviations in user behavior of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ with the facial recognition of BAO with the motivation to provide identification of a user as part of a verification of their identity and intent.
Regarding claims 12 and 19:
These claims are rejected with the same justification, mutatis mutandis, as their counterpart claim 5 above.
Claims 6, 7, 13, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over WELDEMARIAM et al (Doc ID US 20200387603 A1), ZHAO et al (Doc ID US 20250208000 A1), DESHPANDE et al (Doc ID US 11348601 B1), and RODRIGUEZ (Doc ID US 20250200173 A1) as applied to claims 1, 8, and 15 above, and further in view of THAMPY (Doc ID US 20190068627 A1).
Regarding claim 6:
The combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ teaches:
The system of claim 1,
deploy the machine learning model across the multiple communication modalities, wherein the machine learning model determines real-time misappropriation detection (WELDEMARIAM [0064] "... a context module 204 may receive as input user's cognitive and physical signals 202. The context module 204 may also receive context information such as current location of a user and/or a device, image, voice or textual information …").
THAMPY teaches the following limitation(s) not taught by the combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ:
wherein the system is further configured to: tune the machine learning model by adjusting algorithm parameters and integrating new data sets for training the machine learning model ([0131] "After one or more flagged events ..., the information can be provided back to one or more machine learning algorithms to automatically modify parameters of the system."); and
Tuning an ML model to refine its parameters is a known technique in the art, as demonstrated by THAMPY. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the detection of deviations in user behavior of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ with the ML tuning of THAMPY with the motivation to ensure that new data is incorporated in the ML model’s training so that accurate results will continue to be output as situations with users evolve over time.
Regarding claim 7:
The combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ teaches:
The system of claim 1,
THAMPY teaches the following limitation(s) not taught by the combination of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ:
The system of claim 1, wherein the system is further configured to: generate an alert for the potential misappropriation event comprising details of detected anomalies and suggestions for subsequent actions ([0134] "… a recommendation engine tracks user activity for anomalous behavior …. An alarm can be sounded with details of the event and recommendations for remediation.").
Generating an alert when a deviation in user behavior is detected is a known technique in the art, as demonstrated by THAMPY. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the detection of deviations in user behavior of WELDEMARIAM, ZHAO, DESHPANDE, and RODRIGUEZ with the user behavioral deviation alert of THAMPY with the motivation to ensure that system administrators are aware of the deviation in behavior and so they can take any necessary steps to ameliorate the behavior. This is obvious where a system does not otherwise automatically institute other mitigating actions.
Regarding claims 13, 14, and 20:
These claims are rejected with the same justification, mutatis mutandis, as their counterpart claims 6 and 7 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
LIN et al (Doc ID WO 2024080970 A1) teaches a very similar invention that collects and analyzes the same information about a user (facial expressions, typing characteristics, and speech). However, LIN performs the analysis in the context of attempting to alter the user’s emotional state, rather than to detect a deviation from past behavior.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON BINCZAK whose telephone number is (703)756-4528. The examiner can normally be reached M-F 0800-1700.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor can be reached on (571) 270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BB/Examiner, Art Unit 2437
/BENJAMIN E LANIER/Primary Examiner, Art Unit 2437