DETAILED ACTION
This Office action is based on the claim set filed on 05/27/2025.
Claims 1 and 8 have been amended.
Claims 2-7 have been canceled.
Claims 9-12 are new.
Claims 1 and 8-12 are currently pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 05/27/2025 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Step 1
Claims 1 and 8-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1 and 9-10 are drawn to an apparatus (system), and Claims 8 and 11-12 are drawn to a method, each of which is within the four statutory categories (i.e., a machine and a process). Claims 1 and 8-12 are further directed to an abstract idea on the grounds set out in detail below.
Under Step 2A, Prong One, the claimed invention recites an abstract idea: a series of steps that recite a process for assisting a psychological state of a subject. This abstract idea could be performed by a human but for the fact that the claims recite a general-purpose computer processor and computing components to implement it. The steps recite a process directed to determining a psychological state by analyzing the subject's interaction with computing devices, such that both the instant claims and the abstract idea fall within the grouping of certain methods of organizing human activity.
Independent Claim 1 recites the steps:
“an examinee VR terminal, which is a head-mounted display device configured to display VR contents for a psychological test to an examinee,
wherein the VR contents comprises a drawing screen, and the examinee VR terminal is configured to chase a gaze movement of the examinee, and when the examinee moves a gaze, provide the drawing screen in a corresponding direction and space, that is, in front of the examinee, and
wherein the examinee VR terminal comprises a hand glove motion sensor, and the examinee VR terminal is further configured to receive a user input response for the drawing screen through the hand glove motion sensor by chasing a movement of the hand glove motion sensor, and display a line in a movement direction of the chased movement on the drawing screen
a biosignal measurement sensor, which is a wearable device configured to measure biometric signals of the examinee during the psychological test of the examinee
an apparatus comprising a processor
wherein the processor is configured to:
provide, to the examinee VR terminal, the VR contents, which are for experiencing a standardized psychological test tool according to a type of the psychological test, wherein the standardized psychological test tool comprises a house-tree-person (HTP) drawing test that provides a test scene in front of the examinee through the examinee VR terminal
progress the psychological test through a user interaction with the VR contents by playing the VR contents on the examinee VR terminal and receiving a user interaction with the VR contents
receive the biometric signals from the biosignal measurement sensor, wherein the biometric signals comprise at least one biometric data of a heart rate, a body temperature, or a respiratory rate of the examinee;
collect test data of the standardized psychological test tool, of the examinee in association with the biometric signals, and determine, through a biometric change, a particular part of a test item of the test data of the standardized psychological test tool, that causes an emotional response;
determine a psychological state of the examinee by analyzing the collected test data of the standardized psychological test tool with the biometric signals, through deep learning algorithm and reflecting the test item that causes the emotional response through changes of the collected biometric data, by using an artificial intelligence
wherein the processor is further configured to:
generate a test record sheet and a test result sheet of the examinee; and
transmit an access link of the test record sheet and the test result sheet to the examinee VR terminal by sending an email or a message to the examinee VR terminal.”
Independent Claim 8 recites the steps:
“providing, by the processor, to the examinee VR terminal, VR contents for a psychological test to an examinee, wherein the VR contents are for experiencing a standardized psychological test tool according to a type of the psychological test, the standardized psychological test tool comprises a house-tree-person (HTP) drawing test that provides a test scene in front of the examinee through the examinee VR terminal,
displaying, by the examinee VR terminal, the VR contents;
progressing, by the processor, the psychological test by playing the VR contents on the examinee VR terminal and receiving a user interaction with the VR contents,
wherein the VR contents comprises a drawing screen, and the examinee VR terminal chases a gaze movement of the examinee, and when the examinee moves a gaze, provides the drawing screen in a corresponding direction and space, that is, in front of the examinee, and
wherein the examinee VR terminal comprises a hand glove motion sensor, and the examinee VR terminal receives a user input response for the drawing screen through the hand glove motion sensor by chasing a movement of the hand glove motion sensor, and displays a line in a movement direction of the chased movement on the drawing screen;
measuring, by the biosignal measurement sensor, biometric signals of the examinee during the psychological test of the examinee, wherein the biometric signals comprise at least one biometric data of a heart rate, a body temperature, or a respiratory rate of the examinee;
receiving, by the processor, the biometric signals from the biosignal measurement sensor;
collecting, by the processor, test data of the standardized psychological test tool of the examinee in association with the biometric signals;
determining, by the processor, through a biometric change, a particular part of a test item of the test data of the standardized psychological test tool, that causes an emotional response;
determining a psychological state of the examinee by analyzing the collected test data of the standardized psychological test tool with the biometric signals, through deep learning algorithm and reflecting the test item that causes the emotional response through changes of the collected biometric data, by using an artificial intelligence
generating, by the processor, a test record sheet and a test result sheet of the examinee; and
transmitting, by the processor, an access link of the test record sheet and the test result sheet to the examinee VR terminal by sending an email or a message to the examinee VR terminal”.
These limitations, as drafted and given their broadest reasonable interpretation, cover performance of the limitations by a human user/actor interacting with computing device(s), which constitutes certain methods of organizing human activity. For example, the limitations encompass a user manually providing test data on a device, and the received user input being used to determine a psychological state of the user. These steps encompass the activity of a single person or multiple people interacting with other users and with computing system(s) to perform the steps of the claimed invention, which constitutes Certain Methods of Organizing Human Activity. Accordingly, the claim limitations recite an abstract idea. Any limitations not identified above as part of the process are deemed "additional elements" and will be discussed in further detail below.
Under Step 2A, Prong Two, this judicial exception is not integrated into a practical application because the remaining elements amount to no more than general-purpose computer components programmed to perform the abstract idea, linking the abstract idea to a particular technological environment. In particular, the claims recite additional elements such as "artificial intelligence, virtual reality (VR) terminal, processor, biosignal sensor, hand glove motion sensor, drawing screen, wearable device, deep learning," which are disclosed at a high level of generality and include known hardware components to perform the steps (e.g., playing content on the VR terminal/displaying VR content, receiving a user interaction on the VR terminal) that implement the identified abstract idea. This amounts to no more than adding the words "apply it" (or an equivalent), i.e., mere instructions to "apply" the exception using a generic computer component, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, see MPEP 2106.05(f); generally linking the use of the judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h), e.g., virtual reality (VR), and applying a ready-trained artificial intelligence model without disclosing any specific steps or process for training the artificial intelligence or how the artificial intelligence performs the steps of the claim, see (Applicant, para 23, 64, 65); and adding insignificant extra-solution activity to the judicial exception, i.e., transmit[ting]/send[ing], see MPEP 2106.05(d) and (g), Symantec, TLI Communications LLC, and OIP Techs.
Accordingly, looking at the claims as a whole, individually and in combination, these additional elements provide no integration of the abstract ideas into a practical application because they appear to merely automate a manual process, such that no meaningful limits on practicing the abstract idea are introduced and the computing elements are merely utilized as tools to perform the abstract ideas. The claim as a whole is therefore directed to an abstract idea.
Under Step 2B, the claims do not include additional elements that are sufficient to amount to "significantly more" than the judicial exception because, as mentioned above, the additional elements amount to no more than generic computing components, recited at a high level of generality, that amount to no more than mere instructions to perform the abstract idea, such that the claims amount to no more than adding the words "apply it" (or an equivalent) to apply the exception using generic computer components, see MPEP 2106.05(f); generally linking the use of the judicial exception to a particular technological environment or field of use, see MPEP 2106.05(h); and adding insignificant extra-solution activity to the judicial exception, i.e., transmit[ting]/send[ing], see MPEP 2106.05(d) and (g), Symantec, TLI Communications LLC, and OIP Techs. Their collective functions merely provide conventional computer implementation, and mere instructions to apply an exception using a generic computer component cannot provide an inventive concept, see Alice, 573 U.S. at 223 ("mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention."). The claims are not patent eligible.
Dependent Claims 9-12 include all of the limitations of claims 1 and 8, and therefore likewise incorporate the above-described abstract idea. The dependent claims add the following additional limitations:
As for claims 9 and 11, the claims recite limitations that, under the broadest reasonable interpretation, further define the abstract idea noted in the independent claims, covering performance by human interaction along with a mental process but for the recitation of generic computer components. The claims are similarly rejected because they merely further define the abstract idea and do not further limit the claims to a practical application or provide an inventive concept such that the claims would be subject matter eligible. The claims recite additional elements such as a "convolutional neural network (CNN)". The additional elements have been interpreted to be computing components with a general-purpose processor, disclosed at a high level of generality and including known hardware (or software) components, recited in the claims at such a high level that they amount to no more than mere instructions to "apply" the exception using a generic computer component, see MPEP 2106.05(f), and merely use the computer as a tool to perform the abstract idea, see MPEP 2106.05(h). Thus, the judicial exceptions recited in the claims are not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more").
As for claims 10 and 12, the claims recite limitations that, under the broadest reasonable interpretation, further define the abstract idea noted in the independent claims, covering performance by human interaction along with a mental process but for the recitation of generic computer components. The claims are similarly rejected because they merely further define the abstract idea and do not further limit the claims to a practical application or provide an inventive concept. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more").
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 8-12 are rejected under 35 U.S.C. 103 as being unpatentable over Hwan et al. (Google machine translation of KR101911516B1, "Hwan") in view of LEE HEE JUNG (Google machine translation of KR101772987B1, "Lee"), and further in view of Jones et al. (US 2020/0174553 A1, "Jones").
Regarding Claim 1 (Currently Amended), Hwan teaches a system for providing an artificial intelligence-based virtual reality VR psychological test service (Hwan: [Abs], [p. 6]), comprising:
an examinee VR terminal, which is a head-mounted display device configured to display VR contents for a psychological test to an examinee; Hwan discloses a method for obtaining psychological state data of a user via interaction with the user's virtual reality (VR) terminal (Hwan: [p. 1, 2])
wherein the VR contents comprises a drawing screen, and the examinee VR terminal is configured to chase a gaze movement of the examinee, and when the examinee moves a gaze, provide the drawing screen in a corresponding direction and space, that is, in front of the examinee, and
wherein the examinee VR terminal comprises a hand glove motion sensor, and the examinee VR terminal is further configured to receive a user input response for the drawing screen through the hand glove motion sensor by chasing a movement of the hand glove motion sensor, and display a line in a movement direction of the chased movement on the drawing screen
a biosignal measurement sensor, which is a wearable device configured to measure biometric signals of the examinee during the psychological test of the examinee Hwan discloses acquiring input data of a bio-signal measured by a sensor along with interaction with virtual reality (VR) data during a psychological test of a user (Hwan: [Abs], [p. 1- 2])
an apparatus comprising a processor (Hwan: [p. 2])
wherein the processor is configured to:
provide, to the examinee VR terminal, the VR contents, which are for experiencing a standardized psychological test tool according to a type of the psychological test, wherein the standardized psychological test tool comprises a house-tree-person (HTP) drawing test that provides a test scene in front of the examinee through the examinee VR terminal
progress the psychological test through a user interaction with the VR contents by playing the VR contents on the examinee VR terminal and receiving a user interaction with the VR contents; Hwan discloses determining whether the user's psychological state corresponds to a predetermined condition and displaying, through the display module of the user terminal, VR contents providing a psychological condition diagnostic item (Hwan: [p. 1, 5, 8])
receive the biometric signals from the biosignal measurement sensor, wherein the biometric signals comprise at least one biometric data of a heart rate, a body temperature, or a respiratory rate of the examinee; Hwan discloses acquiring input data of a bio-signal measured by a sensor, along with interaction with virtual reality (VR) data, for determining the psychological state of a user, where the measured bio-signal includes a heart rate measurement (Hwan: [Abs], [p. 1, 2, 4])
collect test data of the standardized psychological test tool, of the examinee in association with the biometric signals, and determine, through a biometric change, a particular part of a test item of the test data of the standardized psychological test tool, that causes an emotional response; Hwan discloses collecting biometric data to evaluate user stress and determine the psychological state of the user (Hwan: [p. 1-3])
determine a psychological state of the examinee by analyzing the collected test data of the standardized psychological test tool with the biometric signals, through deep learning algorithm and reflecting the test item that causes the emotional response through changes of the collected biometric data, by using an artificial intelligence; Hwan discloses evaluating the user's interaction with the VR content and the bio-signal to determine the user's psychological/mental state, where the evaluation process uses diagnostic test tools and an artificial intelligence that applies a deep-learning-based algorithm (Hwan: [p. 2-4, 8]).
wherein the processor is further configured to:
generate a test record sheet and a test result sheet of the examinee; Hwan discloses generating test and diagnosis results (Hwan: [p. 7-8]); and
transmit an access link of the test record sheet and the test result sheet to the examinee VR terminal by sending an email or a message to the examinee VR terminal; Hwan discloses that the service server receives the analysis result of the user's diagnostic data, transmits the analysis result to the user using conventional mail transmission or e-mail transmission, and may transmit a diagnostic result through a dedicated application installed in the personal terminal (Hwan: [p. 9]).
However, Hwan does not expressly disclose the features as underlined.
Lee teaches
contents comprises a drawing screen, (Lee: [Fig. 3, 5-7], [p. 4])
experiencing a standardized psychological test tool according to a type of the psychological test, wherein the standardized psychological test tool comprises a house-tree-person (HTP) drawing test that provides a test scene in front of the examinee through the examinee VR terminal; Lee discloses a picture psychological examination (HTP) [standardized psychological test tool] (Lee: [Fig. 3, 5-7], [p. 2, 3, 4])
test record sheet and a test result sheet; Lee discloses a screen to be printed comprising examinee data and results (Lee: [Fig. 2, 15-18])
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the virtual reality images for psychological counseling displayed in a consulting space in Hwan to incorporate a psychological test tool comprising a house-tree-person (HTP) drawing test, as taught by Lee, which allows an invisible psychological state to be objectively quantified or qualitatively analyzed and used as a means for diagnosing an individual's psychological symptoms (Lee: [p. 2]).
Jones teaches
wherein the VR contents comprises a drawing screen, and the examinee VR terminal is configured to chase a gaze movement of the examinee, and when the examinee moves a gaze, provide the drawing screen in a corresponding direction and space, that is, in front of the examinee; Jones discloses a hybrid-reality (HR) system that merges real-world imagery with computer-created imagery, using a virtual reality (VR) and augmented reality (AR) head-mounted display (HMD) system to present images, virtual objects, or scenes on a transparent screen of the HMD, where the HMD comprises an eye tracker measuring the user's gaze and uses the gaze direction as an input for interaction with virtual menus rendered on the display (Jones: [0020], [0041], [0062], [0068-0069])
wherein the examinee VR terminal comprises a hand glove motion sensor, and the examinee VR terminal is further configured to receive a user input response for the drawing screen through the hand glove motion sensor by chasing a movement of the hand glove motion sensor, and display a line in a movement direction of the chased movement on the drawing screen; Jones discloses that the HMD system comprises a glove motion sensor that tracks hand/finger movement provided as an input, where the HMD may display a line to indicate a movement direction (Jones: [0020], [0040-0041], [0056], [0064])
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the virtual reality images for psychological counseling displayed in a consulting space in Hwan and Lee to incorporate tracking the gaze and direction of a VR user and using a glove to track hand movement and direction, as taught by Jones, which provides feedback that mimics the movement of the user and improves the effectiveness of the system (Jones: [0049]).
Regarding Claim 2 (Canceled).
Regarding Claim 3 (Canceled).
Regarding Claim 4 (Canceled).
Regarding Claim 5 (Canceled).
Regarding Claim 6 (Canceled).
Regarding Claim 7 (Canceled).
Regarding Claim 8 (Currently Amended), Hwan teaches a method for providing an artificial intelligence based virtual reality VR psychological test service, performed by a system comprising an examinee VR terminal, which is a head-mounted display device, a biosignal measurement sensor, which is a wearable device, and an apparatus comprising a processor (Hwan: [Abs], [Fig. 1], [p. 1-4, 6]), the method comprising:
The claim limitations are analogous to the limitations in Claim 1. As such, claim 8 is rejected for substantially the same reasons given for claim 1, which are incorporated herein.
Regarding Claim 9 (New), the combination of Hwan, Lee, and Jones teaches the system according to claim 1, wherein the processor is further configured to, in determining the psychological state of the examinee, by using a convolutional neural network (CNN):
classify a top-level item from a drawing image recorded in the test record sheet;
classify next-level items from the classified top-level item;
classify last-level items from the classified next-level items; and
analyze the test data through a pre-learned model for each of the last-level items
Hwan discloses a deep learning platform for classifying and analyzing input data (Hwan: [p. 2, 4]). Lee discloses that, in order to recognize an image, the psychological testing server may use a classification method using a Convolutional Neural Network (CNN), with a hierarchical structure divided into three levels to recognize objects in order: topmost [top-level] items are first classified in the scanned image, then classified into next [next-level] items, and finally into the third [last-level] item classification (Lee: [p. 4]).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the virtual reality images for psychological counseling displayed in a consulting space in Hwan to incorporate a psychological test tool comprising a house-tree-person (HTP) drawing test analyzed using a CNN and classified into three levels of items, as taught by Lee, which allows an invisible psychological state to be objectively quantified or qualitatively analyzed and used as a means for diagnosing an individual's psychological symptoms (Lee: [p. 2]).
Regarding Claim 10 (New), the combination of Hwan, Lee, and Jones teaches the system according to claim 9, wherein the top-level item comprises a house shape, a tree shape, and a person shape, the next-level items comprise a window shape, a chimney shape, a thatched house shape, and a tile-roofed shape, and the last-level items comprise a number of windows, and a presence of bars in a window.
Lee discloses that, in order to recognize an image, the psychological testing server may use a classification method using a Convolutional Neural Network (CNN), with a hierarchical structure divided into three levels to recognize objects in order: the topmost [top-level] items, comprising a house shape, a wood shape, and a human shape, are first classified in the scanned image; these are then classified into next/second [next-level] items, comprising a house shape, a window shape, a chimney shape, a shape of a roof house, and a second house shape; and finally into the third [last-level] item classification, comprising how many windows there are and if there is a window on the window [bars] (Lee: [p. 4]).
The motivations to combine the above-mentioned references are discussed in the rejection of claim 9 and are incorporated herein.
Regarding Claims 11-12 (New), the claim limitations are analogous to the limitations in Claims 9-10. As such, claims 11-12 are rejected for substantially the same reasons given for claims 9-10, which are incorporated herein.
Response to Amendment/Argument
Applicant's arguments filed 05/27/2025 have been fully considered by the Examiner and are addressed as follows:
In the remarks, Applicant argues in substance:
Applicant's arguments with respect to the 35 U.S.C. § 101 rejection on pages 6-7.
On page 6 of the remarks, Applicant argues that "Independent claims 1 and 8 are amended to clarify that the claimed subject matter is directed to interworking operations between hardware devices and non-mental operations". Examiner respectfully disagrees. Although the claims, under the BRI, were not analyzed as mental operations, the claims were identified as directed to certain methods of organizing human activity, reciting a process for collecting and analyzing data and determining a psychological state, which is a process that can be implemented by a human interacting with computing device(s). This is confirmed by Applicant's own statement that "Applicant submits that the subject matter of amended independent claims 1 and 8 should be considered as the interworking operations between physical devices", which clearly describes a process of interaction between one or more users and devices that falls under the grouping of Certain Methods of Organizing Human Activity (e.g., interactions between people, and following rules or instructions), see MPEP 2106.04(a)(2)(II)(C). Additionally, the claims recite additional elements that are merely used as tools to perform the abstract idea, generally link the use of the judicial exception to a particular technological environment or field of use, and add insignificant extra-solution activity to the judicial exception, i.e., applying the exception using generic computer components and a mere data-gathering process that does not add a meaningful limitation to the above abstract idea; as such, the additional elements do not integrate the abstract idea into a practical application, see MPEP 2106.05(d), (f), (g), and (h).
On page 7 of the remarks, Applicant argues that "Applicant submits that the above features should be considered as additional elements that integrate the judicial exception into a practical application of the claimed subject matter, by providing a satisfactory artificial intelligence based VR psychological test service... Thus, accurate test results can be provided". Examiner respectfully disagrees. As mentioned above, the claims recite additional elements used as tool(s) to perform the abstract idea of determining a psychological state. Examiner asserts that neither the claims nor the specification discloses the additional elements providing any improvement to health care technology, to a computing system, or to any other technological field; rather, the disclosure describes a solution for treatment of acrophobia and posttraumatic stress disorder (PTSD) while using computing component(s) to implement the identified abstract idea.
Accordingly, Examiner maintains the § 101 rejections, which have been updated to address Applicant's amendments and remarks.
Applicant's arguments with respect to the 35 U.S.C. § 103 rejection on pages 8-10.
On page 9 of the remarks, Applicant argues that "The cited references fail to disclose the above features". Examiner respectfully disagrees. Although the reference Gwan discloses an HTP test, Examiner has introduced a new reference, "LEE HEE JUNG", describing in more detail the use of the HTP as a standardized psychological test tool.
Applicant further argues that "Gwan, Hwan and Jones, considered either alone or in combination, fails to disclose, suggest, or otherwise render obvious the features of 'generate a test record sheet and a test result sheet of the examinee; and transmit an access link of the test record sheet and the test result sheet to the examinee VR terminal by sending an email or a message to the examinee VR terminal'". Examiner respectfully disagrees. Although the feature(s) were disclosed in Gwan, the new reference "LEE HEE JUNG" describes the test record and test result sheets, and, because the transmission of the sheets by e-mail or message is a newly added feature, Examiner has cited a new section of Hwan teaching the newly argued limitation.
Hence, the Examiner finds Applicant's argument(s) moot.
Prior Art Cited but not Applied
The following document(s) were found relevant to the disclosure but not applied:
KR102234241B1 “KIM” discloses an image analysis system for analyzing a psychological state of an applicant based on receiving image data drawn on a preset subject from an applicant.
KR-102240485-B1 “KIM” discloses an image analysis system for analyzing a psychological state of an applicant based on receiving image data drawn on a preset subject from an applicant.
These references are relevant since they disclose the assessment of a user's psychological state using drawing images.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAAELDIN ELSHAER whose telephone number is (571)272-8284. The examiner can normally be reached M-Th 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MAMON OBEID can be reached at Mamon.Obeid@USPTO.GOV. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALAAELDIN M. ELSHAER/Primary Examiner, Art Unit 3687