Prosecution Insights
Last updated: April 19, 2026
Application No. 18/633,085

DISPLAY DEVICE AND METHOD FOR CONTROLLING THE SAME

Non-Final OA: §101, §102, §103
Filed
Apr 11, 2024
Examiner
HAUK, EMILY ROSE
Art Unit
2669
Tech Center
2600 — Communications
Assignee
Samsung Electronics Co., Ltd.
OA Round
1 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (2 granted / 2 resolved; +38.0% vs TC avg, above average)
Interview Lift: +100.0% (resolved cases with interview)
Typical timeline: 2y 9m avg prosecution; 8 currently pending
Career history: 10 total applications across all art units

Statute-Specific Performance

§101: 27.3% (-12.7% vs TC avg)
§103: 45.5% (+5.5% vs TC avg)
§102: 9.1% (-30.9% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 2 resolved cases
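The headline percentages in the panels above follow directly from the underlying counts; a minimal sketch of that arithmetic (the helper and variable names are illustrative, not part of the source data):

```python
# Recomputing the dashboard's examiner statistics from the raw counts shown
# above (2 granted out of 2 resolved, +38.0% vs the Tech Center average).

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(2, 2)   # 100.0
tc_avg = career - 38.0      # implied Tech Center average: 62.0

print(f"Career allow rate: {career:.1f}%")
print(f"Implied TC average: {tc_avg:.1f}%")
```

With only two resolved cases, these rates rest on a very small sample, which is worth keeping in mind when reading the statute-specific figures.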

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-15 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception, in particular an abstract idea falling under the mental processes grouping, without significantly more. The judicial exception is not integrated into a practical application because the additional limitations provide insignificant extra-solution activity and simply implement the abstract idea on generically recited computer elements. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because receiving, transmitting, and presenting data are well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP 2106.05(d).

Step 1: The claims in question are directed primarily to an apparatus/system. The corresponding apparatus is congruent in scope and understood to be directed to a machine for the purposes of analysis at Step 1. (Step 1: YES)

Step 2A, Prong One: Step 2A Prong One of the eligibility analysis evaluates whether the claim recites a judicial exception (a law of nature, a natural phenomenon, or an abstract idea). MPEP 2106.04, subsection II, states a claim "recites" a judicial exception when the judicial exception is "set forth" or "described" in the claim.
Claim 1 at a high level recites the use of a display (element A), communication circuitry (element B), a memory (element C), a processor (element D), displaying an image based on received information (element E), identifying feature points and an omega shape (element F), identifying whether there is a predetermined area with biometric information in the image (element G), and adjusting display parameters (element H). Element F recites identifying feature points and the shape of a patient, which encompasses evaluating and providing a judgment that could practically be performed in the human mind. Element G recites identifying whether an image contains an area with biometric information, which encompasses evaluating and providing a judgment that could practically be performed in the human mind. Element H recites using a previous judgment to adjust a display parameter, which could be performed in the human mind. Such mental observations, evaluations, and judgments fall under the mental processes grouping (concepts performed in the human mind, including observations, evaluations, judgments, and opinions). For example, a medical professional could look at an image of a patient and identify feature points of the face and the omega shape of the patient as the head-and-shoulder shape; the medical professional could then use the features and shape of the patient to identify an area of the patient to observe and understand the biometric information associated with that area; and lastly, based on the judgments of feature points, omega shape, and detection area, the medical professional could adjust the display or image to better perform their medical evaluations.

Claim 2 at a high level includes identifying a center pole, a right side of the patient, and a left side of the patient, determining whether the left feature point is on the left side of the patient, determining whether the right feature point is on the right side of the patient, and mapping feature points to corresponding opposite-side feature points.
Claim 2 encompasses observation and judgment to observe the patient and judge areas and feature points of the face, all of which could be performed by a medical professional evaluating an image of a patient and determining or mapping regions and feature points onto the image to determine where each region of the patient is and using those regions to map feature points on opposite sides of the image.

Claim 3 at a high level recites transmitting information to the external device based on identifying whether the image is missing a feature point or the omega shape. The element of identifying whether the image is missing a feature point or the omega shape encompasses a judgment that could be performed by the human mind; for example, a medical expert could identify from an image that feature points are missing from the image.

Claim 4 at a high level recites adjusting the display based on information received. Adjusting the display based on information encompasses an evaluation and a judgment that could be performed by the human mind, as when a medical expert evaluates an image and changes settings or parameters to improve the evaluation quality.

Claim 5 at a high level recites transmitting information based on information received, which encompasses judgment. Claim 6 at a high level recites identifying the brightness of the image and adjusting the brightness, which encompasses evaluation and judgment. Claim 7 at a high level recites receiving data, detecting biometric information, obtaining a diagnosis, and transmitting the result. Detecting biometric information and obtaining a diagnosis encompass observation, evaluation, and judgment, all of which could be completed by a medical professional. Claim 8 at a high level recites identifying whether the image is backlit and adjusting the display parameter, which encompasses observation, evaluation, and judgment that could be performed by the human mind.
Claim 9 at a high level recites receiving data, identifying illuminance levels, identifying whether the image is backlit, and transmitting data. Identifying the illuminance level and identifying whether the image is backlit encompass observation, evaluation, and judgment of the lighting of the image that could be performed in the human mind. Claim 10 at a high level recites that the feature points comprise the eyes, mouth, and nose, and the detection area is the orbital area and infraorbital area of the face, which encompasses observation and evaluation for the segmenting of the image into feature points and areas that could be done in the human mind, for example by a medical professional identifying facial anatomy. Claim 11 at a high level recites determining a coefficient based on received data and transmitting data. Determining a coefficient encompasses evaluation that could be performed by the human mind. Claims 12-15 similarly contain the mental processes of observation, evaluation, judgments, and opinions. (Step 2A, Prong One: YES)

Step 2A, Prong Two: Step 2A, Prong Two of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is "directed to" the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d). "Additional elements" are generally features, limitations, or steps recited in the claim beyond the judicial exception. MPEP 2106.05 lists considerations that indicate whether or not there is integration.
MPEP 2106.05(a) improvements, (b) particular machine, (c) particular transformation, and (e) other meaningful limitations generally concern limitations that are indicative of integration, whereas 2106.05(d) well-understood, routine, conventional activity, (f) mere instructions to apply, (g) insignificant extra-solution activity, and (h) field of use generally concern limitations that are not indicative of integration.

Element E recites "receiving" and displaying information, which amounts to mere data gathering and presenting at a high level of generality, and thus is insignificant extra-solution activity. MPEP 2106.05(g) states that insignificant extra-solution activity is generally understood as activities incidental to the primary process or product that are merely nominal or tangential additions to the claim (examples include transmitting, storing, and outputting information). Elements A-D recite a display, communication circuitry, a memory, and a processor. A display, communication circuitry, and a memory and processor with instructions, recited at a high level of generality, amount to no more than instructions to implement an abstract idea on generic computer elements. The computer elements (elements A-D) are generally applied to the abstract ideas without limiting the computer elements. Claims 3, 5, 7, 9, 12, 13, and 15 similarly recite receiving and transmitting data, which amount to mere data gathering and presenting at a high level of generality and thus are insignificant extra-solution activity. Claims 2-9 and 11-13 recite the use of a processor, memory, and/or communication circuitry, which is recited at a high level of generality and amounts to no more than generic computer elements used to implement abstract ideas. Claim 14 does not include additional elements beyond the abstract ideas of mental processes.
The additional elements that are present do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claims are directed to the judicial exception. (Step 2A: YES)

Step 2B: Step 2B of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. As explained in Step 2A, claim 1 contains additional elements that relate to the acquiring of data, and the processor and memory are merely applied to the judicial exception. The considerations of Step 2A Prong Two and Step 2B overlap, but differ in that Step 2B requires consideration of the claim as a combination of limitations; see MPEP 2106.05, subsection I.A. At Step 2B, the evaluation of the insignificant extra-solution activity takes into account whether or not the activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). The recitations of receiving, presenting, and transmitting images or data in claims 1, 3, 5, 7, 9, 12, 13, and 15 are recited at a high level of generality and amount to receiving data, which is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. The limitations remain insignificant extra-solution activity even upon reconsideration. Claims 2-9 and 11-13 recite the use of generic computer elements without additional elements beyond insignificant extra-solution activity. Claim 14 does not include additional elements beyond the abstract ideas of mental processes. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. (Step 2B: NO)
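The Step 1 through Step 2B walkthrough above follows a fixed decision flow; a compact sketch of that flow as a decision procedure (function and flag names are illustrative, not MPEP terminology):

```python
def eligible_101(statutory_category: bool,
                 recites_exception: bool,
                 integrates_practical_application: bool,
                 adds_inventive_concept: bool) -> bool:
    """Sketch of the eligibility flow applied in this Office Action."""
    if not statutory_category:            # Step 1: process/machine/manufacture/composition?
        return False
    if not recites_exception:             # Step 2A, Prong One: recites a judicial exception?
        return True
    if integrates_practical_application:  # Step 2A, Prong Two: practical application?
        return True
    return adds_inventive_concept         # Step 2B: significantly more?

# The examiner's findings for claims 1-15: machine (yes), mental process (yes),
# no integration (no), no inventive concept (no) -> ineligible.
print(eligible_101(True, True, False, False))  # False
```

Under this framing, an applicant response can target any one of the three later steps: argue the claims do not recite a mental process, argue integration under Prong Two, or argue an inventive concept at Step 2B.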
Claim Rejections - 35 U.S.C. § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 4-5, and 12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kim, US 20170111569 (hereinafter "Kim").

Regarding claim 1, Kim teaches a display device comprising (see paragraph 0045, the electronic device 100 may show an image through a display): a display (see paragraph 0046 and Figure 1, display 170); communication circuitry configured to communicate with an external device (see paragraph 0142, the input/output interface may transfer instructions or data from an external device to other elements); at least one memory storing one or more instructions (see paragraph 0053 and Figure 1, memory 130 may store instructions); and at least one processor configured to execute the one or more instructions (see paragraph 0053 and Figure 1, processor 150 executes the instructions), wherein the one or more instructions, when executed by the at least one processor, cause the display device to (see paragraph 0053, the processor executes the instructions to perform the function associated with face detection): display an image on the display based on image information received through the communication circuitry (see paragraph 0142-0143 and Figure 14, the input/output interface [communication
circuitry] may transfer data input from the external device to other elements of the electronic device 1401, including display 1460, which displays images), [image: media_image1.png] identify, in the image, one or more feature points of a patient and an omega shape of the patient (see paragraph 0077 and 0079, the feature point extracting unit may extract corner or boundary points of each object [interpreted as the patient] as feature points from the image; the shape detection unit may determine whether an omega shape corresponding to a face shape of a person is present), identify, based on the one or more feature points and the omega shape, whether the image includes a predetermined detection area associated with biometric information of the patient (see paragraph 0081 and 0084, the face detection unit 159 determines whether a face of a person [predetermined detection area associated with biometric information] is present in the image through the use of a pattern of feature points and a specified shape [omega shape]), and based on identifying that the image includes the predetermined detection area, adjust a display parameter of at least one of the display or the image (see paragraph 0082, the exposure configuration unit may set the aperture value, shutter speed, and sensitivity of the image sensor [changing the exposure configuration inherently changes the parameters of the image] if the specified shape is detected and based on the state of the feature points).
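The claim 1 logic mapped onto Kim above (find feature points and an omega shape, check for the predetermined detection area, then adjust a display parameter) can be illustrated with a toy sketch; the coordinates, region, and adjustment factor are invented for illustration and are not drawn from Kim or the claims:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def contains_detection_area(feature_points: List[Point],
                            omega_detected: bool,
                            area: Tuple[float, float, float, float]) -> bool:
    """Toy check: treat the predetermined detection area as present when the
    omega shape was found and at least one feature point falls inside the area."""
    if not omega_detected:
        return False
    x0, y0, x1, y1 = area
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x, y in feature_points)

def adjust_display(brightness: float, area_present: bool) -> float:
    """Bump a display parameter only when the detection area is identified."""
    return brightness * 1.2 if area_present else brightness

points = [(0.4, 0.3), (0.6, 0.3)]  # e.g., eye endpoints in normalized coordinates
present = contains_detection_area(points, True, (0.3, 0.2, 0.7, 0.5))
print(adjust_display(100.0, present))  # 120.0
```

The anticipation dispute, in other words, centers on whether Kim's face-presence check is the same operation as the claimed detection-area check, not on the display adjustment itself.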
Regarding claim 4, Kim teaches the display device of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the display device to adjust the display parameter of the image based on illuminance information received through the communication circuitry and predetermined reference illuminance information (see paragraph 0190, when the processor executes the instructions, the processor performs the methods and functions of the electronic device; see paragraph 0082-0083, the adjusting of the exposure configuration unit [display parameters] based on a luminance value of the image from the input/output interface [see paragraph 0142, the input/output interface may transfer instructions or data from an external device to other elements]).

Regarding claim 5, Kim teaches the display device of claim 1, wherein the one or more instructions, when executed by the at least one processor (see paragraph 0087, when the processor executes the instructions, the processor performs the methods and functions of the electronic device), further cause the display device to control the communication circuitry to transmit environment guide information about illuminance adjustment to the external device (see paragraph 0142, the input/output interface may output data received from other elements of the device to an external device), based on illuminance information received through the communication circuitry and predetermined reference illuminance information (see paragraph 0083, the use of the exposure configuration unit, which includes the changing of exposure based on a luminance value of an image [the image is obtained through the input/output interface, paragraph 0142] compared to a specific level [predetermined reference luminance]).

Claim 12 recites a method analogous to the apparatus of claim 1 and is therefore analyzed and rejected similarly to claim 1.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Chul, KR 20190135598 (hereinafter "Chul").

Regarding claim 2, Kim teaches the display device of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the display device to (see paragraph 0190, when the processor executes the instructions, the processor performs the methods and functions of the electronic device): identify a center pole of the image (see paragraph 0086, the classification of the image into regions including a center region), identify, in the image, a left side of the patient and a right side of the patient based on the (see paragraph 0130, the face image is classified into a right-side surface and a left-side surface as part of face detection [determined by whether a pattern corresponding to a face is present (omega shape, 0079), paragraph 0084-0085] and determining the corresponding region [regions determined based on where the feature points are in the detection region, paragraph 0099] of face detection), identify whether a (see paragraph 0099, determine a region in which feature points are present), identify whether a (see paragraph 0099, determine a region in which feature points are present), and Kim does not teach identify sides of the patient based on the center pole, identify whether a left feature point is on the left
side of the patient, identify whether a right feature point is on the right side of the patient, and based on identifying that only one of the left feature point and the right feature point is included in the one or more feature points, map a feature point from among the left feature point and the right feature point that is included in the one or more feature points to a corresponding opposite side of the image.

Chul teaches identify, in the image, a left side of the patient and a right side of the patient based on the center pole (see page 7 paragraph 6 and Figure 8, the use of a reference line that is a straight line including the center axis), [images: media_image2.png, media_image3.png] identify a left feature point on the left side (see page 7 paragraph 6, the dividing of each feature point into a left face group), identify a right feature point on the right side of the patient (see page 7 paragraph 6, the dividing of each feature point into a right face group), and based on identifying that only one of the left feature point and the right feature point is included in the one or more feature points, map a feature point from among the left feature point and the right feature point that is included in the one or more feature points to a corresponding opposite side of the image (see page 7 paragraph 6 and Figure 8, the feature point pair, which includes a feature point belonging to the left face group and a feature point belonging to the right face group, with a feature point of the right face group corresponding to a feature point in the left face group, as mapped in Figure 8).

Kim and Chul are analogous art because they are from the same field of endeavor of a device to analyze the face of a user through an image, which may come from a variety of environments, by the process of feature point extraction for analysis.
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify Kim to identify right and left feature points and map feature points as taught by Chul. The motivation for doing so would have been to allow for comparing the right and left sides of the user (Chul, page 3 paragraph 10).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Sung, KR 20210025847 (hereinafter "Sung").

Regarding claim 3, Kim teaches the display device of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the display device to (see paragraph 0190, when the processor executes the instructions, the processor performs the methods and functions of the electronic device): based on identifying that at least one of the omega shape or the one or more feature points of the patient is not included in the image, control the communication circuitry to transmit, to the external device, environment guide information (see paragraph 0082-0083, the determination of whether there are fewer feature points than specified, and changing the exposure configuration; see paragraph 0142, the input/output interface may output data to an external device from other elements of the electronic device [exposure configuration 157, see Fig. 1 and 4]). Kim does not teach that the environment guide information comprises information related to an adjustment of a location of the patient and a posture of the patient.
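The opposite-side mapping attributed to Chul under claim 2 above amounts to reflecting a detected feature point across the center line when its counterpart is missing; a toy sketch (coordinates and names are illustrative):

```python
from typing import Optional, Tuple

Point = Tuple[float, float]

def mirror_across_center(point: Point, center_x: float) -> Point:
    """Reflect a feature point across a vertical center line (the 'center pole')."""
    x, y = point
    return (2 * center_x - x, y)

def map_missing_side(left: Optional[Point], right: Optional[Point],
                     center_x: float) -> Tuple[Point, Point]:
    """When only one of a left/right feature-point pair was detected,
    synthesize the opposite-side point by mirroring, in the spirit of
    Chul's left/right face groups and feature-point pairs."""
    if left is not None and right is None:
        return left, mirror_across_center(left, center_x)
    if right is not None and left is None:
        return mirror_across_center(right, center_x), right
    if left is not None and right is not None:
        return left, right
    raise ValueError("no feature point detected on either side")

print(map_missing_side((0.35, 0.4), None, 0.5))  # ((0.35, 0.4), (0.65, 0.4))
```

This assumes approximate facial symmetry about the center line, which is also the premise behind comparing the two sides of the face.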
Sung teaches control the communication circuitry to transmit, to the external device, environment guide information (see page 9 paragraph 11, the use of the controller 140 [which has a direct connection to the communication unit 110 to transmit data, page 6 paragraph 3] to display the photographing guide information [environment guide information]), [image: media_image4.png] wherein the environment guide information comprises information related to an adjustment of a location of the patient and a posture of the patient (see page 9 paragraph 11-12, the photographing guide information may be displayed in cases concerning the user's posture and may contain a guide for adjusting the position or direction of the face).

Kim and Sung are analogous art because they are from the same field of endeavor of a device for using an image of a user to determine biometric information. Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify Kim to adjust the position and posture of a user as taught by Sung. The motivation for doing so would have been to satisfy the condition for obtaining a basic user image by adjusting the user (Sung, page 9).

Claims 6 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Su, US 20180084989 (hereinafter "Su").

Regarding claim 6, Kim teaches the display device of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the display device to (see paragraph 0087, when the processor executes the instructions, the processor performs the methods and functions of the electronic device): based on identifying that a brightness of the image is non-uniform (see paragraph 0077, the use of the variation level of luminance values to process the image; the variation level of luminance is interpreted as non-uniform brightness).
Kim does not teach perform pre-processing on the image to make the brightness of the image uniform. Su teaches perform pre-processing on the image to make the brightness of the image uniform (see paragraph 0059, instructions may instruct to adjust the uniformity of image brightness of a single picture to form uniform brightness).

Kim and Su are analogous art because they are from the same field of endeavor of a device for using an image of a user and analyzing the illumination of the image for object detection. Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify Kim to adjust the image to have uniform brightness as taught by Su. The motivation for doing so would have been to aid in producing a single clear image (Su, paragraph 0059 and 0061).

Regarding claim 8, Kim teaches the display device of claim 1, wherein the one or more instructions, when executed by the at least one processor, further cause the display device to (see paragraph 0190, when the processor executes the instructions, the processor performs the methods and functions of the electronic device): based on identifying the omega shape and not identifying the one or more feature points, identify whether the image information was obtained in a backlit environment based on brightness information of the image information (see paragraph 0086, the processor includes a backlight determining unit which uses a luminance characteristic value to compare regions of the image; the regions are set by the shape detecting unit, which is based on detecting an omega shape, see paragraph 0080), and based on identifying that the image information was obtained in the backlit environment, adjust the display parameter of the image (see paragraph 0052, if it is determined that the image is in a backlight condition, the electronic device may change the exposure configuration).
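The backlight determination attributed to Kim in the claim 8 mapping above compares luminance between image regions; a toy illustration of that comparison (the regions, ratio threshold, and exposure step are invented for this sketch):

```python
# Toy backlight check in the spirit of Kim's backlight determining unit:
# a face region much darker than its surroundings suggests backlighting.
# The 0.5 ratio threshold and 1.5 exposure step are illustrative assumptions.

def mean(values):
    return sum(values) / len(values)

def is_backlit(face_luma, background_luma, ratio_threshold=0.5):
    """Flag backlighting when the face region's mean luminance falls below
    a fraction of the background region's mean luminance."""
    return mean(face_luma) < ratio_threshold * mean(background_luma)

def adjust_exposure(exposure, backlit, step=1.5):
    """Raise the exposure parameter when a backlit condition is detected."""
    return exposure * step if backlit else exposure

face = [40, 45, 50]           # dark face pixels
background = [200, 210, 220]  # bright window behind the subject
print(is_backlit(face, background))  # True
print(adjust_exposure(1.0, True))    # 1.5
```

The claim 8 dispute then narrows to the added Su limitation that follows, not to this region comparison itself.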
Kim does not teach perform pre-processing on the image to make a brightness of the image uniform. Su teaches perform pre-processing on the image to make a brightness of the image uniform (see paragraph 0059, instructions may instruct to adjust the uniformity of image brightness of a single picture to form uniform brightness).

Claims 7, 10, 11, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Kitajima, US 20220300350 (hereinafter "Kitajima").

Regarding claim 7, Kim teaches the display device of claim 1, wherein the at least one memory further stores reference biometric information of the patient (see paragraph 0160 and Figure 15, the electronic device includes a biometric sensor 1540I and memory 1530 that are connected through the processor), and wherein the one or more instructions, when executed by the at least one processor (see paragraph 0190, when the processor executes the instructions, the processor performs the methods and functions of the electronic device), detect the biometric information of the patient (see paragraph 0160, the inclusion of a biometric sensor in the sensor module), control the communication circuitry to transmit the obtained (see paragraph 0142, the input/output interface may output data from other elements of the electronic device to the external device). Kim does not teach display device to, based on receiving medical care request information through the communication circuitry: detect the biometric information of the patient from the predetermined detection area, obtain diagnosis result information based on the detected biometric information and the stored reference biometric information, and transmit the obtained diagnosis result information.
Kitajima teaches display device to, based on receiving medical care request information through the communication circuitry (see paragraph 0022-0023, the user selects the vital sign measurement application [medical care request information], which the system controller executes [the system controller is a processor that receives the data through the communication unit, Figure 1]): detect the biometric information of the patient from the predetermined detection area (see paragraph 0022-0023, measuring biological information [which can include vital signs] from the image of the user, which includes acquisition regions [paragraph 0050]), obtain diagnosis result information based on the detected biometric information and the stored reference biometric information (see paragraph 0022, the biological information is analyzed, and the measurements are displayed on the display unit [image memory 106 stores image data of the display unit 109, paragraph 0020]), and transmit the obtained diagnosis result information (see paragraph 0023 and Figure 2B, the displaying of the measurements).

Kim and Kitajima are analogous art because they are from the same field of endeavor of a face detection device using detection areas and illumination suitability. Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to modify Kim to detect biological information when a request for medical care is received as taught by Kitajima. The motivation for doing so would have been to allow the user to select various applications of biological information to collect (Kitajima, paragraph 0037).

Regarding claim 10, Kim teaches the display device of claim 1.
Kim teaches the one or more feature points comprise one or more feature points of a face of the patient (see paragraph 0077 and 0102, the feature point extracting unit may extract corner or boundary points of each object as the feature points of the image; feature points may be included in the eyes, a nose, and a mouth of a face), wherein the one or more feature points of the face comprise a pair of left eye endpoints, a pair of right eye endpoints, a pair of mouth endpoints, and a nose tip point (see paragraph 0077 and Figure 11, the feature point extracting unit may extract corner or boundary points of each object as the feature points of the image; feature points may be included in the eyes, a nose, and a mouth of a face. Figure 11 shows the first image 1110 with feature points 1101, which include multiple feature points along the outlines of the right and left eyes and the mouth [pairs of feature points], and additionally shows a feature point on the nose.), and wherein the predetermined detection area comprises an orbital area of the face (see paragraph 0102, the detection of feature points in a region corresponding to the eyes [orbital region]). Kim does not teach the predetermined detection area comprises an orbital area of the face and an infraorbital area of the face. Kitajima teaches the predetermined detection area comprises an orbital area of the face and an infraorbital area of the face (see paragraph 0051-0052 and Figure 5, the setting of facial region frames around the right 501 and left cheek 502 [infraorbital area of the face], which are used for evaluation).

Regarding claim 11, Kim teaches the display device of claim 1.
Kim teaches that the one or more instructions, when executed by the at least one processor, further cause the display device to (see paragraph 0190, when the processor executes the instructions, the processor performs the methods and functions of the electronic device) (see paragraphs 0089 and 0094, the obtaining of a first image using a first exposure configuration and the obtaining of a second image using a second exposure configuration [changing the exposure directly changes the illumination]), the first image information and the second image information being received from the external device (see paragraph 0142, the input/output interface transfers data input from an external device), and transmit illuminance information (see paragraph 0142, the input/output interface transfers data from elements of the electronic device [including an illumination sensor, Figure 15] to an external device).

Kim does not teach obtaining a skin reflection coefficient of the patient, and illuminance information corresponding to the skin coefficient. Kitajima teaches obtaining a skin reflection coefficient of the patient (see paragraph 0061, the use of specular reflection based on the ratio of specular reflection pixels in the facial region, presented as a value), and illuminance information corresponding to the skin coefficient (see paragraph 0065, environmental suitability information that includes illumination values and specular reflection values).

Claim 15 is a method analogous to the apparatus of claim 11, and thus is analyzed and rejected similarly to claim 11.

Examiner's note: Please note that no prior art rejection has been made for claims 9, 13, and 14.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see the attached 892 Notice of References Cited.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMILY R. HAUK, whose telephone number is (571) 272-5966.
The examiner can normally be reached M-F 8:00-5:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EMILY HAUK/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669

Prosecution Timeline

Apr 11, 2024
Application Filed
Mar 16, 2026
Non-Final Rejection — §101, §102, §103 (current)


Prosecution Projections

1-2
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+100.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
