Prosecution Insights
Last updated: April 19, 2026
Application No. 18/471,981

VARIABLE EFFECTS ACTIVATION IN AN INTERACTIVE ENVIRONMENT

Non-Final OA: §102, §103
Filed: Sep 21, 2023
Examiner: PARK, SANGHYUK
Art Unit: 2623
Tech Center: 2600 — Communications
Assignee: Universal City Studios LLC
OA Round: 3 (Non-Final)

Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
Grant Probability with Interview: 88%

Examiner Intelligence

Career Allow Rate: 71% (509 granted / 717 resolved; +9.0% vs TC avg; above average)
Interview Lift: +16.5% among resolved cases with an interview (a strong lift)
Typical Timeline: 2y 6m average prosecution (25 applications currently pending)
Career History: 742 total applications across all art units
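
As a sanity check, the headline numbers above fit together with simple arithmetic. The sketch below reconstructs them; the additive treatment of the interview lift is an assumption about how the dashboard combines its figures, not a documented method.

```python
# Hypothetical reconstruction of the examiner cards above.
# All inputs are the figures shown on the cards; the formulas are assumptions.

granted, resolved = 509, 717
career_allow_rate = granted / resolved              # 0.7099 -> shown as 71%

tc_average = career_allow_rate - 0.090              # card shows +9.0% vs TC avg

interview_lift = 0.165                              # +16.5% interview-lift card
with_interview = career_allow_rate + interview_lift # assumed to be additive

print(f"Career allow rate:  {career_allow_rate:.1%}")  # 71.0%
print(f"Implied TC average: {tc_average:.1%}")         # 62.0%
print(f"With interview:     {with_interview:.1%}")     # 87.5% (card rounds to 88%)
```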

Statute-Specific Performance

§101: 0.8% (-39.2% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 25.9% (-14.1% vs TC avg)
§112: 16.4% (-23.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 717 resolved cases.
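
The deltas in this chart are internally consistent: subtracting each delta from its rate recovers the same baseline for every statute, so the Tech Center average appears to be a single ~40% estimate rather than a per-statute figure. A quick check, assuming the delta is a plain subtraction (an assumption about the dashboard's method):

```python
# Recover the implied Tech Center average from each statute's rate and delta.
# Assumes delta = examiner_rate - tc_average (not documented by the dashboard).

rates = {
    "§101": (0.8, -39.2),
    "§103": (54.1, +14.1),
    "§102": (25.9, -14.1),
    "§112": (16.4, -23.6),
}

for statute, (rate, delta) in rates.items():
    implied_tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs implied TC average {implied_tc_avg:.1f}%")

# Every statute implies the same 40.0% TC average with these figures.
```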

Office Action

Grounds of rejection: §102, §103
Detailed Action

Response to Amendment

The amendment filed on 12/18/2025 has been entered and considered by the examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 39, 40, and 43 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Han et al. (PGPUB 2021/0366506 A1).

As to claim 39, Han (Figs. 5, 7) teaches an interactive effect system (artificial intelligence device 100, Fig. 7) comprising: one or more sensors (camera in ¶ 55, microphone 122 in ¶ 56, and ultrasonic sensor, infrared sensor, laser sensor in ¶ 312-314) configured to monitor a plurality of interactive objects (user, artificial intelligence devices 100-2 and 100-3, such as an air purifier or robot cleaner in the environment) in an interactive environment (¶ 230-236, 247, 250); and a controller (processor 180), wherein the controller comprises command logic (software processing) (¶ 58, 59) configured to: receive, via a first sensor of the one or more sensors (microphone 122 on the TV as shown in Fig. 7), a first input (i.e. voice command) from a first interactive object (male user) of the plurality of interactive objects and a second input (i.e. voice command) from a second interactive object (female user) of the plurality of interactive objects (¶ 157, 158: i.e. Han's invention is capable of differentiating between different users); receive, via a second sensor (microphone on artificial intelligence device 100-2, ¶ 177) of the one or more sensors, first identification information (i.e. voice command) of a first guest (male user) from the first interactive object and second identification information (i.e. voice command) of a second guest (female user) from the second interactive object (¶ 157, 158: i.e. voice commands within different frequency band ranges for different users); identify the first input from the first interactive object as valid input (i.e. within range of volume and frequency band spectrum) for activating the interactive effect based on receiving the first identification information of the first guest (¶ 157, 185, 248); generate instructions to control an interactive effect based on identifying the first input as valid input for activating the interactive effect and the first identification information of the first guest (¶ 157: i.e. based on identification of the male/female user and based on identification of the voice command, the devices are controlled) (¶ 157-161); and cause activation of the interactive effect based on the instructions (i.e. perform operations on the devices, such as cleaning the living room) (¶ 227).

As to claim 40, Han (Fig. 5) teaches wherein the first interactive object is associated with the first guest, and wherein the first identification information comprises a first guest profile associated with the first guest (¶ 88, 97: smart card, finger scan sensor, UIM, SIM, USIM for storing information for granting authority; ¶ 83: i.e. personalization database).

As to claim 43, Han (Fig. 1) teaches wherein the first identification information associated with the first interactive object is received based on a first radio frequency (RF) signal from a radio frequency identification (RFID) tag associated with the first interactive object (¶ 50: i.e. RFID).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-23, 25-28, 30-35, 37, and 41 are rejected under 35 U.S.C. 103 as being unpatentable over Han et al. in view of Bedikian et al. (PGPUB 2014/0201666 A1).

As to claim 21, Han (Figs. 5, 7) teaches a variable interactive effect system (artificial intelligence device 100, Fig. 7) for controlling an interactive effect (i.e. notification output from the corresponding device as shown in Fig. 7 in response to user input) (¶ 226, 234), the system comprising: one or more sensors (camera in ¶ 55, microphone 122 in ¶ 56, and ultrasonic sensor, infrared sensor, laser sensor in ¶ 312-314) configured to monitor one or more interactive objects (user, artificial intelligence devices 100-2 and 100-3, such as an air purifier or robot cleaner in the environment) in an interactive environment (Fig. 7) associated with the interactive effect (¶ 230-236, 247, 250), wherein the interactive environment comprises a geographic area (living room / main room in Fig. 7, and appropriate range) associated with activating the interactive effect (¶ 228, 262); and at least one controller (processor 180) configured to receive sensor data (i.e. electrical audio data from input unit 120 and sensing unit 140) from the one or more sensors (¶ 52-54, 58, 59), the sensor data comprising an indication of a position of an interactive object of the one or more interactive objects within the interactive environment (¶ 326: i.e. detect the position change of the user) and input (¶ 59: i.e. voice command or input from user input unit 123, and ¶ 317: i.e. movement of the user) associated with the interactive object (¶ 59, 317), wherein the at least one controller comprises command logic (software processing, ¶ 59) configured to: identify, using a validation layer (Figs. 5, 11, 14: i.e. the processing methods shown in the figures perform different validations, such as whether volumes are within an appropriate range in step S1117; and Fig. 14: i.e. maintain an appropriate range of volume when movement is not detected to determine the validity of a voice command) of the command logic, the input as being valid input for activating the interactive effect based on determining that the position of the interactive object is within the geographic area (i.e. within range) of the interactive environment (Fig. 14: i.e. when movement of the user is detected, operation commands are received in S1407 and the operation is performed in S1413; ¶ 244: i.e. an appropriate range of volume may be required for an activation state change); generate instructions to control the interactive effect based on the input being identified as valid input for activating the interactive effect (¶ 374: i.e. perform specific command; Fig. 11: i.e. S1119 determines the artificial intelligence device as the selected object to be controlled; and Fig. 14: i.e. adjust the appropriate range of volumes if the movement of the user is detected); and cause activation of the interactive effect based on the instructions (Fig. 18: i.e. perform the operation corresponding to the second control command in S1825, and Fig. 20 teaches different interactive effects, such as turning on the air cleaner, gathering weather information, and going to power mode, in each of t1, t2, t3).

Han teaches determining whether the artificial intelligence devices are within range, and using the echo time to determine whether an obstacle is detected. However, Han does not specifically teach determining that the position of the interactive object is within the geographic area of the interactive environment for at least a threshold amount of time.

Bedikian (Figs. 1A and 3C) teaches identifying, using a validation layer (steps 354-362 in Fig. 3C) of the command logic (method 350) (¶ 84), the input (i.e. user gesture) as being valid input (completion 100%) for activating the interactive effect (i.e. the system's response to the user's gesture, such as the speed of scrolling) based on determining that the position of the interactive object (i.e. the user and the user's arm creating the gesture as shown in Fig. 1A) is within the geographic area (i.e. within the field of the camera within the real world) (¶ 92) of the interactive environment (real world) for at least a threshold amount of time (minimum/maximum duration of the gesture) (i.e. Bedikian's invention determines a proper user-defined gesture when multiple requirements are met. ¶ 52 describes that the motion capture must be performed within the camera field of view, which requires being "within the geographic area". ¶ 104 describes that piercing a plane can be interpreted as a gesture, which also describes being "within the geographic area". ¶ 97 describes that a temporal requirement can be set for determining a minimum or maximum duration of the gesture. In other words, Bedikian teaches an embodiment that generates a valid input by determining that a gesture is performed within minimum and maximum duration thresholds within the field of view of the camera).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bedikian's teaching of controlling the speed (or response) of a system in response to the intensity level of the user's gesture (or voice command, ¶ 67) into Han's interactive device, so as to improve user convenience (¶ 122).

As to claim 22, Han (Fig. 1) teaches wherein the one or more sensors comprise image sensors, radio frequency (RF) sensors (¶ 56: the microphone for acoustic signals operates within 20 Hz to 20 kHz, and ¶ 313: i.e. the ultrasonic sensor, which operates at frequencies higher than 20 kHz), optical sensors, or any combination thereof (i.e. Applicant uses an "or" clause and alternative language for claim 22; Han teaches a radio frequency sensor, such as the microphone and ultrasonic sensor, and thus teaches the claim language).

As to claim 23, Han (Figs. 8, 11, 14, and 18) teaches wherein the sensor data comprises identification information (identification information such as model names for the devices, and via UIM/SIM/USIM/smart card/biometric sensor for the user) associated with the interactive object (¶ 88, 97, 402, 403), and wherein the at least one controller is configured to identify the input as valid input for activating the interactive effect based at least on the identification information (¶ 97: i.e. granting use authority of the artificial intelligence device).

As to claim 25, Han teaches claim 21 but does not specifically teach the threshold amount of time. Bedikian (Figs. 1A, 3C) teaches wherein the input is identified as being valid input for activating the interactive effect based on the position of the interactive object indicating that a guest associated with the interactive object is within the geographic area (¶ 52: i.e. within the field of view of the camera) of the interactive environment for the threshold amount of time (¶ 97: i.e. a temporal requirement including a minimum or maximum duration of the gesture). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bedikian's teaching of controlling the speed (or response) of a system in response to the intensity level of the user's gesture (or voice command, ¶ 67) into Han's interactive device, so as to improve user convenience (¶ 122).

As to claim 26, Han (Fig. 7) teaches wherein the input is identified as being valid input for activating the interactive effect based on the position of the interactive object indicating that a guest associated with the interactive object is stationary within the geographic area of the interactive environment (¶ 317: i.e. when no movement is detected, an appropriate range of volume is maintained for audio detection). However, Han does not specifically teach the predetermined amount of time. Bedikian (Figs. 1A, 3C) teaches using a predetermined amount of time as a requirement for determining valid input (¶ 97: i.e. minimum and maximum duration of the gesture). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bedikian's teaching of controlling the speed (or response) of a system in response to the intensity level of the user's gesture (or voice command, ¶ 67) into Han's interactive device, so as to improve user convenience (¶ 122).

As to claim 27, Han (Fig. 6) teaches wherein the input comprises audio information (utterance or speech noise) associated with a guest (user) associated with the interactive object, and wherein the input is identified as being valid input for activating the interactive effect based on detecting a valid word (keyword or predetermined noise pattern) or phrase in the audio information (¶ 184: i.e. wake-up command based on extraction of keyword speech; ¶ 190: i.e. classify the noise as speech noise when the acoustic signal matches a predetermined noise pattern by a certain ratio or more).

As to claim 28, Han (Figs. 16-18) teaches wherein the input comprises motion information (i.e. detecting walking of the user as shown in Fig. 15, or the distance of the AI devices as shown in Figs. 7 and 17) associated with the interactive object, and wherein the input is identified as being valid input for activating the interactive effect based on the motion information being within a predetermined motion pattern (walking) for activating the interactive effect (i.e. position information based on volume change and volume measured by different devices), the motion information exceeding a predetermined intensity threshold value (i.e. measured distance and whether the measured speech level is within limits or not) (¶ 316: i.e. the processor determines the distance between the user and the artificial intelligence device; ¶ 319: i.e. appropriately adjust the volume according to the distance; ¶ 330-333: i.e. adjusted volume limits; Fig. 7, ¶ 229: i.e. AI device 100-3 is too far from the user to receive or recognize the speech).

As to claim 30, Han (Figs. 5, 7) teaches a method of controlling an interactive effect (artificial intelligence device 100, Fig. 7), the method comprising: receiving a plurality of inputs (identification information of the user in ¶ 88 and of the devices in ¶ 403, the wake-up command in S505, S511, ¶ 199-208, and the operation command in S515) associated with a guest (user) from a sensor (camera in ¶ 55, microphone 122 in ¶ 56, ultrasonic sensor, infrared sensor, laser sensor in ¶ 312-314, finger scan sensor and biometric sensor in ¶ 88) configured to monitor an interactive environment (Fig. 7) associated with the interactive effect (Figs. 1 and 7); determining that a first input (wake-up command) of the plurality of inputs matches a prompt input (¶ 166: i.e. extract the topic uttered; ¶ 184, 199-208: ¶ 184 specifically teaches keyword speech of the wake-up command); identifying that a second input (operation command S515) of the plurality of inputs passes as valid input (i.e. analyze intention) for activating the interactive effect based on determining that the first input matches the prompt input (Fig. 5: i.e. the device must pass through the wake-up phase before the operation command phase), the first input being received at a first time and the second input being received at a second time subsequent to the first time (Fig. 5: i.e. S515-S519 are later stages than S505 and S511) (¶ 199-217); and transmitting instructions (transmit operation commands in S521) to control the interactive effect based on identifying the second input as being the valid input for activating the interactive effect (¶ 220), wherein the instructions are configured to cause activation of the interactive effect based on the second input (¶ 220, 221).

Han does not specifically teach that the instructions are configured to cause activation of the interactive effect at an intensity score corresponding to a guest input intensity of the second input. Bedikian (Figs. 1A and 3C) teaches transmitting instructions to control the interactive effect based on identifying the second input as being the valid input for activating the interactive effect, wherein the instructions are configured to cause activation of the interactive effect at an intensity score (different intensity) corresponding to a guest input intensity of the second input (¶ 105: i.e. the distance beyond the plane is interpreted as an intensity level of the gesture; ¶ 125: i.e. the intensity of the gesture results in the intensity of the device response). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bedikian's teaching of controlling the speed (or response) of a system in response to the intensity level of the user's gesture (or voice command, ¶ 67) into Han's interactive device, so as to improve user convenience (¶ 122).

As to claim 31, Han (Fig. 1) teaches wherein the sensor comprises an image sensor, a radio frequency (RF) sensor (¶ 56: the microphone for acoustic signals operates within 20 Hz to 20 kHz, and ¶ 313: i.e. the ultrasonic sensor, which operates at frequencies higher than 20 kHz), an optical sensor, or any combination thereof (i.e. Applicant uses an "or" clause and alternative language; Han teaches a radio frequency sensor, such as the microphone and ultrasonic sensor, and thus teaches the claim language).

As to claim 32, Han (Figs. 8, 11, 14, and 18) teaches assessing the guest input intensity of the second input according to one or more intensity metrics (¶ 329: i.e. the appropriate volume or acoustic intensity is within 40 to 60) to determine an intensity score (i.e. volume), wherein the one or more intensity metrics comprise a voice intensity (i.e. volume of the speech), a gesture velocity, a gesture trajectory, or a combination thereof (¶ 325: i.e. perform the operation command when the speech signal is within the appropriate range; Applicant uses an "or" clause and alternative language, and Han teaches the voice intensity and thus teaches the claim limitation). Han does not specifically teach a gesture velocity. Bedikian (Figs. 1A, 3C) teaches assessing the guest input intensity (degree of piercing) of the second input according to one or more intensity metrics (¶ 125: i.e. distance beyond the virtual surface) to determine an intensity score (i.e. intensity level), wherein the one or more intensity metrics comprise a gesture velocity (¶ 125: i.e. intensity of engagement; ¶ 130: i.e. the faster the palm travels), a gesture trajectory (degree of piercing), or a combination thereof (¶ 125: i.e. based on the gesture movement and piercing, the responsive action of the device is performed at a stronger intensity; for example, the speed of scrolling may be determined based on the intensity of the user's gesture). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bedikian's teaching of controlling the speed (or response) of a system in response to the intensity level of the user's gesture (or voice command, ¶ 67) into Han's interactive device, so as to improve user convenience (¶ 122).

As to claim 33, Han (Figs. 7, 11, 14, 18) teaches adjusting the interactive effect to a first level (i.e. receiving and processing the user's voice input) of activation of the interactive effect associated with the intensity score (i.e. volume of the speech) when the second input matches at least one preprogrammed interaction of the interactive effect (¶ 74, 190: i.e. the speech pattern is determined based on matching a predetermined noise pattern by a certain ratio or more when the audio input is within the appropriate range of volume), and adjusting the interactive effect to a second level (i.e. treating received audio as ambient noise) of activation of the interactive effect associated with the determined intensity level when the second input does not match any preprogrammed interaction of the interactive effect (¶ 191: i.e. determine whether the detected speech pattern matches a predetermined ambient noise; or ¶ 186: i.e. the detected speech signal is not within the appropriate range, and may be removed via an algorithm as noise; ¶ 56, 139), wherein the first level of activation of the interactive effect is at a different intensity than the second level of activation of the interactive effect (¶ 186, 190: i.e. the speech signal may or may not be within the appropriate range of volume).

As to claim 34, Han (Figs. 15, 16) teaches wherein the interactive effect is within an interactive environment and the first input is associated with a guest within the interactive environment (Fig. 5), and the method further comprises: receiving a potential input range (volume range of speech signal 1601 and upper/lower limits) associated with the guest based on the first input (¶ 330); determining an intensity scale (Figs. 16-18: i.e. audio waveforms) for the guest based at least in part on the potential input range (¶ 340: i.e. determine whether the measured volume is within the appropriate range, S1411); and determining that the second input is valid for activating the interactive effect based on an intensity level associated with the first input being within the intensity scale (i.e. the new adjusted range of 25-40 shown in Fig. 16) for the guest (¶ 332: i.e. adjust the upper limit value and the lower limit value despite the volume scale being changed due to the user's movement). Han does not specifically teach determining that the second input is valid input for activating the interactive effect based on the additional intensity score associated with the first input being within the intensity scale for the guest. Bedikian (Figs. 1A, 3C) teaches determining that the second input is valid input for activating the interactive effect based on the additional intensity score associated with the first input being within the intensity scale for the guest (i.e. piercing and intensity of the gesture, such as how fast the palm moves, can be two different and additional scores; ¶ 125: degree of piercing / intensity of gesture; ¶ 130: i.e. how fast the palm moves). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bedikian's teaching of controlling the speed (or response) of a system in response to the intensity level of the user's gesture (or voice command, ¶ 67) into Han's interactive device, so as to improve user convenience (¶ 122).

As to claim 35, Han (Figs. 15, 16) teaches wherein the first input comprises voice volume (i.e. volume of speech) or intensity data (i.e. volume within the appropriate range or not) (¶ 34), and wherein the potential input range comprises an estimated volume range for the guest (¶ 340; Fig. 15: i.e. the volume measured is for the particular user as shown in Fig. 15).

As to claim 37, Han teaches the method of claim 34 but does not specifically teach that the potential input range comprises an estimated range of motion for the guest. Bedikian (Figs. 1A, 3C) teaches wherein the first input comprises movement data of the guest (gesture) or a guest-associated object (i.e. the body part performing the gesture), and wherein the potential input range comprises an estimated range of motion for the guest (i.e. minimum and maximum length) (¶ 97). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Bedikian's teaching of controlling the speed (or response) of a system in response to the intensity level of the user's gesture (or voice command, ¶ 67) into Han's interactive device, so as to improve user convenience (¶ 122).

As to claim 41, Han (Fig. 1) teaches wherein the identification information associated with the interactive object is received based on a radio frequency (RF) signal from a radio frequency identification (RFID) tag associated with the interactive object (¶ 50: i.e. RFID).

Claim 44 is rejected under 35 U.S.C. 103 as being unpatentable over Han in view of Holt et al. (USPAT 6,446,865 B1).

As to claim 44, Han (Figs. 1, 16-18) teaches wherein the one or more sensors comprise a camera (camera 121) (¶ 51) but does not specifically teach the retroflective marker. Holt (Figs. 1 and 2) teaches wherein the camera (camera 20) is configured to detect a first position (position) of the first interactive object (i.e. badge 24 for one user), and to detect a second position (i.e. another position) of the second interactive object (i.e. badge 24 for another user) based on detecting a second retroflective marker (retroreflective pattern 30) associated with the second interactive object (col. 6, lines 6-36). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Holt's retroreflective material marker into Han's environment system, so as to provide an accurate automatic scan system (col. 4, lines 19-23).

Claim 42 is rejected under 35 U.S.C. 103 as being unpatentable over Han and Bedikian as applied to claim 21 above, and further in view of Holt et al. (USPAT 6,446,865 B1).

As to claim 42, Han and Bedikian teach the variable interactive effect system of claim 21 but do not specifically teach the retroflective marker. Holt (Figs. 1 and 2) teaches wherein the one or more sensors comprise a camera (camera 20), and wherein the camera is configured to detect the position of the interactive object (badge 24) based on detecting a retroflective marker (retroreflective pattern 30) associated with the interactive object within the interactive environment (col. 6, lines 6-36). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Holt's retroreflective material marker into Han's environment system, as modified with the teaching of Bedikian, so as to provide an accurate automatic scan system (col. 4, lines 19-23).

Response to Arguments

Applicant's arguments with respect to claims 21, 22, 23, 25-28, 30-35, 37, 41, and 42 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant's arguments filed 12/18/2025 have been fully considered, but they are not persuasive for claim 39. Applicant has amended claim 39 to recite new limitations regarding the first input from a first interactive object and the second input from a second interactive object, as well as the first identification information and identifying the first input. The limitations recite, "receive, via a first sensor of the one or more sensors, a first input from a first interactive object of the plurality of interactive objects and a second input from a second interactive object of the plurality of interactive objects; receive, via a second sensor of the one or more sensors, first identification information of a first guest from the first interactive object and second identification information of a second guest from the second interactive object; identify the first input from the first interactive object as valid input for activating the interactive effect based on receiving the first identification information of the first guest".

The Examiner carefully reconsidered the Han prior art and has determined that it still teaches the limitations under the new ground of rejection. In ¶ 157, 158, Han teaches identifying the user based on the frequency band range detected by the microphone from the speech; based on the frequency band range, the user may be identified as a male user or a female user. Further, Han teaches that different attributes of the user's utterance can be used to detect the user's voice command. These attributes are distinct from the frequency band range and include utterance speed, tone, and topic, which can be used to validate the input. Therefore, the Examiner believes the Han prior art teaches claim 39. Please refer to the discussion above for further detail.

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANGHYUK PARK, whose telephone number is (571) 270-7359. The examiner can normally be reached 10:00 AM - 6:00 PM, M-F. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chanh Nguyen, can be reached at (571) 272-7772. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at (866) 217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call (800) 786-9199 (IN USA OR CANADA) or (571) 272-1000.

/SANGHYUK PARK/
Primary Examiner, Art Unit 2623
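
To make the claim mapping above concrete: claims 21, 25, and 30 recite a validation layer that treats input as valid only when the interactive object is inside a geographic area (for claim 25, for at least a threshold amount of time), and claim 30 adds activation at an intensity score tied to the guest's input intensity. The sketch below is a minimal, hypothetical illustration of that logic; every name and threshold is illustrative (the 40-60 volume window merely echoes the Han citation in claim 32), and none of it is the application's actual implementation.

```python
import time
from dataclasses import dataclass, field

# Hypothetical sketch of a "validation layer": input counts as valid only when
# the object has dwelt inside the geographic area for a threshold time, and the
# effect then fires at an intensity score derived from the raw input level.

@dataclass
class ValidationLayer:
    geofence: tuple            # (x_min, y_min, x_max, y_max) of the interactive area
    dwell_threshold_s: float   # minimum time-in-area before input is treated as valid
    _entered_at: dict = field(default_factory=dict)   # object_id -> entry timestamp

    def in_area(self, pos):
        x, y = pos
        x0, y0, x1, y1 = self.geofence
        return x0 <= x <= x1 and y0 <= y <= y1

    def is_valid(self, object_id, pos, now=None):
        now = time.monotonic() if now is None else now
        if not self.in_area(pos):
            self._entered_at.pop(object_id, None)     # left the area: reset dwell clock
            return False
        entered = self._entered_at.setdefault(object_id, now)
        return (now - entered) >= self.dwell_threshold_s

def intensity_score(input_level, lo=40.0, hi=60.0):
    # Map a raw input level (e.g. voice volume) to a 0..1 activation intensity,
    # clamped to a plausible valid window (echoing Han's 40-60 volume range).
    return min(max((input_level - lo) / (hi - lo), 0.0), 1.0)

# Usage: the first observation starts the dwell clock; a later observation
# inside the area satisfies the threshold, and the effect activates.
layer = ValidationLayer(geofence=(0, 0, 10, 10), dwell_threshold_s=2.0)
layer.is_valid("guest-1", pos=(4, 5), now=0.0)        # enters area, clock starts
if layer.is_valid("guest-1", pos=(4, 6), now=2.5):    # 2.5 s dwell >= 2.0 s threshold
    print(f"activate effect at intensity {intensity_score(52.0):.2f}")   # 0.60
```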

Prosecution Timeline

Sep 21, 2023: Application Filed
Dec 28, 2023: Response after Non-Final Action
Mar 14, 2025: Non-Final Rejection — §102, §103
Jun 04, 2025: Examiner Interview Summary
Jun 04, 2025: Applicant Interview (Telephonic)
Jul 21, 2025: Response Filed
Oct 18, 2025: Final Rejection — §102, §103
Dec 15, 2025: Applicant Interview (Telephonic)
Dec 18, 2025: Response after Non-Final Action
Jan 22, 2026: Request for Continued Examination
Jan 29, 2026: Response after Non-Final Action
Feb 07, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by the same examiner involving similar technology

Patent 12602134: ELECTRONIC DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12603055: DISPLAY DEVICE INCLUDING A SWEEP DRIVER THAT PROVIDES A SWEEP SIGNAL, AND ELECTRONIC DEVICE INCLUDING THE DISPLAY DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12594141: SYSTEMS, METHODS, AND MEDIA FOR PRESENTING BIOPHYSICAL SIMULATIONS IN AN INTERACTIVE MIXED REALITY ENVIRONMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591322: TOUCH INPUT SYSTEM INCLUDING PEN AND CONTROLLER (granted Mar 31, 2026; 2y 5m to grant)
Patent 12592207: GATE LINE DRIVING CIRCUIT WITH TOP GATE AND BOTTOM GATE (granted Mar 31, 2026; 2y 5m to grant)

Based on this examiner's 5 most recent grants. Study what changed in these cases to get past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71% (88% with an interview, a +16.5% lift)
Median Time to Grant: 2y 6m
PTA Risk: High

Based on 717 resolved cases by this examiner. Grant probability is derived from the career allow rate.
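
The High PTA risk flag is consistent with rough B-delay arithmetic under 35 U.S.C. § 154(b)(1)(B): term adjustment accrues for pendency beyond three years from filing, excluding time consumed by an applicant-requested RCE. Because the Jan 22, 2026 RCE predates the three-year mark (Sep 21, 2026), continued pendency here adds no B-delay, so a long prosecution goes largely uncompensated. A simplified, hypothetical calculator follows; the projected grant date is a guess, and real PTA also nets A- and C-delays, overlap, and applicant delay.

```python
from datetime import date

# Simplified B-delay estimate: pendency beyond three years from filing,
# cut off at the RCE filing (time consumed by an RCE is excluded).
# Ignores A/C delays, overlap, and applicant delay for brevity.

filed = date(2023, 9, 21)
rce_filed = date(2026, 1, 22)                         # RCE stops B-delay accrual
three_year_mark = filed.replace(year=filed.year + 3)  # Sep 21, 2026
projected_grant = date(2026, 12, 1)                   # hypothetical

b_window_end = min(rce_filed, projected_grant)
b_delay_days = max((b_window_end - three_year_mark).days, 0)
print(f"Estimated B-delay: {b_delay_days} days")      # 0: RCE predates 3-year mark
```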
