Prosecution Insights
Last updated: April 19, 2026
Application No. 18/784,932

PROJECTION SYSTEM, TERMINAL DEVICE, PROJECTION DEVICE AND CONTROL METHOD THEREOF

Non-Final Office Action (§102, §103)
Filed
Jul 26, 2024
Examiner
SHAH, PRIYANK J
Art Unit
2626
Tech Center
2600 — Communications
Assignee
Coretronic Corporation
OA Round
1 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 6m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 67% (392 granted / 584 resolved), +5.1% vs TC average (above average)
Interview Lift: +18.4% allowance lift on resolved cases with an interview (strong)
Avg Prosecution: 2y 6m
Currently Pending: 17
Total Applications: 601 (across all art units)

Statute-Specific Performance

§101: 1.4% (-38.6% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 26.5% (-13.5% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)
Based on career data from 584 resolved cases; deltas are relative to the Tech Center average estimate.
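The headline figures above reduce to simple arithmetic on the examiner's career counts. A minimal sketch, assuming only the numbers reported here (the TC-average baselines are back-computed from the reported deltas, not independently sourced):

```python
# Back-of-the-envelope check of the dashboard figures above.
# Counts and deltas are taken from the report; the TC-average
# baselines are implied by the deltas, not independently sourced.
granted, resolved = 392, 584
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~67.1%, shown as 67%

# Statute-specific rates paired with their reported deltas vs TC average.
statutes = {
    "§101": (0.014, -0.386),
    "§103": (0.579, +0.179),
    "§102": (0.265, -0.135),
    "§112": (0.095, -0.305),
}
for name, (rate, delta) in statutes.items():
    implied_tc_avg = rate - delta  # baseline each delta implies
    print(f"{name}: {rate:.1%} (implied TC average {implied_tc_avg:.1%})")
```

Note that the implied baselines come out near 40% for each statute, which is consistent with the deltas all being measured against the same Tech Center estimate.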

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

2. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

3. Claim(s) 22 and 24 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lin et al. (US 2020/0105258 A1, hereinafter referred to as “Lin”).

Regarding claim 22, Lin discloses a projection device (106a), comprising: a projection module (106a); a processor coupled to the projection module (Fig. 1 and ¶0049 discloses the controller 106a1 includes, for example, a central processing unit (CPU), or another programmable microprocessor); a communication interface (110) coupled to the processor and configured to connect to a cloud server (104a) (Fig. 1 and ¶0032 discloses an intelligent voice system 100 includes a voice assistant 102a, a cloud service platform 104a, a projector 106a, and a management server 108. The above devices may all be connected to each other via a network (Internet) 110), wherein the processor is configured to: send an original instruction to the cloud server (104a) through the communication interface (¶0034 and ¶0036 discloses the voice assistant 102a transmits the voice signal to the cloud service platform 104a; and ¶0084 discloses voice assistant 102a is integrated into the projector 106a); in response to the original instruction corresponding to an operation of the projection device (106a), receive a projector control code sent by the cloud server (104a) through the communication interface (110) (¶0081-¶0083 discloses based on the first control command CMD1, the cloud service server 710 includes a first semantic analyzing program, which inherently includes a control code, and acquires, retrieves, or generates the corresponding second control command CMD2 and relays the second control command CMD2 to the management server 108); and control the projection module according to the projector control code (¶0083 discloses the cloud service server 710 may access the projector 106a in response to the first alias AL1 and adjust the projector 106a as the first operating state according to the second control command CMD2 corresponding to the first control command CMD1).
Regarding claim 24, Lin discloses the projection device (106a) as claimed in claim 22, wherein the processor is configured to: send the original instruction to the cloud server (104a) through the communication interface (110) (¶0036 discloses when the voice assistant 102a receives the voice signal for powering on the Company A projector, the voice assistant 102a transmits the voice signal to the cloud service platform 104a); and receive feedback information sent by the cloud server (104a) through the communication interface (110) (¶0046 discloses if the second control command CMD2 corresponding to the first control command CMD1 (e.g., turning on the light) input by the user is not present on the cloud service platform 104a, the cloud service platform 104a may control the voice assistant 102a to output/reply a response sentence of recognition failure (e.g., ‘Sorry, I don't know what you mean.’)).

Claim Rejections - 35 USC § 103

4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claim(s) 1, 3-5, 11 and 25 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Zhang (US 11,206,372 B1, hereinafter referred to as “Zhang”).
Regarding claim 1, Lin discloses a control method for a projection device (106a) (title discloses Method for Controlling Projector), comprising: sending an original instruction to a cloud server (104a) through a terminal device (102a) (¶0036 discloses when the voice assistant 102a receives the voice signal for powering on the Company A projector, the voice assistant 102a transmits the voice signal to the cloud service platform 104a); inputting the original instruction… through the cloud server (104a) (¶0011 and ¶0044 discloses the cloud service platform 104a may analyze the first control command CMD1 according to the first semantic analyzing program); in response to the original instruction corresponding to an operation of the projection device (106a), generating a standard instruction according to the original instruction (¶0044 discloses the cloud service platform 104a may analyze the first control command CMD1 according to the first semantic analyzing program, acquire/retrieve or generate the corresponding second control command CMD2 according to the first control command CMD1)… and receiving the standard instruction through the cloud server (104a) to control the projection device (106a) according to the standard instruction (¶0045 discloses the cloud service platform 104a may transmit … the second control command CMD2 corresponding to ‘power on’ to the management server 108; ¶0047 discloses the management server 108 may… adjust the projector 106a as a first operating state (e.g., powering on, powering off, increasing/decreasing brightness, increasing/decreasing contrast ratio, starting OSD, etc.) 
according to the corresponding second control command CMD2); and in response to the original instruction not corresponding to the operation of the projection device (106a), generating feedback information according to the original instruction… and sending the feedback information to at least one of the terminal device (102a) and the projection device (106a) through the cloud server (104a) (¶0046 discloses if the second control command CMD2 corresponding to the first control command CMD1 (e.g., turning on the light) input by the user is not present on the cloud service platform 104a, the cloud service platform 104a may control the voice assistant 102a to output/reply a response sentence of recognition failure (e.g., ‘Sorry, I don't know what you mean.’)).

Lin doesn’t disclose inputting the original instruction into a natural language model and generating a standard instruction and feedback information. However, in the same field of endeavor, Zhang discloses inputting the original instruction into a natural language model (col. 14, lines 35-38 discloses the cloud has powerful computing capabilities and strong scalability, and has … Natural Language Processing (NLP) models) and generating a standard instruction (col. 14, lines 38-42 discloses it can update and optimize various parameters in real time, process the voice analysis and response in real time, and convert the results into executable commands and returned the same to the video conference device 10) and feedback information (col. 14, lines 49-53 discloses the cloud may also synthesize a voice signal and send it to the video conference device 10, so as to for example notify that the user's requirement represented by the voice signal cannot be understood, or make voice responses to some requirements represented by the voice signal).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lin for the purpose of using natural language which reduces the learning curve for non-technical users.

Regarding claim 3, Lin discloses the control method for the projection device (106a) as claimed in claim 1, wherein the step of sending the original instruction to the cloud server (104a) comprises: sending the original instruction to the projection device (106a) through the terminal device (102a) to send the original instruction to the cloud server (104a) through the projection device (106a) (¶0084 discloses the voice assistant 102a is integrated into the projector 106a and the projector 106a can thus integrally perform the operations originally respectively performed by the voice assistant 102a and the projector 106a of FIG. 1; and ¶0036 discloses when the voice assistant 102a receives the voice signal for powering on the Company A projector, the voice assistant 102a transmits the voice signal to the cloud service platform 104a).

Regarding claim 4, Lin doesn’t disclose the control method for the projection device as claimed in claim 1, wherein the step of inputting the original instruction into the natural language model comprises: inputting the original instruction and a rule instruction into the natural language model through the cloud server. However, in the same field of endeavor, Zhang discloses wherein the step of inputting the original instruction into the natural language model comprises: inputting the original instruction and a rule instruction into the natural language model through the cloud server (col.
14, lines 35-49 discloses the cloud has powerful computing capabilities and strong scalability, and has Automatic Speech Recognition (ASR) models, Natural Language Processing (NLP) models and semantic analysis models; in addition, it can update and optimize various parameters in real time, process the voice analysis and response in real time, and… the cloud may also synthesize a [received] voice signal).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lin for the purpose of using natural language which reduces the learning curve for non-technical users.

Regarding claim 5, Lin discloses the control method for the projection device as claimed in claim 4, …sending the feedback information to the at least one of the terminal device (102a) and the projection device (106a) through the cloud server (104a) (¶0046 discloses if the second control command CMD2 corresponding to the first control command CMD1 (e.g., turning on the light) input by the user is not present on the cloud service platform 104a, the cloud service platform 104a may control the voice assistant 102a to output/reply a response sentence of recognition failure (e.g., ‘Sorry, I don't know what you mean.’)).

Lin doesn’t disclose wherein the step of in response to the original instruction corresponding to the operation of the projection device comprises: outputting the standard instruction and the feedback information corresponding to the standard instruction according to the original instruction and the rule instruction through the natural language model. However, in the same field of endeavor, Zhang discloses wherein the step of in response to the original instruction corresponding to the operation of the projection device (Fig. 6 and col. 11, lines 34-40 discloses electronic device 30 may send a control command to the video conference device 10) comprises: outputting the standard instruction (col.
14, lines 58-60 discloses an executable command and feedback it to the projection processor 131, so that the projection processor 131 may perform an action matching the executable command) and the feedback information (col. 14, lines 48-51 discloses notify that the user's requirement represented by the voice signal cannot be understood) corresponding to the standard instruction according to the original instruction (col. 13, lines 13-16 disclose “open the projection assembly”, “turn off the projection assembly”, “turn on/off the camera assembly”, “please shut down”, “turn up the volume” and “turn down the volume”) and the rule instruction (col. 15, lines 50-52 discloses the cloud service system 20 is a software service system running in the cloud, which may be configured to provide software services to the video conference device 10) through the natural language model (col. 2, lines 19-21 and col. 14, lines 35-42 discloses the cloud has powerful computing capabilities and strong scalability, and has Automatic Speech Recognition (ASR) models, Natural Language Processing (NLP) models and semantic analysis models… and convert the results into executable commands and returned the same to the video conference device 10).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lin for the purpose of using natural language which reduces the learning curve for non-technical users.

Regarding claim 11, Lin discloses the control method for the projection device (106a) as claimed in claim 1, wherein the natural language model is a chatbot (¶0007 discloses an intelligent voice system and a method for controlling a projector by using the intelligent voice system).
Regarding claim 25, Lin doesn’t disclose the projection device as claimed in claim 24, wherein the feedback information is generated according to the original instruction and a rule instruction through a natural language model, and the natural language model is stored in the cloud server or connected to the cloud server through a wireless network.

However, in the same field of endeavor, Zhang discloses wherein the feedback information (col. 14, lines 48-51 discloses notify that the user's requirement represented by the voice signal cannot be understood) is generated according to the original instruction (col. 13, lines 13-16 disclose “open the projection assembly”, “turn off the projection assembly”, “turn on/off the camera assembly”, “please shut down”, “turn up the volume” and “turn down the volume”) and a rule instruction (col. 15, lines 50-52 discloses the cloud service system 20 is a software service system running in the cloud, which may be configured to provide software services to the video conference device 10) through the natural language model (col. 2, lines 19-21 and col. 14, lines 35-42 discloses the cloud has powerful computing capabilities and strong scalability, and has Automatic Speech Recognition (ASR) models, Natural Language Processing (NLP) models and semantic analysis models… and convert the results into executable commands and returned the same to the video conference device 10), and the natural language model is stored in the cloud server (104a) (col. 14, lines 35-49 discloses the cloud has powerful computing capabilities and strong scalability, and has Automatic Speech Recognition (ASR) models, Natural Language Processing (NLP) models) or connected to the cloud server (104a) through a wireless network.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lin for the purpose of using natural language which reduces the learning curve for non-technical users.

6. Claim(s) 2 and 12-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Zhang and in further view of Faaborg et al. (US 2015/0261496 A1, hereinafter referred to as “Faaborg”).

Regarding claim 2, Lin discloses the control method for the projection device as claimed in claim 1, further comprising: …using the text instruction as the original instruction (¶0044 discloses convert the first control command CMD1, which is originally a voice signal, into a text file). Lin as modified doesn’t disclose further comprising: converting a speech instruction into a text instruction through the terminal device.

However, in the same field of endeavor, Faaborg discloses further comprising: converting a speech instruction into a text instruction through the terminal device (¶0081 discloses computing device 100 may or may not transcribe the speech into text. Computing device 100 may cause one of the display devices, such as presence-sensitive input display 105, projector 120, presence-sensitive display 128, or presence-sensitive display 132 to output a graphical element indicating that audio data is being received). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Lin so that the user's audio never leaves the device, reducing the risk of misuse.
Regarding claim 12, Lin discloses a projection system, comprising: a projection device (106a) (title discloses Method for Controlling Projector); a cloud server (104a); and a terminal device (102a) coupled to the cloud server (104a) and the projection device (106a) (¶0032 discloses an intelligent voice system 100 includes a voice assistant 102a, a cloud service platform 104a, a projector 106a, and a management server 108. The above devices may all be connected to each other via a network (Internet) 110) and configured to send an original instruction to the cloud server (104a) (¶0036 discloses when the voice assistant 102a receives the voice signal for powering on the Company A projector, the voice assistant 102a transmits the voice signal to the cloud service platform 104a), wherein the cloud server (104a) is configured to input the original instruction… (¶0011 and ¶0044 discloses the cloud service platform 104a may analyze the first control command CMD1 according to the first semantic analyzing program), in response to the original instruction corresponding to an operation of the projection device (106a), …generate a standard instruction according to the original instruction (¶0044 discloses the cloud service platform 104a may analyze the first control command CMD1 according to the first semantic analyzing program, acquire/retrieve or generate the corresponding second control command CMD2 according to the first control command CMD1), and the cloud server (104a) is configured to receive the standard instruction to control the projection device (106a) according to the standard instruction (¶0045 discloses the cloud service platform 104a may transmit … the second control command CMD2 corresponding to ‘power on’ to the management server 108; ¶0047 discloses the management server 108 may… adjust the projector 106a as a first operating state (e.g., powering on, powering off, increasing/decreasing brightness, increasing/decreasing contrast ratio, starting OSD, etc.) 
according to the corresponding second control command CMD2); and in response to the original instruction not corresponding to the operation of the projection device (106a), …generate feedback information according to the original instruction, and at least one of the terminal device (102a) and the projection device (106a) is configured to receive… the feedback information (¶0046 discloses if the second control command CMD2 corresponding to the first control command CMD1 (e.g., turning on the light) input by the user is not present on the cloud service platform 104a, the cloud service platform 104a may control the voice assistant 102a to output/reply a response sentence of recognition failure (e.g., ‘Sorry, I don't know what you mean.’)).

Lin doesn’t disclose inputting the original instruction into a natural language model and generating a standard instruction and feedback information; and at least one of the terminal device and the projection device is configured to display the feedback information. However, in the same field of endeavor, Zhang discloses inputting the original instruction into a natural language model (col. 14, lines 35-38 discloses the cloud has powerful computing capabilities and strong scalability, and has … Natural Language Processing (NLP) models) and generating a standard instruction (col. 14, lines 38-42 discloses it can update and optimize various parameters in real time, process the voice analysis and response in real time, and convert the results into executable commands and returned the same to the video conference device 10) and feedback information (col. 14, lines 49-53 discloses the cloud may also synthesize a voice signal and send it to the video conference device 10, so as to for example notify that the user's requirement represented by the voice signal cannot be understood, or make voice responses to some requirements represented by the voice signal).
Lin as modified doesn’t disclose at least one of the terminal device and the projection device is configured to display the feedback information. However, in the same field of endeavor, Faaborg discloses at least one of the terminal device and the projection device is configured to display the feedback information (¶0014, ¶0082 and ¶0101 discloses in the event that an absence of a voice-initiated action has been determined by computing device 200, computing device 200 may cause graphical element 204-D to ‘shake’ as an indication that a voice-initiated action has not been determined from a voice input). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Lin for the purpose of avoiding ambiguity about whether a command was recognized or executed.

Regarding claim(s) 13, this/these system claim(s) has/have similar limitations as method claim(s) 2, and is/are therefore rejected on similar grounds.

Regarding claim 14, Lin discloses the projection system as claimed in claim 12, wherein the terminal device (102a) is configured to send the original instruction to the projection device (106a), and the projection device (106a) is configured to send the original instruction to the cloud server (104a) (¶0084 discloses the voice assistant 102a is integrated into the projector 106a and the projector 106a can thus integrally perform the operations originally respectively performed by the voice assistant 102a and the projector 106a of FIG. 1; and ¶0036 discloses when the voice assistant 102a receives the voice signal for powering on the Company A projector, the voice assistant 102a transmits the voice signal to the cloud service platform 104a).

Regarding claim(s) 15-16, this/these system claim(s) has/have similar limitations as method claim(s) 4 and 5 respectively, and is/are therefore rejected on similar grounds.

7. Claim(s) 23 and 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Faaborg.

Regarding claim 23, Lin doesn’t disclose the projection device as claimed in claim 22, wherein the processor is configured to: convert a speech instruction into a text instruction and use the text instruction as the original instruction. However, in the same field of endeavor, Faaborg discloses wherein the processor (100) is configured to: convert a speech instruction into a text instruction and use the text instruction as the original instruction (¶0081 discloses computing device 100 may or may not transcribe the speech into text. Computing device 100 may cause one of the display devices, such as presence-sensitive input display 105, projector 120, presence-sensitive display 128, or presence-sensitive display 132 to output a graphical element indicating that audio data is being received). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Lin so that the user's audio never leaves the device, reducing the risk of misuse.
Regarding claim 26, Lin discloses a terminal device (102a) configured to control a projection device (106a) (¶0036 and ¶0037 discloses control a projector using a voice assistant 102a), wherein the terminal device (102a) comprises: in response to the original instruction corresponding to an operation of the projection device (106a) (¶0044 discloses the cloud service platform 104a may analyze the first control command CMD1 according to the first semantic analyzing program, acquire/retrieve or generate the corresponding second control command CMD2 according to the first control command CMD1), output a projector control code through the terminal device (102a) to the projection device (106a) to drive the projection device (106a) to execute an operation corresponding to the projector control code (¶0045 discloses the cloud service platform 104a may transmit … the second control command CMD2 corresponding to ‘power on’ to the management server 108; ¶0047 discloses the management server 108 may… adjust the projector 106a as a first operating state (e.g., powering on, powering off, increasing/decreasing brightness, increasing/decreasing contrast ratio, starting OSD, etc.) according to the corresponding second control command CMD2), wherein the projector control code corresponds to the original instruction (¶0047 discloses the management server 108 may… adjust the projector 106a as a first operating state (e.g., powering on, powering off, increasing/decreasing brightness, increasing/decreasing contrast ratio, starting OSD, etc.) 
and projection device (106a) information (¶0041 discloses the projector 106a has been registered on the management server 108 in advance based on the first alias AL1 and the unique identification information (e.g., a serial number) of the projector 106a.); and in response to the original instruction not corresponding to the operation of the projection device, [relay] feedback information (¶0046 discloses if the second control command CMD2 corresponding to the first control command CMD1 (e.g., turning on the light) input by the user is not present on the cloud service platform 104a, the cloud service platform 104a may control the voice assistant 102a to output/reply a response sentence of recognition failure (e.g., “Sorry, I don't know what you mean.”)).

Lin doesn’t disclose a screen configured to display a control interface; and a processor coupled to the screen, wherein the processor is configured to: receive an original instruction through the control interface; and …display feedback information through the control interface.

However, in the same field of endeavor, Faaborg discloses a screen configured to display a control interface (Fig. 1 and ¶0023 discloses a user may provide input (e.g., one or more tap or non-tap gestures, etc.) at or near locations of UID 4 that correspond to locations of user interface 16 at which one or more graphical elements are being displayed as the user interacts with user interface 16 to command computing device 2 to perform a function); and a processor (40) coupled to the screen (Fig. 2 illustrates processor 40 coupled to user interface device 4), wherein the processor (40) is configured to: receive an original instruction through the control interface (Fig. 1 and ¶0023 discloses a user may provide input (e.g., one or more tap or non-tap gestures, etc.)
at or near locations of UID 4 that correspond to locations of user interface 16 at which one or more graphical elements are being displayed as the user interacts with user interface 16 to command computing device 2 to perform a function); and …display feedback information through the control interface (¶0014, ¶0082 and ¶0101 discloses in the event that an absence of a voice-initiated action has been determined by computing device 200, computing device 200 may cause graphical element 204-D to ‘shake’ as an indication that a voice-initiated action has not been determined from a voice input). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Lin for the purpose of avoiding ambiguity about whether a command was recognized or executed.

8. Claim(s) 6-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Zhang and in further view of Paluch et al. (US 2014/0280983 A1, hereinafter referred to as “Paluch”).

Regarding claim 6, Lin as modified doesn’t disclose the control method for the projection device as claimed in claim 1, further comprising: scanning pairing information of the projection device through the terminal device to obtain projection device information of the projection device.
However, in a similar field of endeavor, Paluch discloses scanning pairing information of the projection device (¶0028 discloses user device 320 can also be a digital cable card device, digital television, WebTV client, video game device, digital video disk (DVD) player, digital television streaming device, entertainment computer, and the like) through the terminal device (¶0029 discloses the control device 321 can be a mobile computing device such as a smart phone) to obtain projection device information of the projection device (¶0066 discloses output identification method 665b has been described in terms of an audio tone, similar methods wherein the audio tone is replaced with a bar code, quick response code (QRC), or other visual indicator can be used in place of the tone. The visual indicator can be displayed on a television or monitor attached to the user device 320 and received on the control device 321 using a still or video camera; and ¶0046 discloses server 325 can attempt to identify one or more user devices 320 in proximity to the control device 321 for pairing). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Lin for the purpose of allowing a guest non-subscriber to access content from the subscriber's account on the guest's mobile device (¶0001).

Regarding claim 7, Lin discloses the control method for the projection device (106a) as claimed in claim 6, wherein the step of receiving the standard instruction through the cloud server (104a) further comprises: converting the standard instruction into a projector control code (¶0042 discloses the cloud service platform 104a may include a first semantic analyzing program and a plurality of second control commands CMD2.
Each of the second control commands CMD2 is, for example, a control command (e.g., powering on, powering off, increasing/decreasing brightness, increasing/decreasing contrast ratio, starting OSD, etc.) pre-established by the manufacturer of the projector 106a) according to the projection device (106a) information through the cloud server (104a) (¶0041 discloses the projector 106a has been registered on the management server 108 in advance based on the first alias AL1 and the unique identification information (e.g., a serial number) of the projector 106a.).

Regarding claim 8, Lin discloses the control method for the projection device (106a) as claimed in claim 7, wherein the step of controlling the projection device (106a) according to the standard instruction further comprises: receiving the projector control code from the cloud server (104a) through the projection device (106a) (¶0045 discloses the cloud service platform 104a may transmit … the second control command CMD2 corresponding to ‘power on’ to the management server 108; ¶0047 discloses the management server 108 may… adjust the projector 106a as a first operating state (e.g., powering on, powering off, increasing/decreasing brightness, increasing/decreasing contrast ratio, starting OSD, etc.) according to the corresponding second control command CMD2), or receiving the projector control code from the cloud server (104a) through the terminal device (102a) and sending the projector control code to the projection device (106a).

9. Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Zhang and in further view of Gagnon et al. (US 8,165,886 B1, hereinafter referred to as “Gagnon”).
Regarding claim 10, Lin discloses the control method for the projection device (106a) as claimed in claim 1, wherein the step of generating the standard instruction according to the original instruction… comprises: in response… determining the original instruction being a single control instruction, generating the standard instruction corresponding to a single operation …(¶0091 discloses the management server 108 may access/control the projector 106a in response to the first alias AL1 and turn on a first application of the projector 106a according to the second control command CMD2). Lin as modified doesn’t disclose a natural language model; and in response to the natural language model determining the original instruction being a multiple control instruction or a complex control instruction, generating the standard instruction corresponding to a plurality of single operations through the natural language model.

However, in the same field of endeavor, Gagnon discloses a natural language model (col. 4, lines 44-47); and in response to the natural language model (col. 4, lines 44-47 discloses at least one of a temporal analysis, natural language analysis, and syntactic analysis may be used to determine a context of the speech input) determining the original instruction being a multiple control instruction (col. 28, line 67) or a complex control instruction (col. 16, lines 7-8), generating the standard instruction corresponding to a plurality of single operations through the natural language model (col. 28, line 66 to col. 29, line 14 discloses the system is designed to accept one command per input stream, and multiple commands are input one at a time, however it may be desirable to allow the user to input more than one command per input stream. Note that this differs from a single command with multiple parameters). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Lin so that the user can issue one intent instead of multiple step-by-step commands.

10. Claim(s) 17-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Zhang, further in view of Faaborg, and further in view of Paluch. Regarding claim(s) 17-19, this/these system claim(s) has/have similar limitations as method claim(s) 6-8, respectively, and is/are therefore rejected on similar grounds.

11. Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lin in view of Zhang, further in view of Faaborg, and further in view of Gagnon. Regarding claim(s) 21, this/these system claim(s) has/have similar limitations as method claim(s) 10, and is/are therefore rejected on similar grounds.

Allowable Subject Matter

12. Claims 9, 20 and 27 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PRIYANK J SHAH, whose telephone number is (571) 270-3732. The examiner can normally be reached 10:00-6:00 M-F. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, LunYi Lao, can be reached at (571) 272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. 
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PRIYANK J SHAH/
Primary Examiner, Art Unit 2621

Prosecution Timeline

Jul 26, 2024
Application Filed
Dec 27, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602195
METHOD AND APPARATUS FOR CONTROLLING DEVICE BASED ON TAPPING AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12592186
DISPLAY DEVICE AND DISPLAY APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12592206
DISPLAY DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12591313
WIDGET INTERACTION FOR EXTENDED REALITY (XR) APPLICATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12567354
DISPLAY DEVICE AND METHOD OF DRIVING THEREOF
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
67%
Grant Probability
86%
With Interview (+18.4%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 584 resolved cases by this examiner. Grant probability derived from career allow rate.
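The projections above follow directly from the examiner statistics: 392 grants out of 584 resolved cases gives the 67% career allow rate, and adding the 18.4-point interview lift yields the 86% with-interview figure. A minimal sketch of that arithmetic, assuming the dashboard combines the two figures additively in percentage points (the tool's actual model is not disclosed):

```python
# Hypothetical reconstruction of the dashboard's headline numbers from the
# examiner statistics shown above. The additive percentage-point model for
# the interview lift is an assumption, not the tool's documented method.

def career_allow_rate(granted: int, resolved: int) -> float:
    """Share of the examiner's resolved cases that ended in a grant."""
    return granted / resolved

def with_interview(base_rate: float, lift_pp: float) -> float:
    """Apply an interview lift given in percentage points, capped at 100%."""
    return min(base_rate + lift_pp / 100.0, 1.0)

base = career_allow_rate(392, 584)    # 392 granted / 584 resolved
boosted = with_interview(base, 18.4)  # +18.4 pp interview lift

print(f"{base:.0%}")     # 67%
print(f"{boosted:.0%}")  # 86%
```

The cap at 100% matters only for high-allowance examiners, where a raw additive lift would otherwise produce an impossible probability.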
