Prosecution Insights
Last updated: April 19, 2026
Application No. 17/755,046

SIGNAL PROCESSING DEVICE AND DISPLAY APPARATUS FOR VEHICLES INCLUDING THE SAME

Non-Final OA: §103, §112
Filed: Apr 20, 2022
Examiner: NGUYEN, TUAN MINH
Art Unit: 2198
Tech Center: 2100 — Computer Architecture & Software
Assignee: LG Electronics Inc.
OA Round: 3 (Non-Final)
Grant Probability: 50% (Moderate)
Projected OA Rounds: 3-4
Time to Grant: 3y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% of resolved cases (7 granted / 14 resolved; -5.0% vs TC avg)
Interview Lift: +57.9% (strong)
Typical Timeline: 3y 10m avg prosecution; 23 currently pending
Career History: 37 total applications across all art units

Statute-Specific Performance

§101: 26.6% (-13.4% vs TC avg)
§103: 46.5% (+6.5% vs TC avg)
§102: 5.8% (-34.2% vs TC avg)
§112: 20.5% (-19.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 14 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/06/2025 has been entered.

Claims 1-20 are pending. Claims 1 and 20 are in independent form. Claims 1, 11, and 20 are amended.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to the applicant's remarks and arguments filed on 11/06/2025. Claims 1, 11, and 20 were amended. Claims 1-20 remain pending in the application and are being considered on the merits.

The amendment filed on 11/06/2025 is objected to under 35 U.S.C. 132(a) because it introduces new matter into the disclosure. 35 U.S.C. 132(a) states that no amendment shall introduce new matter into the disclosure of the invention. The added material that is not supported by the original disclosure is as follows: the independent claims recite the limitation "receives memory data based on data communication through the physical device driver". The examiner cannot find support for the above limitation in the Specification filed on 04/20/2022. If the applicant believes the above limitation is supported by the Specification filed on 04/20/2022, the applicant should clearly point out which paragraphs or Figures contain it. Applicant is required to cancel the new matter in the reply to this Office Action.
Response to Arguments

The applicant's remarks and arguments filed on 11/06/2025 have been fully considered with the following results. The examiner is entitled to give claim limitations their broadest reasonable interpretation in light of the specification. See MPEP 2111 [R-1], Interpretation of Claims - Broadest Reasonable Interpretation. The applicant always has the opportunity to amend the claims during prosecution, and broad interpretation by the examiner reduces the possibility that the claim, once issued, will be interpreted more broadly than is justified. In re Prater, 162 USPQ 541, 550-51 (CCPA 1969).

Response to Drawings Objection Remarks

Applicant's argument filed on 11/06/2025 regarding the Drawings Objection has been fully considered. The Drawings Objection has been withdrawn.

Response to 35 USC § 103 Remarks

Applicant's arguments filed on 11/06/2025 regarding the 35 USC § 103 rejections have been fully considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C.
112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claim 1, the claim recites the limitation "receives memory data based on data communication through the physical device driver". The examiner cannot find support for this limitation in the Specification filed on 04/20/2022. The closest support the examiner can find is in paragraphs [0143] and [0190], which disclose the "memory data", but no paragraph discloses "based on data communication through the physical device driver". If the applicant believes the above limitation is supported by the Specification filed on 04/20/2022, the applicant should clearly point out which paragraphs or Figures contain it. Claim 20 is also rejected for the same reason as stated in the rejection of claim 1. Claims 2-19 are also rejected due to their dependence from the rejected independent claim 1.

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, the claim recites the limitation: "the first virtual machine in the processor includes a physical device driver and receives memory data based on data communication through the physical device driver ......... the second virtual machine and the third virtual machine do not include a physical device driver". There is insufficient antecedent basis for this limitation in the claim. It is unclear whether the "physical device driver" that the second and third virtual machines do not include is the same as the "physical device driver" included in the first virtual machine. Claim 20 is also rejected for the same reason as stated in the rejection of claim 1. Claims 2-19 are also rejected due to their dependence from the rejected independent claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 12, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over NARAYAN et al., US Pub. No. US 20200218443 A1 (hereafter NARAYAN), in further view of ROPER et al., US Pub. No. US 20210264559 A1 (hereafter ROPER), ATSMON et al., US Pub. No. US 20170039084 A1 (hereafter ATSMON), and the OSDev Wiki NPL "VirtIO".

Regarding claim 1, NARAYAN teaches the invention substantially as claimed: "A signal processing device comprising a processor configured to perform signal processing for a display located in a vehicle" (FIG. 1, FIG. 2, and [0109]: "The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein."). The citations disclose at FIG. 1 the displays located in a vehicle, at FIG. 2 the system on chip that can be installed under the dashboard 220 or another place within the vehicle, and at [0109] a digital signal processor for processing the signal.
NARAYAN also teaches "wherein the processor executes first to third virtual machines on a hypervisor in the processor" ([0052]: "The SoC 202 can include a number of virtual machines, for example, virtual machine A 205 and virtual machine B 207. The virtual machines are managed by a hypervisor 203."). The citation discloses that a number of virtual machines (the first to third virtual machines) are executed and managed by the hypervisor.

NARAYAN further teaches "the first virtual machine in the processor .........., and receives memory data based on data communication through the physical device driver or receives touch input to the first display or the second display and transmits information regarding the touch input to the second virtual machine or the third virtual machine" (FIG. 1, FIG. 3, and [0062]: "When guest OS A 209 receives a request from a user—for example, a driver or passenger selecting a song to play at a control display—the event dispatcher 319 can launch an event, which can be processed by an event listener 309 registered with the event dispatcher 319."; and [0023]: "Each of the first control display and the second control display is a touchscreen display capable of displaying an user-touch selectable application menu"). The citations disclose at FIG. 1 the control environment comprising multiple display areas (A, B, C, and a driver tablet), at [0023] a user-touch-selectable menu (touch input) on the first or second control display, and at FIG. 3 and [0062] that when the user provides an input, it is transferred to VM B 207 (the first VM), which has the event listener 309 for processing, and then transferred back to the event dispatcher on VM A 205 (the second VM) for launching, i.e., transmitting information regarding the touch input to the second virtual machine.

NARAYAN further teaches "the second virtual machine is operated for a first display" (FIG.
2 and [0055]: "For one embodiment, the image composition logic 218 can be invoked when guest OS A is booted, and can operate to combine a pre-determined background image 217 and an image from driving-critical applications 215 to create a composite image, and send 221 the composite image to the hardware platform 204. The image composition logic 218 can also determine when to display images only from guest OS A 209 (e.g., when a vehicle is backing up)"). The citation discloses that VM A (the second VM) comprises the image composition logic 218 to send the image 221 to the hardware platform 204 for display on the dashboard screen 222 (the first display).

NARAYAN does not explicitly teach: the third virtual machine is operated for a second display; the first virtual machine in the processor includes a physical device driver; the second virtual machine and the third virtual machine do not include a physical device driver; the second virtual machine includes a virtio-backend interface to receive the memory data from the first virtual machine; and the first virtual machine in the processor is configured to receive and process vehicle sensor data and camera image data, and transmit the processed vehicle sensor data or camera image data to the second virtual machine or the third virtual machine.

However, ROPER teaches "the third virtual machine is operated for a second display" (e.g., FIG. 16 and [0160]: "FIG. 16 illustrates one such embodiment which implements a virtual display model for an in-vehicle infotainment (IVI) system.
In the illustrated embodiment, a real-time OS (RTOS) 1670 and associated apps 1680 are supported by primary service/host virtual machine (VM) 1601, the instrument cluster apps 1681 are executed on an RTOS 1671 within an instrument cluster VM 1602, front infotainment apps 1682 are executed on a Linux/Android OS 1672 within a front infotainment VM 1603"). The citation discloses the front infotainment VM (third VM) operating the front navigation infotainment display 1611 (the second display).

ROPER also teaches "the first virtual machine in the processor includes a physical device driver" (e.g., FIG. 16, [0163]: "Each operating system includes an assigned graphics driver for accessing graphics processing resources of the GPU 1648 and display 1630. The RTOS 1670 of the service/host VM 1601, for example, includes a host GPU driver 1660 (which is not a virtual driver in one embodiment)."; and [0165]: "As the VMs 1602-1604 are unaware of the virtualized execution environment, the hypervisor 1650 traps instructions/commands generated from the VDDs 1665-1667 and invokes the backend services 1661 in the service/host VM 1601 to configure the hardware display through the host GPU driver 1660 (a PF driver) on behalf of the requesting virtual function driver 1662-1664, in accordance with the posted framebuffer descriptor."). The citations disclose that the host VM (first VM) comprises the host GPU driver (a physical device driver).

ROPER further teaches "the second virtual machine and the third virtual machine do not include a physical device driver" (e.g., FIG. 16, [0163]: "Each operating system includes an assigned graphics driver for accessing graphics processing resources of the GPU 1648 and display 1630. The RTOS 1670 of the service/host VM 1601, for example, includes a host GPU driver 1660 (which is not a virtual driver in one embodiment).
The operating systems 1671-1673 of the other VMs 1602-1604 include virtual function drivers (VFDs) 1662-1664, respectively, each of which includes a virtual display driver (VDD) component 1665-1667, respectively."). The citation discloses at FIG. 16 that the other VMs (VMs 1602-1604, i.e., the second and third VMs) include only a virtual display driver (VDD) and do not include the host GPU driver. Therefore, it implies that the VMs 1602-1604 do not include a physical device driver.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitations "the third virtual machine is operated for a second display; the first virtual machine in the processor includes a physical device driver; the second virtual machine and the third virtual machine do not include a physical device driver", as taught in ROPER's invention, into NARAYAN's invention because doing so improves touch input handling by ensuring each display processes inputs independently to reduce conflicts and enhance responsiveness, and also increases system stability and optimizes resource allocation in the vehicle display system.

However, ATSMON teaches "the second virtual machine includes [an] interface to receive the memory data from the first virtual machine" (e.g., [0069]: "As shown at 404, the memory 108 of the ADAS processing unit(s) 103 or the ADAS server is accessed via the hypervisor 107 to acquire the ADAS data and to use it as an input for processing an ADAS enhancing function or an IVI function. For example, this process allows the IVI VM for example to receive data captured by vehicle sensors and processed by the ADAS processing unit(s) 103. The IVI VM can now analyze the received data and/or transmit it to a remote server via a network interface. The data may be single images, continuous video and/or multiple frames received by the ADAS server 102, optionally with image or video metadata (e.g.
time, location and/or the like) as received from one or more image sensors and/or other sensors which are directly connected to the ADAS server 102."). The citation discloses at [0069] the IVI VM (second VM), which receives the data captured by vehicle sensors and processed by the ADAS in the memory 108 (memory data) for analysis. As the ADAS server (first VM) and the IVI VM (second VM) can exchange data, it implies that there is an interface between the VMs for data exchange.

ATSMON also teaches "the first virtual machine in the processor is configured to receive and process vehicle sensor data and camera image data" ([0046]: "The SoC 100 hosts, for example on an on-chip memory 101, an ADAS server 102 adapted to receive data from a plurality of in vehicle sensors...... For example, the vehicle sensors may include one or more cameras in a cabin of the vehicle that may capture images of the passengers and/or driver and/or scene information, such as lighting conditions within the vehicle or weather outside of the vehicle 102. Vehicle sensors may include one or more global positioning system (GPS) devices, Radar sensor, Light Detection And Ranging (LIDAR) system sensor, microphones, seat weight sensors, and/or other type of data gathering elements associate with the vehicle. Other vehicle sensors examples include laser sensors, infrared sensors, accelerometers, or any combination thereof.
In addition, the ADAS server 102 may have a direct access to hardware resources such as a CAN bus, Vehicle-to-Everything (V2X) interface and/or any other interfaces."; [0047]: "The ADAS server 102 is hosted and executes, using ADAS designated processing unit(s) 103, ADAS functions intended to help a driver in the driving process for increasing vehicle and road safety, for instance by providing alert and convenience and/or for operating vehicle control functions which operate vehicle system automatically."; and [0059]: "the ADAS server 102 has direct access to hardware resources such as vehicle sensors"). The citations disclose at [0046] that the ADAS server 102 (first VM) receives data from a plurality of in-vehicle sensors, and that the vehicle comprises multiple sensors and one or more cameras for capturing images. Paragraph [0047] discloses that the ADAS server uses the ADAS processing unit 103 to execute the ADAS functions, and [0059] discloses that the ADAS server (first VM) has direct access to the vehicle sensors.

ATSMON further teaches "and transmit the processed vehicle sensor data or camera image data to the second virtual machine or the third virtual machine" ([0030]: "The hypervisor allows acquiring output(s) of ADAS functions executed on an ADAS designated processor and using these output(s) for completing functions of the one or more virtual machines, for instance ADAS enhancing functions or IVI functions."; [0050]: "The ADAS enhancing VM 105 is adapted to execute ADAS enhancing functions. An ADAS enhancing function is a function that receives outputs of ADAS functions executed by the ADAS server as an input for calculating an enhanced data not provided by the ADAS server 102"; and FIG. 5 and [0069]: "Reference is now also made to FIG.
5 which is a flowchart 400 of an exemplary usage of data calculated by an ADAS function(s) executed using the ADAS processing unit(s) 103 as an input for an ADAS enhancing function or an IVI function executed by the application processing unit(s) 104, according to some embodiments of the present invention....... As shown at 404, the memory 108 of the ADAS processing unit(s) 103 or the ADAS server is accessed via the hypervisor 107 to acquire the ADAS data and to use it as an input for processing an ADAS enhancing function or an IVI function. For example, this process allows the IVI VM for example to receive data captured by vehicle sensors and processed by the ADAS processing unit(s) 103. The IVI VM can now analyze the received data and/or transmit it to a remote server via a network interface."). The citations disclose at [0030] that the output from the execution of ADAS functions can be acquired and used by other VMs. Paragraph [0050] discloses that the ADAS enhancing VM 105 (third VM) executes the ADAS enhancing function that receives output from ADAS functions executed by the ADAS server, and [0069] discloses the IVI VM (second VM), which receives and analyzes the data captured by vehicle sensors and processed by the ADAS.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitations "the first virtual machine in the processor is configured to receive and process vehicle sensor data and camera image data, and transmit the processed vehicle sensor data or camera image data to the second virtual machine or the third virtual machine", as taught in ATSMON's invention, into NARAYAN's invention because using a dedicated virtual machine to analyze the data from the vehicle's multiple sensors makes the system faster and more responsive, since data transferred to the other VMs is already processed, improving and maintaining smooth cooperation between VMs without adding extra load to the other VMs.
However, OSDevWiki teaches "a virtio-backend interface" (e.g., page 1: "VirtIO is a standardized interface which allows virtual machines access to simplified "virtual" devices, such as block devices, network adapters and consoles. Accessing devices through VirtIO on a guest VM improves performance over more traditional "emulated" devices, as VirtIO devices require only the bare minimum setup and configuration needed to send and receive data, while the host machine handles the majority of the setup and maintenance of the actual physical hardware."). The citation discloses the VirtIO interface that enables the virtual machines to send and receive data with virtual devices (the second VMs).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the virtio-backend interface, as taught by OSDevWiki, into NARAYAN's invention because using the VirtIO interface for sending and receiving data makes the system faster and improves performance, since VirtIO devices require only the bare minimum setup and configuration needed to send and receive data.

Regarding claim 12, NARAYAN, in view of ROPER, ATSMON, and OSDevWiki, teaches the signal processing device of claim 1, and NARAYAN further teaches "wherein the first virtual machine is configured to transmit a composite overlay generated by combining an overlay from the second virtual machine and at least one overlay from the third virtual machine with each other to the second virtual machine or the third virtual machine" (NARAYAN, FIG. 2 and [0057]: "For one embodiment, after the guest operating system B 211 boots, it can create a transparent composite image from the infotainment applications 213, and send the transparent composite image to the guest operating system A 209, where the image composition logic 218 can combine the composite image with the composite image previously created, to create an overall composite image.
The overall composite image, when being displayed on the dashboard screen, can show the seamless merging of the transparent image with the background image."). The citation discloses that guest operating system B 211, which runs on VM B (the first VM), creates a transparent composite image (composite image) and then sends it to guest operating system A 209, which runs on VM A (the second VM), for display. The teaching of NARAYAN does not explicitly indicate an example of a system containing three VMs, but [0052] ("The SoC 202 can include a number of virtual machines, for example, virtual machine A 205 and virtual machine B 207.") discloses that the SoC 202 can have a number of VMs, the two VMs being an example system. However, ROPER discloses at FIG. 16 that the service/host VM 1601 is a virtual machine (first virtual machine), the instrument cluster VM 1602 is a second virtual machine, and the front infotainment VM 1603 is a third virtual machine. By combining the method from NARAYAN with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention.

Regarding claim 18, NARAYAN, in view of ROPER, ATSMON, and OSDevWiki, discloses the signal processing device of claim 1, and ROPER further teaches "wherein the processor further executes a fourth virtual machine operated for a third display on the hypervisor in the processor" (ROPER, FIG. 16 and [0160]: "rear infotainment apps 1683 are executed on a Linux/Android OS 1673 within a rear infotainment VM 1604."). The citation discloses the rear infotainment VM 1604 (fourth virtual machine) operating the rear game infotainment display 1612 (the third display). NARAYAN further teaches "and the first virtual machine in the processor is configured to receive touch input to any one of the first display to the third display and transmit information regarding the received touch input to any one of the second virtual machine to the fourth virtual machine" (NARAYAN, FIG. 1, FIG.
3 and [0062]: "When guest OS A 209 receives a request from a user—for example, a driver or passenger selecting a song to play at a control display—the event dispatcher 319 can launch an event, which can be processed by an event listener 309 registered with the event dispatcher 319."; and [0023]: "Each of the first control display and the second control display is a touchscreen display capable of displaying an user-touch selectable application menu"). The citations disclose at FIG. 1 the control environment comprising multiple display areas (A, B, C, and a driver tablet), at [0023] a user-touch-selectable menu (touch input) on the first or second control display, and at FIG. 3 and [0062] that when the user provides an input, it is transferred to VM B 207 (the first VM), which has the event listener 309 for processing, and then transferred back to the event dispatcher on VM A 205 (the second VM) for launching, i.e., transmitting information regarding the touch input to the second virtual machine.

Regarding claim 19, NARAYAN, in view of ROPER, ATSMON, and OSDevWiki, discloses the signal processing device of claim 1, and ROPER further teaches "transmits an overlay indicating the processed wheel speed sensor data or speed information corresponding to the processed wheel speed sensor data to at least one of the second virtual machine or the third virtual machine" (ROPER, [0155]: "In such a system, the VF display model is used to drive local display functionalities in a virtual machine (VM) by directly posting the guest frame buffer to the local monitor or exposing the guest frame buffer information to the host. For example, in current In-Vehicle Infotainment (IVI) systems, there is a trend to use virtualization technology to consolidate a safety-critical digital instrument cluster which displays safety metrics (e.g.
speed, torque and so on) along with some IVI systems displaying infotainment Apps"). The citation discloses that the VF (virtual function) display model uses a VM to show its graphic output on a local monitor, and that one of the VMs is responsible for running the safety-critical digital instrument cluster which displays safety metrics, such as speed.

ATSMON further teaches "wherein the first virtual machine receives and processes wheel speed sensor data of the vehicle" ([0046]: "The SoC 100 hosts, for example on an on-chip memory 101, an ADAS server 102 adapted to receive data from a plurality of in vehicle sensors...... For example, the vehicle sensors may include one or more cameras in a cabin of the vehicle that may capture images of the passengers and/or driver and/or scene information, such as lighting conditions within the vehicle or weather outside of the vehicle 102. Vehicle sensors may include one or more global positioning system (GPS) devices, Radar sensor, Light Detection And Ranging (LIDAR) system sensor, microphones, seat weight sensors, and/or other type of data gathering elements associate with the vehicle. Other vehicle sensors examples include laser sensors, infrared sensors, accelerometers, or any combination thereof. In addition, the ADAS server 102 may have a direct access to hardware resources such as a CAN bus, Vehicle-to-Everything (V2X) interface and/or any other interfaces."; and [0047]: "The ADAS server 102 is hosted and executes, using ADAS designated processing unit(s) 103, ADAS functions intended to help a driver in the driving process for increasing vehicle and road safety, for instance by providing alert and convenience and/or for operating vehicle control functions which operate vehicle system automatically. Example of an ADAS functions ...... adaptive cruise control (ACC), ........
intelligent speed adaptation or intelligent speed advice (ISA)"; and [0059]: "the ADAS server 102 has direct access to hardware resources such as vehicle sensors"). The citations disclose at [0046] that the ADAS server 102 (first VM) receives data from a plurality of in-vehicle sensors, and at [0047] that the ADAS server uses the ADAS processing unit 103 to execute the ADAS functions, some of which, such as ACC and intelligent speed adaptation or intelligent speed advice (ISA), relate to the speed of the vehicle. Therefore, [0046] and [0047] imply that one of the vehicle sensors must be a wheel speed sensor. Paragraph [0059] discloses that the ADAS server (first VM) has direct access to the vehicle sensors.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitation "wherein the first virtual machine receives and processes wheel speed sensor data of the vehicle", as taught in ATSMON's invention, into NARAYAN's and ROPER's invention because using a dedicated virtual machine to analyze the data from the wheel speed sensor enables more efficient transmission of speed-related information overlays to the second or third virtual machine, maintains smooth cooperation between VMs, and improves driver awareness and passenger interaction with vehicle performance metrics.

Regarding claim 20, the claim is a display apparatus for vehicles claim having limitations similar to those cited in claim 1. Thus, claim 20 is also rejected under the same rationale as cited in the rejection of claim 1.

Claims 2, 4-8, 13, 14, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over NARAYAN, ROPER, ATSMON, and OSDevWiki, in further view of LEE et al., US Pub. No.
US 20200159562 A1 (hereafter LEE).

Regarding claim 2, NARAYAN, in view of ROPER, ATSMON, and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach "wherein the touch input is not input to the second virtual machine and the third virtual machine". However, LEE teaches "wherein the touch input is not input to the second virtual machine and the third virtual machine" (FIG. 3, FIG. 8, and [0099]: "For example, referring to FIG. 8, while displaying the screen on the first display 240, the second display 245, or a combination thereof, the processor 210 may receive the user input using the input device 230. For example, using a touch sensor 810 included in the input device 230, the processor 210 may receive a touch input as the user input. Through a touch driver 820 included in the first system software that is controlled using a hypervisor 815 (e.g., the hypervisor 537), the touch sensor 810 may provide information about the touch input, to an input manager 825 included in the first system software that is controlled using the hypervisor 815. The input manager 825 may provide the information about the touch input, to a projection receiver 830 (e.g., the projection receiver 545) included in the first system software that is controlled using the hypervisor 815. The projection receiver 830 may provide the information about the touch input, to a device driver 835 included in the first system software that is controlled using the hypervisor 815. The device driver 835 may provide the information about the touch input, to a connector 840 (e.g., the connector 535). The connector 840 may transmit the information about the touch input, to a connector 845 (e.g., the connector 530) of the first electronic device 101. The connector 845 may provide the information about the touch input, to a device driver 855 (e.g., the device driver 520) included in the first system software that is controlled using the hypervisor 850 (e.g., the hypervisor 525).
The device driver 855 may provide the information about the touch input, to a projection sender 860 (e.g., the projection sender 515) included in the first system software that is controlled using the hypervisor 850. The projection sender 860 may provide the information about the touch input, to an input manager 865 included in the first system software that is controlled using the hypervisor 850. The input manager 865 may acquire data about a position of receiving the touch input, from the information about the touch input, and provide the acquired data to an input positioner 870 included in the first system software that is controlled using the hypervisor 850. Based on the data, the input positioner 870 may identify a function associated with the touch input.”) The citations disclose at FIG. 3 and [0099] that the processor 210 of the second electronic device 102 receives the touch input and transfers it to the processor 120 of the first electronic device 101. The teaching of LEE does not explicitly indicate that the second electronic device is the first virtual machine and the first electronic device is the second virtual machine. However, ROPER discloses at FIG. 16 that the Service/Host VM 1601 is a virtual machine/first virtual machine, an instrument cluster VM 1602/second virtual machine, and a front infotainment VM 1603/third virtual machine. By combining the method from LEE with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention.
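The delivery chain quoted above routes every raw touch event through the input stack of a single hypervisor-controlled host VM; the guest VMs never see the physical touch driver. A minimal sketch of that topology follows; the class and method names are hypothetical illustrations, not code from LEE or ROPER:

```python
from dataclasses import dataclass


@dataclass
class TouchEvent:
    x: int
    y: int


class GuestVM:
    """Second/third VM in a ROPER-style layout: never receives raw touch input."""
    def __init__(self, name):
        self.name = name
        self.inbox = []  # only host-forwarded messages would land here


class HostVM:
    """First (service/host) VM: the only VM attached to the physical touch driver."""
    def __init__(self, guests):
        self.guests = guests

    def on_raw_touch(self, event):
        # The raw event terminates here; guests receive nothing unless the
        # host explicitly forwards derived information later.
        return f"raw touch ({event.x}, {event.y}) consumed by host VM"


host = HostVM([GuestVM("instrument_cluster"), GuestVM("infotainment")])
print(host.on_raw_touch(TouchEvent(120, 45)))  # raw touch (120, 45) consumed by host VM
print(all(len(g.inbox) == 0 for g in host.guests))  # True: guests saw nothing
```

This is only a structural illustration of the claim 2 mapping (touch input not delivered to the second and third virtual machines), under the assumption that the host VM owns the physical driver.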
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitation wherein the touch input is not input to the second virtual machine and the third virtual machine, as taught in LEE’s invention, into NARAYAN, ROPER, ATSMON and OSDevWiki’s invention, because centralizing touch input processing within the first virtual machine helps to optimize processing efficiency, since the system can reduce redundancy and improve resource management. In addition, it also helps to maintain stability and fault tolerance: since only one VM manages the input, faults elsewhere will not disrupt the touch input processing, and overall system stability is maintained. Regarding claim 4, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein, in response to touch input corresponding to an overlay provided by the third virtual machine, among a plurality of overlays displayed on the first display, the first virtual machine in the processor is configured to transmit the information regarding the touch input not to the second virtual machine but to the third virtual machine. However, LEE teaches wherein, in response to touch input corresponding to an overlay provided by the third virtual machine, among a plurality of overlays displayed on the first display (e.g. [0093]: “For example, referring to FIG. 6, the processor 210 may display a screen 610 on the display 555 (e.g., the first display 240) …… The screen 610 may include …... at least one second visual object 602-1, 602-2, 602-3, 602-4, 602-5 (which may be referred to hereinafter as 602-1 to 602-5) provided from a second virtual machine.” and e.g. [0096]: “For example, referring to FIG. 7, while displaying the screen 610, the processor 210 may …….
receive a user input 710 to the at least one second visual object 602-5.”) The citations disclose at [0093] that the screen 610 is displayed on the first display 240 and includes the second visual object 602-5/overlay provided by the second virtual machine/third virtual machine. At [0096], user input 710/touch input to the second visual object 602-5 is received. the first virtual machine in the processor is configured to transmit the information regarding the touch input not to the second virtual machine but to the third virtual machine. ([0096]: “For another example, based on identifying that the user input corresponds to the at least one second visual object 602-5, the processor 120 of the first electronic device 101 may execute a function associated with the at least one second visual object 602-5 in the second virtual machine, and transmit information about the function executed in the second virtual machine, to the second electronic device 102.”) The citation discloses that, in response to the input, the first electronic device transmits the input to the second virtual machine/third virtual machine for execution. The teaching of LEE does not explicitly indicate that the first electronic device is the first virtual machine. However, ROPER discloses at FIG. 16 that the Service/Host VM 1601 is a virtual machine/first virtual machine. By combining the method from LEE with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention. Regarding claim 5, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein, in response to touch input corresponding to an overlay provided by the second virtual machine, among a plurality of overlays displayed on the first display, the first virtual machine in the processor is configured to transmit the information regarding the touch input to the second virtual machine.
However, LEE teaches wherein, in response to touch input corresponding to an overlay provided by the second virtual machine, among a plurality of overlays displayed on the first display (e.g. [0093]: “For example, referring to FIG. 6, the processor 210 may display a screen 610 on the display 555 (e.g., the first display 240) …… The screen 610 may include …... at least one first visual object 601-1, 601-2, 601-3, 601-4, 601-5 (which may be referred to hereinafter as 601-1 to 601-5) provided from a first virtual machine (e.g., the first virtual machine 310 of FIG. 3).” and [0096]: “For example, referring to FIG. 7, while displaying the screen 610, the processor 210 may ……. receive a user input 710 to the at least one first visual object 601-1.”) The citations disclose at [0093] that the screen 610 is displayed on the first display 240 and includes the first visual object 601-1/overlay provided by the first virtual machine/second virtual machine. Paragraph [0096] discloses that user input 710/touch input to the first visual object 601-1 is received. the first virtual machine in the processor is configured to transmit the information regarding the touch input to the second virtual machine. ([0096]: “For example, based on identifying that the user input 705 corresponds to the at least one first visual object 601-1, the processor 120 of the first electronic device 101 may execute a function associated with the at least one first visual object 601-1 in the first virtual machine migrated from the second electronic device 102, and transmit information about the function executed in the first virtual machine, to the second electronic device 102”) The citation discloses that, in response to the input, the first electronic device transmits the input to the first virtual machine/second virtual machine for execution. The teaching of LEE does not explicitly indicate that the first electronic device is the first virtual machine. However, ROPER discloses at FIG.
16 that the Service/Host VM 1601 is a virtual machine/first virtual machine. By combining the method from LEE with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention. Regarding claim 6, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein, in response to touch input corresponding to an overlay provided by the second virtual machine, among a plurality of overlays displayed on the second display, the first virtual machine in the processor is configured to transmit the information regarding the touch input to the second virtual machine. However, LEE teaches wherein, in response to touch input corresponding to an overlay provided by the second virtual machine, among a plurality of overlays displayed on the second display, (e.g. [0093]: “For example, referring to FIG. 6, the processor 210 may display a screen 610 on the display 555 (e.g., …... the second display 245) …… The screen 610 may include …... at least one first visual object 601-1, 601-2, 601-3, 601-4, 601-5 (which may be referred to hereinafter as 601-1 to 601-5) provided from a first virtual machine (e.g., the first virtual machine 310 of FIG. 3).” and [0096]: “For example, referring to FIG. 7, while displaying the screen 610, the processor 210 may ……. receive a user input 710 to the at least one first visual object 601-1.”) The citations disclose at [0093] that the screen 610 is displayed on the second display 245 and includes the first visual object 601-1/overlay provided by the first virtual machine/second virtual machine. Paragraph [0096] discloses that user input 710/touch input to the first visual object 601-1 is received. the first virtual machine in the processor is configured to transmit the information regarding the touch input to the second virtual machine.
([0096]: “For example, based on identifying that the user input 705 corresponds to the at least one first visual object 601-1, the processor 120 of the first electronic device 101 may execute a function associated with the at least one first visual object 601-1 in the first virtual machine migrated from the second electronic device 102, and transmit information about the function executed in the first virtual machine, to the second electronic device 102”) The citation discloses that, in response to the input, the first electronic device transmits the input to the first virtual machine/second virtual machine for execution. The teaching of LEE does not explicitly indicate that the first electronic device is the first virtual machine. However, ROPER discloses at FIG. 16 that the Service/Host VM 1601 is a virtual machine/first virtual machine. By combining the method from LEE with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention. Regarding claim 7, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein, in response to touch input corresponding to an overlay provided by the third virtual machine, among a plurality of overlays displayed on the second display, the first virtual machine in the processor is configured to transmit the information regarding the touch input to the third virtual machine. However, LEE teaches wherein, in response to touch input corresponding to an overlay provided by the third virtual machine, among a plurality of overlays displayed on the second display, (e.g. [0093]: “For example, referring to FIG. 6, the processor 210 may display a screen 610 on the display 555 (e.g., …... the second display 245) …… The screen 610 may include …...
at least one second visual object 602-1, 602-2, 602-3, 602-4, 602-5 (which may be referred to hereinafter as 602-1 to 602-5) provided from a second virtual machine.” and [0096]: “For example, referring to FIG. 7, while displaying the screen 610, the processor 210 may ……. receive a user input 710 to the at least one second visual object 602-5.”) The citations disclose at [0093] that the screen 610 is displayed on the second display 245 and includes the second visual object 602-5/overlay provided by the second virtual machine/third virtual machine. At [0096], user input 710/touch input to the second visual object 602-5 is received. the first virtual machine in the processor is configured to transmit the information regarding the touch input to the third virtual machine. ([0096]: “For another example, based on identifying that the user input corresponds to the at least one second visual object 602-5, the processor 120 of the first electronic device 101 may execute a function associated with the at least one second visual object 602-5 in the second virtual machine, and transmit information about the function executed in the second virtual machine, to the second electronic device 102.”) The citation discloses that, in response to the input, the first electronic device transmits the input to the second virtual machine/third virtual machine for execution. The teaching of LEE does not explicitly indicate that the first electronic device is the first virtual machine. However, ROPER discloses at FIG. 16 that the Service/Host VM 1601 is a virtual machine/first virtual machine. By combining the method from LEE with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention.
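The routing mapped against claims 4 through 7 above amounts to a hit-test dispatch rule: the first virtual machine determines which overlay the touch falls on and forwards the touch information only to the virtual machine that provided that overlay. A minimal sketch of that rule follows; the rectangle-based overlays and VM labels are hypothetical illustrations of the combined teaching, not structures from the cited references:

```python
from dataclasses import dataclass


@dataclass
class Overlay:
    owner: str   # label of the VM that provided the overlay, e.g. "second_vm"
    x: int
    y: int
    w: int
    h: int

    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h


def route_touch(overlays, px, py):
    """First-VM role: hit-test the touch and name the VM to notify."""
    for ov in overlays:
        if ov.contains(px, py):
            return ov.owner   # forward the touch info only to the providing VM
    return None               # touch outside every overlay: host keeps it


overlays = [
    Overlay("second_vm", 0, 0, 100, 100),    # e.g. a cluster overlay
    Overlay("third_vm", 100, 0, 100, 100),   # e.g. an infotainment overlay
]
print(route_touch(overlays, 150, 50))  # third_vm
```

A touch at (10, 10) would instead resolve to "second_vm", and a touch outside both rectangles resolves to no VM, matching the pattern where each touch is delivered to exactly one overlay provider.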
Regarding claim 8, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein the first virtual machine in the processor is configured to detect an overlay corresponding to the touch input, and transmit the information regarding the touch input to a virtual machine providing the detected overlay, which is one of the second virtual machine and the third virtual machine. However, LEE teaches wherein the first virtual machine in the processor is configured to detect an overlay corresponding to the touch input, (e.g. [0096]: “For example, referring to FIG. 7, while displaying the screen 610, the processor 210 may receive a user input 705 to the at least one first visual object 601-1 or receive a user input 710 to the at least one second visual object 602-5. Based on receiving a user input (e.g., the user input 705 or the user input 710)”) The citation discloses that the user input 705 or 710/touch input to the first visual object 601-1 or the second visual object 602-5/overlay is received/detected. and transmit the information regarding the touch input to a virtual machine providing the detected overlay, which is one of the second virtual machine and the third virtual machine. ([0096]: “For example, based on identifying that the user input 705 corresponds to the at least one first visual object 601-1, the processor 120 of the first electronic device 101 may execute a function associated with the at least one first visual object 601-1 in the first virtual machine migrated from the second electronic device 102, and transmit information about the function executed in the first virtual machine, to the second electronic device 102.
For another example, based on identifying that the user input corresponds to the at least one second visual object 602-5, the processor 120 of the first electronic device 101 may execute a function associated with the at least one second visual object 602-5 in the second virtual machine, and transmit information about the function executed in the second virtual machine, to the second electronic device 102.”) The citation discloses that the user input/touch input corresponding to the first visual object or the second visual object is transmitted to the corresponding VM for execution. Regarding claim 13, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein in response to the information regarding the touch input corresponding to at least one overlay from the third virtual machine, the first virtual machine is configured to transmit the information regarding the touch input to the third virtual machine, and the third virtual machine is configured to change the at least one overlay based on the touch input and display the changed overlay on the second display. However, LEE teaches wherein in response to the information regarding the touch input corresponding to at least one overlay from the third virtual machine (e.g. [0093]: “For example, referring to FIG. 6, the processor 210 may display a screen 610 on the display 555 (e.g., the first display 240) …… The screen 610 may include …... at least one second visual object 602-1, 602-2, 602-3, 602-4, 602-5 (which may be referred to hereinafter as 602-1 to 602-5) provided from a second virtual machine.” and e.g. [0096]: “For example, referring to FIG. 7, while displaying the screen 610, the processor 210 may …….
receive a user input 710 to the at least one second visual object 602-5.”) The citations disclose at [0093] that the screen 610 is displayed on the first display 240 and includes the second visual object 602-5/overlay provided by the second virtual machine/third virtual machine. At [0096], user input 710/touch input to the second visual object 602-5 is received. the first virtual machine is configured to transmit the information regarding the touch input to the third virtual machine. ([0096]: “For another example, based on identifying that the user input corresponds to the at least one second visual object 602-5, the processor 120 of the first electronic device 101 may execute a function associated with the at least one second visual object 602-5 in the second virtual machine, and transmit information about the function executed in the second virtual machine, to the second electronic device 102.”) The citation discloses that, in response to the input, the first electronic device transmits the input to the second virtual machine/third virtual machine for execution. The teaching of LEE does not explicitly indicate that the first electronic device is the first virtual machine. However, ROPER discloses at FIG. 16 that the Service/Host VM 1601 is a virtual machine/first virtual machine. By combining the method from LEE with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention. and the third virtual machine is configured to change the at least one overlay based on the touch input and display the changed at least one overlay on the second display. (LEE - [0093]: “one second visual object 602-1, 602-2, 602-3, 602-4, 602-5 (which may be referred to hereinafter as 602-1 to 602-5) provided from a second virtual machine.” and [0095]: “This screen may be displayed on a display (e.g., the first display 240, the second display 245, or a combination thereof) of the second electronic device 102 illustrated in FIG.
2.” and [0099]: “For another example, referring to FIG. 7, the processor 210 may display a screen 720 including a second visual object 602-5 having at least partially altered visual element, based on the information about the function that is received from the first electronic device 101 in response to transmitting the information about the user input 705 to the first electronic device 101.”) The citations disclose at [0099] that the visual object 602-5, which is provided by the second virtual machine/third virtual machine at [0093], is changed based on the user input, and the change is reflected on the screen 720, with the visual object 602-5 having an at least partially altered visual element; [0095] discloses that the screen may be displayed on the second display 245. Regarding claim 14, NARAYAN, in view of ROPER, ATSMON, OSDevWiki, and LEE, discloses the signal processing device of claim 13, and LEE further teaches wherein the third virtual machine is configured to transmit the changed at least one overlay to the first virtual machine (LEE - [0099]: “For another example, the input positioner 870 may identify that the user input 710 corresponds to the function associated with the at least one second visual object 602-5, and provide the identifying result to the processor 120. Based on the identifying result, the processor 120 may execute the function in the first virtual machine or the second virtual machine, and transmit information about the executed function to the second electronic device 102 through operations exemplified through FIG. 5.”) The citation discloses that the second virtual machine/third virtual machine, after executing the function regarding the user input to the visual object 602-5, transmits the executed function/changed overlay to the second electronic device/first virtual machine. and the first virtual machine is configured to transmit the changed at least one overlay to the second virtual machine.
([0099]: “while displaying the screen on the first display 240, the second display 245, or a combination thereof, the processor 210 may receive the user input using the input device 230…… Based on the information about the function, the second electronic device 102 may alter a screen that is being displayed into another screen, or update the screen that is being displayed.”) The citation discloses that the second electronic device 102/first virtual machine alters a screen that is being displayed into another screen/transmits the changed overlay to the second virtual machine. The other screen could be on the first display 240/second virtual machine. LEE teaches that the second electronic device 102/first virtual machine updates/transmits the changed overlay while displaying the screen on the first display 240, the second display 245, or a combination thereof, but does not explicitly teach that the first display is operated by the second virtual machine. However, ROPER teaches that the first display is operated by the second virtual machine (FIG. 16 discloses that the instrument cluster VM 1602/second VM operates the display instrument cluster 1610/first display). By combining the method from LEE with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention. Regarding claim 16, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein in response to the information regarding the touch input corresponding to an overlay from the second virtual machine, the first virtual machine is configured to transmit the information regarding the touch input to the second virtual machine, and the second virtual machine is configured to change the overlay based on the touch input and display the changed overlay on the second display.
However, LEE teaches wherein in response to the information regarding the touch input corresponding to an overlay from the second virtual machine, (e.g. [0093]: “For example, referring to FIG. 6, the processor 210 may display a screen 610 on the display 555 (e.g., the first display 240) …… The screen 610 may include …... at least one first visual object 601-1, 601-2, 601-3, 601-4, 601-5 (which may be referred to hereinafter as 601-1 to 601-5) provided from a first virtual machine (e.g., the first virtual machine 310 of FIG. 3).” and [0096]: “For example, referring to FIG. 7, while displaying the screen 610, the processor 210 may ……. receive a user input 710 to the at least one first visual object 601-1.”) The citations disclose at [0093] that the screen 610 is displayed on the first display 240 and includes the first visual object 601-1/overlay provided by the first virtual machine/second virtual machine. Paragraph [0096] discloses that user input 710/touch input to the first visual object 601-1 is received. the first virtual machine is configured to transmit the information regarding the touch input to the second virtual machine ([0096]: “For example, based on identifying that the user input 705 corresponds to the at least one first visual object 601-1, the processor 120 of the first electronic device 101 may execute a function associated with the at least one first visual object 601-1 in the first virtual machine migrated from the second electronic device 102, and transmit information about the function executed in the first virtual machine, to the second electronic device 102”) The citation discloses that, in response to the input, the first electronic device transmits the input to the first virtual machine/second virtual machine for execution. and the second virtual machine is configured to change the overlay based on the touch input and display the changed overlay on the second display.
([0093]: “The screen 610 may include at least one first visual object 601-1, 601-2, 601-3, 601-4, 601-5 (which may be referred to hereinafter as 601-1 to 601-5) provided from a first virtual machine (e.g., the first virtual machine 310 of FIG. 3)” and [0095]: “This screen may be displayed on a display (e.g., the first display 240, the second display 245, or a combination thereof) of the second electronic device 102 illustrated in FIG. 2.” and [0099]: “For example, referring to FIG. 7, the processor 210 may display a screen 730 including an extended first visual object 601-2, based on the information about the function that is received from the first electronic device 101 in response to transmitting the information about the user input 705 to the first electronic device 101”) The citations disclose at [0099] that the visual object 601-2, which is provided by the first virtual machine/second virtual machine at [0093], is changed based on the user input, and the change is reflected on the screen 730, with the visual object 601-2 having an at least partially altered visual element; [0095] discloses that the screen may be displayed on the first display 240. Regarding claim 17, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, and LEE, discloses the signal processing device of claim 16, and LEE further teaches wherein the second virtual machine is configured to transmit the changed overlay to the first virtual machine (LEE - [0099]: “For example, the input positioner 870 may identify that the user input 705 corresponds to the function associated with the at least one first visual object 601-1, and provide the identifying result to the processor 120…... Based on the identifying result, the processor 120 may execute the function in the first virtual machine or the second virtual machine, and transmit information about the executed function to the second electronic device 102 through operations exemplified through FIG.
5.”) The citation discloses that the first virtual machine/second virtual machine, after executing the function regarding the user input to the visual object 601-1, transmits the executed function/changed overlay to the second electronic device/first virtual machine. and the first virtual machine is configured to transmit the changed overlay to the third virtual machine. ([0099]: “while displaying the screen on the first display 240, the second display 245, or a combination thereof, the processor 210 may receive the user input using the input device 230…… Based on the information about the function, the second electronic device 102 may alter a screen that is being displayed into another screen, or update the screen that is being displayed.”) The citation discloses that the second electronic device 102/first virtual machine alters a screen that is being displayed into another screen/transmits the changed overlay to the third virtual machine. The other screen could be on the second display 245/third virtual machine. LEE teaches that the second electronic device 102/first virtual machine updates/transmits the changed overlay while displaying the screen on the first display 240, the second display 245, or a combination thereof, but does not explicitly teach that the second display is operated by the third virtual machine. However, ROPER teaches that the second display is operated by the third virtual machine (FIG. 16 discloses that the infotainment (front) VM 1603/third VM operates the display Navigation infotainment (front) 1611/second display). By combining the method from LEE with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention. Claims 3, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over NARAYAN, ROPER, ATSMON and OSDevWiki, in further view of Momchilov US Pub. No.
US 20120092277 A1 Regarding claim 3, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein the information regarding the touch input comprises coordinate information of the touch input. However, Momchilov teaches: wherein the information regarding the touch input comprises coordinate information of the touch input. ([0112]: “The notification may specify touch input being received at the client device for the remoted application and may include touch information event information such as coordinates of an initial touch location”) The citation discloses that the touch information of the touch input comprises the coordinates of the initial touch location. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitation wherein the information regarding the touch input comprises coordinate information of the touch input, as taught in Momchilov’s invention, into NARAYAN, ROPER, ATSMON and OSDevWiki’s invention, because by incorporating coordinate information of the touch input, the system can ensure precise touch tracking by knowing the exact location of user interactions on the displays, which can enhance touch gesture recognition and enable more responsive touch processing in the vehicle’s display system. Regarding claim 9, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein the first virtual machine is configured to store coordinate information of the touch input in a shared memory. However, Momchilov teaches: wherein the first virtual machine is configured to store coordinate information of the touch input in a shared memory. ([0112]: In step 800, the server may receive a notification from a client device to which an application is being remoted.
The notification may specify touch input being received at the client device for the remoted application and may include “touch information event information such as coordinates of an initial touch location ……… Upon receipt of the touch input event data, the server may further store the data in a shared memory space or location in step 810.”) The citation discloses that the server/first virtual machine stores the data of the touch input event, which includes the coordinates of an initial touch location/coordinate information, in the shared memory space. The teaching of Momchilov does not explicitly indicate that the server is the first virtual machine. However, ROPER discloses at FIG. 16 that the Service/Host VM 1601 is a virtual machine/first virtual machine. By combining the method from Momchilov with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitation wherein the first virtual machine is configured to store coordinate information of the touch input in a shared memory, as taught in Momchilov’s invention, into NARAYAN, ROPER, ATSMON and OSDevWiki’s invention, because by incorporating coordinate information of the touch input, the system can ensure precise touch tracking by knowing the exact location of user interactions on the displays, which can enhance touch gesture recognition and enable more responsive touch processing in the vehicle’s display system.
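The arrangement mapped against claim 9 — one component writing touch coordinates into a shared memory region that another component later reads back — can be sketched with Python's standard shared-memory facility. The two-int32 layout and the function names below are illustrative assumptions, not Momchilov's actual data format:

```python
import struct
from multiprocessing import shared_memory


def write_touch(shm_name, x, y):
    """Writer (first-VM role): store coordinate information in the shared region."""
    shm = shared_memory.SharedMemory(name=shm_name)
    shm.buf[:8] = struct.pack("<ii", x, y)  # two little-endian int32 coordinates
    shm.close()


def read_touch(shm_name):
    """Reader role: recover the coordinates from the shared region."""
    shm = shared_memory.SharedMemory(name=shm_name)
    x, y = struct.unpack("<ii", bytes(shm.buf[:8]))
    shm.close()
    return x, y


region = shared_memory.SharedMemory(create=True, size=8)
try:
    write_touch(region.name, 240, 96)
    print(read_touch(region.name))  # (240, 96)
finally:
    region.close()
    region.unlink()
```

In a real hypervisor setup the region would be a page shared between VMs rather than a POSIX segment within one OS; this sketch only shows the write-then-read pattern the citation describes.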
Regarding claim 10, NARAYAN, in view of ROPER, ATSMON, OSDevWiki, and Momchilov, discloses the signal processing device of claim 9, and Momchilov further teaches: wherein the first virtual machine is configured to transmit a buffer index regarding the shared memory to the second virtual machine or the third virtual machine (Momchilov - [0112]: “the server may associate the shared memory space or location identifier with a private or undocumented address generated by the operating system of the client device” and Momchilov - [0113]: In step 820, the server may generate a local message in a specified format notifying the determined application of the received touch input event. In one or more examples, the client device or a digitizer executing on the server may create the notification message based on a touch or gesture notification message format such as WM_TOUCH or WM_GESTURE. The notification message may, in a particular example, duplicate or replay the WM_TOUCH or WM_GESTURE message generated by the client device, however replacing the undocumented address used by the client device to hold the touch input data with the server shared memory space or location identifier, which holds the transmitted touch input data at the server…… Upon creating the notification message, the server (or a remoting client executing thereon) may send the event notification and touch input information to a target application or window thereof in step 825”) The citation discloses at [0112] that the server/first virtual machine associates the shared memory space with the undocumented address/buffer index, which is used to access the touch input data stored in the shared memory. At [0113], the server generates a notification message, which includes a reference to the shared memory by replacing the undocumented address used by the client/second or third VM with the server’s shared memory location, and then sends it to the application on the client.
and the second virtual machine or the third virtual machine is configured to read the coordinate information of the touch input written in the shared memory based on the buffer index. ([0116] Accordingly, in step 845, the server may detect the application initiating a function call to extract touch input data once the application has processed the notification message…… the function call may be used to retrieve the touch event data.” and [0119] In some arrangements, the function executed in step 850 may further be configured to convert the retrieved touch input information into a public format recognizable by applications executing on the server. When, for example, the CTXGetGestureInfo( ) function is called by an application, the function may convert the touch input information from the undefined memory structure format (e.g., a system-specific format) into a public GESTUREINFO structure.” and [0120]: “After the touch input data is retrieved from the specified memory area and/or converted, a function such as the public CloseGestureInfoHandle( ) API may be called to close the resources associated with the gesture information handle in step 855.”) The citation discloses at [0116] that in step 845 the application initiates a function call to extract the touch input data. At [0119], the server intercepts the function call and redirects the application to access the shared memory where the touch data is stored (the shared memory location was mapped using the undocumented address/buffer index in step 815). At [0120], the server replaces standard function calls with custom ones to allow the application to access the touch input stored in the shared memory. The teaching of Momchilov does not explicitly indicate that the server is the first virtual machine. However, ROPER discloses at FIG. 16 that the Service/Host VM 1601 is a virtual machine/first virtual machine. 
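The buffer-index handoff mapped to the claim above can be sketched as a producer/consumer pair over one shared region: the writer stores a record at a slot and transmits only the slot index, and the reader resolves that index back into coordinates. This is an illustrative sketch, not the cited reference's implementation; a plain `bytearray` stands in for the mapped shared region, and all names are hypothetical.

```python
import struct

RECORD_FMT = "<ii"            # (x, y) as two little-endian int32s; illustrative layout
RECORD_SIZE = struct.calcsize(RECORD_FMT)
NUM_SLOTS = 16

# A bytearray stands in for the shared memory region that both
# virtual machines would map; the slot/index logic is what matters here.
shared_region = bytearray(RECORD_SIZE * NUM_SLOTS)

def write_touch(region: bytearray, index: int, x: int, y: int) -> int:
    """First-VM side: write (x, y) into slot `index` and return the index to transmit."""
    struct.pack_into(RECORD_FMT, region, index * RECORD_SIZE, x, y)
    return index  # this buffer index is what gets sent to the guest VM

def read_touch(region: bytearray, index: int) -> tuple:
    """Second/third-VM side: resolve a received index back into coordinates."""
    return struct.unpack_from(RECORD_FMT, region, index * RECORD_SIZE)

idx = write_touch(shared_region, 3, 640, 360)  # producer writes, obtains index 3
print(read_touch(shared_region, idx))          # consumer reads via the index: (640, 360)
```

Passing a small index instead of the data itself is the point of the mechanism: the coordinates cross the VM boundary once, through memory both sides already share.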
By combining the method from Momchilov with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over NARAYAN, ROPER, ATSMON and OSDevWiki, in further view of Luxenberg et al. US Pub. No. US 20140149490 A1 (hereafter Luxenberg). Regarding claim 11, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein the first virtual machine further comprises an input and output server interface, each of the second virtual machine and the third virtual machine comprises an input and output client interface, the input and output server interface in the first virtual machine is configured to store coordinate information of the touch input in a shared memory, and the input and output client interface in the second virtual machine or the third virtual machine is configured to read the coordinate information of the touch input written in the shared memory. However, Luxenberg teaches wherein the first virtual machine further comprises an input and output server interface (FIG. 1 and e.g. [0031]: “For example, VM 115 includes a virtual server 170 such as a virtual web server, a virtual data storage server, a virtual gaming server, a virtual enterprise application server, etc.”) The citation discloses the VM 115/first VM, which includes a virtual server 170/input and output server interface. each of the second virtual machine and the third virtual machine comprises an input and output client interface (FIG. 1 and e.g. [0033]: “For example, virtual machines 117-119 each include a virtual appliance 174-178. A virtual appliance 174-178 may be a virtual machine image file that includes a preconfigured operating system environment and a single application.”) The citation discloses the VMs 117-119/second or third VM, each of which includes a virtual appliance/input and output client interface. 
a shared memory ([0037]: “The memory manager 160 may generate a shared memory, and may write a data packet (e.g., message) to a buffer of the shared memory. Memory manager 160 may then map the buffer in the shared memory to a virtual memory of a virtual appliance to virtually transmit the data packet stored in that buffer to the virtual appliance using zero-copy techniques.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitations wherein the first virtual machine comprises an input and output server interface and each of the second virtual machine and the third virtual machine comprises an input and output client interface, as taught in Luxenberg’s invention, into NARAYAN, ROPER, ATSMON and OSDevWiki’s invention, because providing the client interface and a shared memory corresponding to the input and output data enables efficient, low-latency communication between the virtual machines, improves synchronization, reduces processing delays, and ensures seamless interaction between the vehicle’s displays. However, Luxenberg does not explicitly teach the input and output server interface in the first virtual machine is configured to store coordinate information of the touch input in a shared memory, and the input and output client interface in the second virtual machine or the third virtual machine is configured to read the coordinate information of the touch input written in the shared memory. Momchilov teaches the input and output server interface in the first virtual machine is configured to store coordinate information of the touch input in a shared memory, ([0112]: In step 800, the server may receive a notification from a client device to which an application is being remoted. 
The notification may specify touch input being received at the client device for the remoted application and may include “touch information event information such as coordinates of an initial touch location ……… Upon receipt of the touch input event data, the server may further store the data in a shared memory space or location in step 810.”) The citation discloses that the server/first virtual machine stores the data of the touch input event, which includes the coordinates of an initial touch location/coordinate information, in the shared memory space. and the input and output client interface in the second virtual machine or the third virtual machine is configured to read the coordinate information of the touch input written in the shared memory. ([0116] Accordingly, in step 845, the server may detect the application initiating a function call to extract touch input data once the application has processed the notification message…… the function call may be used to retrieve the touch event data.” and [0119] In some arrangements, the function executed in step 850 may further be configured to convert the retrieved touch input information into a public format recognizable by applications executing on the server. When, for example, the CTXGetGestureInfo( ) function is called by an application, the function may convert the touch input information from the undefined memory structure format (e.g., a system-specific format) into a public GESTUREINFO structure.” and [0120]: “After the touch input data is retrieved from the specified memory area and/or converted, a function such as the public CloseGestureInfoHandle( ) API may be called to close the resources associated with the gesture information handle in step 855.”) The citation discloses at [0116] that in step 845 the application initiates a function call to extract the touch input data. 
At [0119], the server intercepts the function call and redirects the application to access the shared memory where the touch data is stored (the shared memory location was mapped using the undocumented address/buffer index in step 815). At [0120], the server replaces standard function calls with custom ones to allow the application to access the touch input stored in the shared memory. Therefore, by combining the teaching of Luxenberg regarding the components (input and output server, input and output client, and shared memory) with the teaching of Momchilov regarding the method of reading and writing coordinate information to the shared memory, one of ordinary skill in the art would be able to arrive at the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitations that the input and output server interface in the first virtual machine is configured to store coordinate information of the touch input in a shared memory, and the input and output client interface in the second virtual machine or the third virtual machine is configured to read the coordinate information of the touch input written in the shared memory, as taught in Momchilov’s invention, into NARAYAN and ROPER’s invention, because by storing coordinate information of the touch input in shared memory, the different VMs can access touch data efficiently, reduce communication latency, and enhance responsiveness. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over NARAYAN, ROPER, ATSMON and OSDevWiki, in further view of Samii et al. US Pub. No. US 20200117495 A1 (hereafter Samii) and LEE et al. US Pub. No. 
US 20200159562 A1 (hereafter LEE). Regarding claim 15, NARAYAN, in view of ROPER, ATSMON and OSDevWiki, discloses the signal processing device of claim 1, but does not explicitly teach wherein in response to the information regarding the touch input corresponding to operation of a hardware device in the vehicle while corresponding to at least one overlay from the third virtual machine, the first virtual machine is configured to transmit the information regarding the touch input to the third virtual machine and operate the hardware device in the vehicle based on the touch input. However, LEE teaches: wherein in response to the information regarding the touch input corresponding to operation of a hardware device in the vehicle while corresponding to at least one overlay from the third virtual machine (e.g. [0093]: “For example, referring to FIG. 6, the processor 210 may display a screen 610 on the display 555 (e.g., the first display 240) …… The screen 610 may include …... at least one second visual object 602-1, 602-2, 602-3, 602-4, 602-5 (which may be referred to hereinafter as 602-1 to 602-5) provided from a second virtual machine.” and e.g. [0096]: “For example, referring to FIG. 7, while displaying the screen 610, the processor 210 may ……. receive a user input 710 to the at least one second visual object 602-5.”) The citations disclose at [0093] that the screen 610, displayed on the first display 240, includes the second visual object 602-5/overlay provided by the second virtual machine/third virtual machine. At [0096], a user input 710/touch input to the second visual object 602-5 is received. the first virtual machine is configured to transmit the information regarding the touch input to the third virtual machine. 
([0096]: “For another example, based on identifying that the user input corresponds to the at least one second visual object 602-5, the processor 120 of the first electronic device 101 may execute a function associated with the at least one second visual object 602-5 in the second virtual machine, and transmit information about the function executed in the second virtual machine, to the second electronic device 102.”) The citation discloses that, in response to the input, the first electronic device transmits the input to the second virtual machine/third virtual machine for execution. The teaching of LEE does not explicitly indicate that the first electronic device is the first virtual machine. However, ROPER discloses at FIG. 16 that the Service/Host VM 1601 is a virtual machine/first virtual machine. By combining the method from LEE with the components from ROPER, one of ordinary skill in the art would be able to arrive at the claimed invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitation wherein, in response to the information regarding the touch input corresponding to operation of a hardware device in the vehicle while corresponding to at least one overlay from the third virtual machine, the first virtual machine is configured to transmit the information regarding the touch input to the third virtual machine, as taught in LEE’s invention, into NARAYAN and ROPER’s invention, because this feature enhances the functionality of the vehicle’s user interface, enables more efficient control of hardware devices in the vehicle based on touch interaction, and maintains smooth cooperation between VMs. However, NARAYAN, in view of ROPER, ATSMON, OSDevWiki and LEE, fails to teach that the touch input corresponds to operation of a hardware device in the vehicle and to operate the hardware device in the vehicle based on the touch input. 
Samii teaches operation of a hardware device in the vehicle ([0041]: “The I/O devices (represented by dashed-block 150) include functions and/or features of the vehicle, which are managed and operated by one or more virtual machines of the virtual platform 120 (e.g., the virtual platform 120 as executed by the connected compute center 110 support operations of the I/O devices by driving I/O across the backbone 101 and through the zone I/O controllers 140). In practice, the I/O devices include transducers that convert variations in a physical quantity, such as speed or pressure, into an electrical signal or vice versa. Further, the I/O devices can also be output devices such as lights, light emitting diodes, speakers.”) The citation discloses that the I/O devices, such as lights/hardware devices, are managed by one or more virtual machines of the virtual platform. operate the hardware device in the vehicle based on the touch input. ([0038]: “The lighting virtual machine 125 is an example of a virtual machine for managing/implementing lighting operations of the vehicle, along with associated I/O lighting devices (e.g., head lights, floor lights, fog lights, brake lights, dashboard lighting, and the like of a vehicle).”) The citation discloses a virtual machine responsible for operating the lighting/hardware of the vehicle. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to add the limitations that the touch input corresponds to operation of a hardware device in the vehicle and to operate the hardware device in the vehicle based on the touch input, as taught in Samii’s invention, into NARAYAN, ROPER, ATSMON, and LEE’s invention, because this feature enhances the functionality of the vehicle’s user interface, enables more efficient control of hardware devices in the vehicle based on touch interaction, and maintains smooth cooperation between VMs. 
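The claim 15 mapping above describes a host-side VM that checks whether a touch lands on an overlay owned by another VM and, if so, forwards the event so that VM can actuate its hardware (as with Samii's lighting virtual machine). The routing logic can be sketched as follows; this is a hypothetical illustration, not code from any cited reference, and every name (`Overlay`, `route_touch`, `lighting_vm`) is invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Overlay:
    """A screen region owned by a guest VM that controls one hardware device."""
    owner_vm: str                       # VM responsible for this overlay's hardware
    x0: int; y0: int; x1: int; y1: int  # overlay bounds on the display
    device: str                         # hardware device the overlay controls

    def contains(self, x: int, y: int) -> bool:
        return self.x0 <= x < self.x1 and self.y0 <= y < self.y1

def route_touch(overlays, x: int, y: int) -> Optional[str]:
    """First-VM side: decide where a touch should be forwarded, or None on a miss."""
    for ov in overlays:
        if ov.contains(x, y):
            # In a real system this would send the event over inter-VM
            # messaging to ov.owner_vm, which would then drive ov.device.
            return f"forward ({x},{y}) to {ov.owner_vm} -> operate {ov.device}"
    return None  # touch did not hit any guest-owned overlay

overlays = [Overlay("lighting_vm", 0, 0, 100, 50, "fog_lights")]
print(route_touch(overlays, 20, 10))    # forward (20,10) to lighting_vm -> operate fog_lights
print(route_touch(overlays, 500, 500))  # None
```

The sketch captures only the dispatch decision; the actual inter-VM transport would be the shared-memory mechanism discussed for claims 9-11.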
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:

US 20190258251 A1: The present technology provides advanced systems and methods that facilitate autonomous driving functionality, including a platform for autonomous driving Levels 3, 4, and/or 5. In preferred embodiments, the technology provides an end-to-end platform with a flexible architecture, including an architecture for autonomous vehicles that leverages computer vision and known ADAS techniques, providing diversity and redundancy, and meeting functional safety standards. The technology provides for a faster, more reliable, safer, energy-efficient and space-efficient System-on-a-Chip, which may be integrated into a flexible, expandable platform that enables a wide range of autonomous vehicles.

US 20200326968 A1: Device security across multiple operating system modalities may include allocating, by a hypervisor, to a first virtual machine comprising a first operating system of a first modality, based on the first modality, a first one or more access privileges to one or more resources; and allocating, by the hypervisor, to a second virtual machine comprising a second operating system of a second modality, based on the second modality, a second one or more access privileges to the one or more resources.

US 20160034295 A1: Discloses a root VM that includes various drivers for providing partition management services, independent hardware vendor (IHV) drivers for managing interactions with host system hardware, and other drivers. The root VM can operate as a parent partition and use HCIF to call the hypervisor to create child VM partitions. A child VM partition can include a VMBus for communicating with the root VM partition and other hypervisor-aware or enlightened partitions of the hypervisor-hosted environment. 
Examiner has cited particular columns/paragraphs/sections and line numbers in the references applied and not relied upon to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested from the applicant in preparing responses, to fully consider the references in entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. When responding to the Office action, applicant is advised to clearly point out the patentable novelty the claims present in view of the state of the art disclosed by the reference(s) cited or the objections made. A showing of how the amendments avoid such references or objections must also be present. See 37 C.F.R. 1.111(c). When responding to this Office action, applicant is advised to provide the line and page numbers in the application and/or reference(s) cited to assist in locating the appropriate paragraphs. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUAN M NGUYEN whose telephone number is (703)756-1599. The examiner can normally be reached Monday-Friday: 9:30am - 5:30PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Vital can be reached on (571) 272-4215. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TUAN M NGUYEN/Examiner, Art Unit 2198 /PIERRE VITAL/Supervisory Patent Examiner, Art Unit 2198

Prosecution Timeline

- Apr 20, 2022: Application Filed
- Mar 07, 2025: Non-Final Rejection (§103, §112)
- Jun 17, 2025: Response Filed
- Aug 02, 2025: Final Rejection (§103, §112)
- Nov 06, 2025: Request for Continued Examination
- Nov 14, 2025: Response after Non-Final Action
- Jan 04, 2026: Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

- Patent 12602253: Parallel Processing in Cloud (granted Apr 14, 2026; 2y 5m to grant)
- Patent 12547467: METHOD TO OPTIMIZE STORAGE PARTITION REDISCOVERY (granted Feb 10, 2026; 2y 5m to grant)
- Patent 12504999: LCS WORKLOAD IN-BAND SERVICE MANAGEMENT SYSTEM (granted Dec 23, 2025; 2y 5m to grant)
- Patent 12493496: SYSTEM AND METHOD FOR ALLOCATION OF A SPECIALIZED WORKLOAD BASED ON AGGREGATION AND PARTITIONING INFORMATION (granted Dec 09, 2025; 2y 5m to grant)
- Patent 12468570: ASYMMETRIC CENTRAL PROCESSING UNIT (CPU) SHARING WITH A CONTAINERIZED SERVICE HOSTED IN A DATA STORAGE SYSTEM (granted Nov 11, 2025; 2y 5m to grant)


Prosecution Projections

- Expected OA Rounds: 3-4
- Grant Probability: 50% (99% with interview, a +57.9% lift)
- Median Time to Grant: 3y 10m
- PTA Risk: High

Based on 14 resolved cases by this examiner. Grant probability derived from career allow rate.
