Prosecution Insights
Last updated: April 19, 2026
Application No. 18/546,878

ON-BOARD DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM

Status: Non-Final OA (§103)
Filed: Aug 17, 2023
Examiner: ANYA, CHARLES E
Art Unit: 2194
Tech Center: 2100 — Computer Architecture & Software
Assignee: Sumitomo Electric Industries, Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career allow rate: 82% (727 granted / 891 resolved), above average, +26.6% vs TC avg
Interview lift: +33.5% (strong), based on resolved cases with interview
Typical timeline: 3y 2m average prosecution; 41 applications currently pending
Career history: 932 total applications across all art units

Statute-Specific Performance

§101: 11.2% (-28.8% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§102: 6.8% (-33.2% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 891 resolved cases.

Office Action

§103
DETAILED ACTION Claims 1-19 are pending in this application. Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0019170 A1 to Sugano in view of U.S. Pub. No. 2004/0221290 A1 to Casey et al. As to claim 1, Sugano teaches an on-board device comprising: a control unit (a possessor such as a CPU (Central Processing Unit)) that executes a plurality of programs (“…Here, the “electronic control unit” of the present disclosure is, for example, mainly configured by a semiconductor device. The “electronic control unit” of the present disclosure may be, for example, a so-called information processing device which has a possessor such as a CPU (Central Processing Unit) and a volatile storage unit such as a RAM (Random Access Memory). In this case, the information processing device may further include a nonvolatile storage unit such as a flash memory, a network interface unit connected to a communication network, or the like. In addition, such an information processing device may be a packaged semiconductor device or a configuration in which respective semiconductor devices are connected by wiring on a wiring board…” paragraph 0031); and a storage unit (nonvolatile storage unit) that stores therein a virtualization operating system (Operating System OS) that is to be started up by the control unit (“…Here, the “electronic control unit” of the present disclosure is, for example, mainly configured by a semiconductor device. The “electronic control unit” of the present disclosure may be, for example, a so-called information processing device which has a possessor such as a CPU (Central Processing Unit) and a volatile storage unit such as a RAM (Random Access Memory). In this case, the information processing device may further include a nonvolatile storage unit such as a flash memory, a network interface unit connected to a communication network, or the like. In addition, such an information processing device may be a packaged semiconductor device or a configuration in which respective semiconductor devices are connected by wiring on a wiring board…” paragraph 0031), wherein a plurality of virtual devices (First Virtual Machine 100/Second Virtual Machine 200) that serve as operation environments for the programs are generated by starting up the virtualization operating system (“…The configuration of the electronic control unit 10 of the first embodiment will be described with reference to FIG. 2. The “electronic control unit” of the present disclosure provides “a plurality of virtual machines” which are managed by a hypervisor. The electronic control unit 10 illustrated in FIG. 
2 includes a first virtual machine 100 and a second virtual machine 200, a hypervisor 300 that manages these virtual machines, and a common memory 401 that is hardware, and a DMA controller 402. Each of the first and second virtual machines 100 and 200 has a guest OS installed on the virtual machine and a plurality of applications installed on the guest OS…” paragraphs 0030/0035/0036), the virtual device: includes a communication buffer to which an area divided from the storage unit is assigned, and to which communication data that is exchanged through communication between the plurality of virtual devices is to be written (Common Memory 401) (“…In the present embodiment, the configuration has been described in which the DMA controller 402 performs the DMA transfer of data to the common memory 401 by the DMA transfer both during the normal operation before the abnormality occurs in the specific virtual machine and after the occurrence of the abnormality. However, electronic control unit 10 may be configured to perform a PIO (Programmed I/O) transfer until an abnormality occurs in the specific virtual machine, and the DMA transfer is started when the abnormality occurs in the specific virtual machine. In either case, the DMA controller 402 performs the DMA transfer in which the data transmitted to the specific virtual machine whose CPU resource allocation has been stopped to the common memory…” paragraphs 0045/0060: NOTE: the (Direct Memory Access (DMA) transfer provides a technique for reading or writing data into or out of the common memory); and writes the communication data addressed to another virtual device (Slaves ECU 4-1/4-2/First Virtual Machine 100/ Second Virtual Machine 200), to the communication buffer when communicating with the other virtual device (Common Memory 401) (“…According to the first embodiment, while the allocation of the CPU resources to the first virtual machine 100 is stopped, the data transmitted to the first virtual machine 100 can be recorded in the common memory 401 by the DMA transfer. However, since the common memory 401 is provided in the non-secure area, the data written in the common memory 401 may be tampered with or deleted by a cyber attack or the like…” paragraph 0060: NOTE: the (Direct Memory Access (DMA) transfer provides a technique for reading or writing data to specific/particular virtual machines via the common memory), a management unit (Master ECU 3) that manages the plurality of virtual devices (Slaves ECU 4-1/4-2/First Virtual Machine 100/ Second Virtual Machine 200). Sugano is silent with reference to writes the communication data addressed to another virtual device, to the virtual device's own communication buffer when communicating with the other virtual device and a management unit that manages the plurality of virtual devices: reads out the communication data written to the communication buffer of the virtual device, during a period in which the virtual device is not operating; writes the communication data thus read out, to a storage area that is accessible to the management unit; and writes the communication data written to the storage area, to the communication buffer of the other virtual device, during a period in which the other virtual device that is a transmission destination of the communication data is not operating, and the other virtual device reads out the communication data transmitted from the virtual device, written to the other virtual device's own communication buffer. 
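Before turning to the secondary reference, it may help to picture the relay scheme recited in claim 1 and summarized above: each virtual device writes outgoing data to its own communication buffer, and a management unit moves that data, via a storage area only it can access, into the destination device's buffer while each device is not operating. The following is a minimal, sequential C sketch of that data flow; all names (vdev, mgmt_unit, comm_buf) and the fixed-size buffers are illustrative assumptions, not code or terminology from the application or the cited references.

```c
/*
 * Minimal sketch of the inter-VM "mailbox" flow recited in claim 1.
 * Names are illustrative only; the real device divides a shared storage
 * unit among virtual machines managed by a virtualization OS. Everything
 * here is simulated sequentially in a single process.
 */
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 64

/* One virtual device: an ID plus its own communication buffer. */
struct vdev {
    int  id;
    char comm_buf[BUF_SIZE];   /* area "divided from the storage unit" */
    int  dest;                 /* destination vdev for pending data    */
    int  pending;              /* 1 if comm_buf holds unread data      */
};

/* Management unit with a staging area that it alone can access. */
struct mgmt_unit {
    char staging[BUF_SIZE];
    int  dest;
    int  pending;
};

/* A vdev writes data addressed to another vdev into ITS OWN buffer. */
static void vdev_send(struct vdev *v, int dest, const char *msg)
{
    snprintf(v->comm_buf, BUF_SIZE, "%s", msg);
    v->dest = dest;
    v->pending = 1;
}

/* While the source vdev is not operating, the manager drains its buffer. */
static void mgmt_collect(struct mgmt_unit *m, struct vdev *src)
{
    if (!src->pending)
        return;
    memcpy(m->staging, src->comm_buf, BUF_SIZE);
    m->dest = src->dest;
    m->pending = 1;
    src->pending = 0;
}

/* While the destination vdev is not operating, the manager delivers. */
static void mgmt_deliver(struct mgmt_unit *m, struct vdev *dst)
{
    if (!m->pending || m->dest != dst->id)
        return;
    memcpy(dst->comm_buf, m->staging, BUF_SIZE);
    dst->pending = 1;
    m->pending = 0;
}

int main(void)
{
    struct vdev a = { .id = 1 }, b = { .id = 2 };
    struct mgmt_unit mgr = { 0 };

    vdev_send(&a, 2, "sensor frame #42");  /* A runs, writes to its own buffer */
    mgmt_collect(&mgr, &a);                /* A idle: manager reads A's buffer */
    mgmt_deliver(&mgr, &b);                /* B idle: manager fills B's buffer */

    if (b.pending)                         /* B runs, reads its own buffer     */
        printf("vdev %d received: %s\n", b.id, b.comm_buf);
    return 0;
}
```

Run sequentially, the sketch prints the message only after the manager has staged and delivered it; a real on-board implementation would be driven by the virtualization OS scheduler rather than by straight-line calls.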
Casey teaches writes the communication data addressed to another virtual device, to the virtual device's own communication buffer (Work Queues 52/54/56) writes the communication data addressed to another virtual device, to the virtual device's own communication buffer when communicating with the other virtual device when communicating with the other virtual device (“…In the preferred embodiment of the present invention, each WQAF is programmed to add a work item only to the work queue dedicated to its virtual machine and its clones, and each scheduler is programmed to remove work items only from the work queue dedicated to its virtual machine and its clones. Work queue 52 is dedicated to virtual machine 12 and its clones, work queue 54 is dedicated to virtual machine 14 and its clones, and work queue 56 is dedicated to virtual machine 16 and its clones…” paragraph 0021) and a management unit (Work Queue Assignment Functions(WQAFs) 62, 64 and 66/Resource Manager 212) that manages the plurality of virtual devices (VMs 12/14/16): reads out the communication data written to the communication buffer of the virtual device (each virtual machine can directly access the shared memory 25/VM 14 Holds Lock 91), during a period in which the virtual device is not operating (VM 12 Waits for lock 92/VM 16 Waits for lock 93); writes the communication data thus read out, to a storage area that is accessible to the management unit device (each virtual machine can directly access the shared memory 25); and writes the communication data written to the storage area (Shared Memory 25), to the communication buffer of the other virtual device, during a period in which the other virtual device that is a transmission destination of the communication data is not operating, and the other virtual device reads out the communication data transmitted from the virtual device, written to the other virtual device's own communication buffer (Lock 90/a synchronization or lock structure generally designated 90) (“…Computer 10 also includes a memory area 25 which is shared by all of the virtual machines 12, 14 and 16. Being "shared" each virtual machine can directly access the shared memory 25 and the data and data structures (including lock structures) stored in the shared memory by appropriate address, when it knows the address. The work queues 52, 54 and 56 for the WQAFs 62, 64 and 66 and respective schedulers 42, 44 and 46 are located in shared memory (even though the WQAFs and schedulers are all in the private memory of the respective virtual machines). Consequently, each WQAF can access all the work queues to add a work item to any of the work queues, when it knows the address of the work queues. In the preferred embodiment of the present invention, each WQAF is programmed to add a work item only to the work queue dedicated to its virtual machine and its clones, and each scheduler is programmed to remove work items only from the work queue dedicated to its virtual machine and its clones. Work queue 52 is dedicated to virtual machine 12 and its clones, work queue 54 is dedicated to virtual machine 14 and its clones, and work queue 56 is dedicated to virtual machine 16 and its clones…To collectively utilize the virtual resources of virtual machine 12 and its virtual machine clone 12A, the resource manager 212 grants to the virtual machine clone 12A access to work queue 52 (step 89). 
This access is "granted" by the resource manager 212 furnishing to the virtual machine clone 12A an authorization to access a portion or segment of the shared memory containing the work queue 52 of virtual machine 12. The beginning of the shared memory segment may contain the address of the shared work queue 52 and control block 58, or the resource manager can provide these addresses separately to the WQAF 62A and the scheduler 42A. The shared access by virtual machines 12 and 12A to work queue 52 also requires possession of a lock 90 described in more detail below with reference to FIGS. 3 and 4…FIG. 3 figuratively illustrates a synchronization or lock structure generally designated 90 within the shared memory 25 of computer system 10. A lock is required for any work queue which is shared by more than one virtual machine. This will be the case when a virtual machine has one or more clones which share a work queue, such as work queue 52 shared by virtual machines 12, 12A and 12B illustrated in FIG. 2. When there are no clones for a virtual machine, then the lock structure can be bypassed or the virtual machine can continuously hold the lock. (FIG. 3 does not illustrate virtual machines 14 or 16 or their work queues 54 or 56, respectively.) In the illustrated example, virtual machine 12A holds lock 91, virtual machine 12 has a place holder 92 waiting for the lock from virtual machine 12A, and virtual machine 12B has a place holder 93 waiting for the lock from virtual machine 12. This is actually recorded in control block 58 which indicates that virtual machine 12A holds the lock and virtual machines 12 and 12B are currently waiting for the lock. The "waiter list" 95 of control block 58 indicates the order of the waiters, i.e. virtual machine 12 is first in line waiting for the lock and virtual machine 12B will attempt to obtain the lock after virtual machine 12 obtains the lock. In the example, virtual machine 12A holds lock 91 exclusively, that is, no other virtual machine may concurrently hold this lock. Virtual machine 12 and 12B are waiting for the lock and willing to hold the lock shared, that is, they may concurrently hold the lock with each other…” paragraphs 0021/0026/0033). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claim invention to modify the system of Sugano with the teaching of Casey because the teaching of Casey would improve the system of Sugano by providing a synchronization or lock structure for optimally controlling and managing the use or access of computing resources. As to claim 2, Sugano teaches the on-board device according to claim 1, wherein the control unit includes a plurality of computation devices, one computation device of the plurality of computation devices functions as the management unit (Master ECU 3), and the other computation devices of the plurality of computation devices function as the virtual devices (Slaves ECU 4-1/4-2/First Virtual Machine 100/ Second Virtual Machine 200). Claims 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0019170 A1 to Sugano in view of U.S. Pub. No. 2004/0221290 A1 to Casey et al. as applied to claim 1 above, and further in view of U.S. Pub. No. 2018/0004680 A1 to Elzur. 
As to claim 5, Sugano as modified by Casey teaches the on-board device according to claim 1, however it is silent with reference to wherein the communication buffer includes a reception buffer to which the communication data to be received by the virtual device is written, the virtual device: reads out the communication data written to the reception buffer if a reception buffer update flag indicating that the communication data written to the reception buffer has been updated is ON; and switches the reception buffer update flag from ON to OFF after reading out the communication data. Elzur teaches wherein the communication buffer includes a reception buffer to which the communication data to be received by the virtual device is written (one or more shared memory blocks), the virtual device: reads out the communication data written to the reception buffer if a reception buffer update flag indicating that the communication data written to the reception buffer has been updated is ON (the sender virtual machine triggers a platform entity that cannot be tampered with by the sender virtual machine, such as a specialized processor function, to set one or more page table flags for the shared memory block, denying write access to the sender entity and granting read access to the receiver virtual machine(s), and then notifies the receiver virtual machine(s) that data is available); and switches the reception buffer update flag from ON to OFF (to clear the page table flags on the shared memory block) after reading out the communication data (“…Referring now to FIG. 1, an illustrative computing device 100 for zero-copy inter-virtual-machine communication includes a processor 120, an I/O subsystem 128, a memory 130, and in some embodiments a data storage device 132. In use, as described below, the computing device 100 is configured to support zero-copy data movement between two or more virtual machines (and/or other containers when relevant). In particular, the computing device 100, assisted by a platform management entity such as a hypervisor or an element in it, e.g., a virtual machine, allocates one or more shared memory blocks to be used to exchange data between virtual machines. A sender virtual machine with read-write access to a shared memory block writes data to the shared memory block, and then switches to a protected memory view without generating a virtual machine (VM) exit. From protected code, the sender virtual machine triggers a platform entity that cannot be tampered with by the sender virtual machine, such as a specialized processor function, to set one or more page table flags for the shared memory block, denying write access to the sender entity and granting read access to the receiver virtual machine(s), and then notifies the receiver virtual machine(s) that data is available. After setting the page table flags, the sender virtual machine is prohibited from writing to the shared memory block, which may be enforced by the processor page tables, for example. In response to the notification, the receiver virtual machine can access, with read-only access rights, the shared memory block to process the data. After processing the data, the receiver virtual machine switches to a protected memory view without generating a VM exit, triggers the platform entity, which is also not accessible for tampering by the receiver entity, to clear the page table flags on the shared memory block, and notifies the sender virtual machine that the send/receive operation is complete. 
Thus, the computing device 100 may allow for efficient inter-virtual machine communication without requiring an additional copy of the message data and without generating additional VM exits. Additionally, because the shared memory block is protected from being written by the sender virtual machine (either through a programming error or maliciously) after a send operation is initiated, the computing device 100 may facilitate copy-on-write usage of the exchanged data…” paragraph 0014). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claim invention to modify the system of Sugano and Casey with the teaching of Elzur because the teaching of Elzur would improve the system of Sugano and Casey by providing an access right control system for permitting users or a computer application the rights to read, write, modify, delete or otherwise access a computer file, change configurations, settings, add or remove applications. As to claim 11, see the rejection of claim 5 above. Claims 8-10 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0019170 A1 to Sugano in view of U.S. Pub. No. 2004/0221290 A1 to Casey et al. as applied to claim 1 above, and further in view of U.S. Pub. No. 2005/0086237 A1 to Monnie et al. As to claim 8, Sugano as modified by Casey teaches the on-board device according to claim 1, however it is silent with reference to wherein the communication buffer includes a program execution storage area used to execute the programs, and the management unit writes the communication data written to the storage area, to the program execution storage area of the virtual device at a transmission destination. Monnie teaches wherein the communication buffer includes a program execution storage area used to execute the programs (its local memory), and the management unit writes the communication data (Object 128/Shared Object 310) written to the storage area (Shared Name Space 116/Shared Object Space 300), to the program execution storage area of the virtual device (Application B/Application 302/306) at a transmission destination (“…Referring to FIGS. 6 and 7, the system 101 may support two methods of sharing objects, copy sharing and direct sharing. An object 128 that is copy shared is allocated twice, once in the local memory of an application and again in shared memory 100. FIG. 6 shows an example of a copy sharing method where application A creates an object 128 in its local address space. The object 128 is shared by putting it into the shared name space 116. At this point, the object 128 is not immediately written to a field in the shared object space 100; instead the object 128 is written to shared memory 100 only after a user "flushes", i.e. updates the object to a new version, for the first time. Once an object 128 is written to shared memory, application B is able to copy the object 128 to its local memory by accessing the shared object 128 by name in the shared name space 116. Application B may modify the copy of the object 128 obtained from shared memory, and may also flush to update the shared memory copy of the object. If application B wishes to see the most recently updated version of the object 128, a refresh command may be used…” paragraph 0067). 
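The Monnie copy-sharing flow quoted above (a local object, an explicit flush to a shared name space, and a copy and refresh by a peer application) can likewise be pictured with a small sketch. The single named slot and the flush/refresh helpers below are illustrative simplifications, not Monnie's API:

```c
/*
 * Sketch of a copy-sharing model in the spirit of the Monnie citation:
 * an object lives in an application's local memory and is published to a
 * shared name space only on an explicit flush; a peer copies it by name
 * and refreshes to see later versions. Names are illustrative only.
 */
#include <stdio.h>
#include <string.h>

#define NAME_LEN 32
#define VAL_LEN  64

/* One named slot in the shared object space / shared name space. */
struct shared_slot {
    char name[NAME_LEN];
    char value[VAL_LEN];
    int  version;
};

/* Publish (flush) a local object to the shared slot under a name. */
static void obj_flush(struct shared_slot *s, const char *name, const char *local)
{
    snprintf(s->name, NAME_LEN, "%s", name);
    snprintf(s->value, VAL_LEN, "%s", local);
    s->version++;
}

/* Copy (refresh) the shared object into another application's local memory. */
static int obj_refresh(const struct shared_slot *s, const char *name,
                       char *local, size_t len)
{
    if (strcmp(s->name, name) != 0)
        return 0;
    snprintf(local, len, "%s", s->value);
    return s->version;
}

int main(void)
{
    struct shared_slot shared = { 0 };
    char a_local[VAL_LEN] = "route plan v1";   /* application A's local copy */
    char b_local[VAL_LEN] = "";                /* application B's local copy */

    obj_flush(&shared, "route", a_local);                    /* A publishes  */
    obj_refresh(&shared, "route", b_local, sizeof b_local);  /* B copies it  */

    snprintf(b_local, sizeof b_local, "route plan v2");      /* B edits copy */
    obj_flush(&shared, "route", b_local);                    /* B flushes    */
    obj_refresh(&shared, "route", a_local, sizeof a_local);  /* A refreshes  */

    printf("A now sees \"%s\" (version %d)\n", a_local, shared.version);
    return 0;
}
```

The version counter returned by obj_refresh is a stand-in for Monnie's "new version" notion; the lock-info field that synchronizes concurrent access in Monnie is omitted here.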
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claim invention to modify the system of Sugano and Casey with the teaching of Monnie because the teaching of Monnie would improve the system of Sugano and Casey by providing a technique for allocating queues for specific use or specific application and thus reduces or eliminates memory/buffer access contention or race condition. As to claim 9, Sugano teaches an information processing method comprising: generating a plurality of virtual devices ((First Virtual Machine 100/Second Virtual Machine 200) that serve as operation environments for a plurality of programs, in an on-board device (“…The configuration of the electronic control unit 10 of the first embodiment will be described with reference to FIG. 2. The “electronic control unit” of the present disclosure provides “a plurality of virtual machines” which are managed by a hypervisor. The electronic control unit 10 illustrated in FIG. 2 includes a first virtual machine 100 and a second virtual machine 200, a hypervisor 300 that manages these virtual machines, and a common memory 401 that is hardware, and a DMA controller 402. Each of the first and second virtual machines 100 and 200 has a guest OS installed on the virtual machine and a plurality of applications installed on the guest OS…” paragraphs 0030/0035/0036); when each virtual device communicates with another virtual device, writing communication data addressed to the other virtual device, to a communication buffer that is included in the virtual device and to which the communication data that is exchanged through communication between the plurality of virtual devices is to be written (Common Memory 401) (“…In the present embodiment, the configuration has been described in which the DMA controller 402 performs the DMA transfer of data to the common memory 401 by the DMA transfer both during the normal operation before the abnormality occurs in the specific virtual machine and after the occurrence of the abnormality. However, electronic control unit 10 may be configured to perform a PIO (Programmed I/O) transfer until an abnormality occurs in the specific virtual machine, and the DMA transfer is started when the abnormality occurs in the specific virtual machine. In either case, the DMA controller 402 performs the DMA transfer in which the data transmitted to the specific virtual machine whose CPU resource allocation has been stopped to the common memory…” paragraphs 0045/0060), reading out the communication data transmitted from the virtual device, written to the communication buffer of the other virtual device (Step S202) and writing the communication data thus read out, to a storage area (Save Memory 403) (“…The save execution module 202 executes a process of saving the data recorded in the common memory 401 to the save memory 403 “in response to” the save request from the monitoring module 201. Specifically, the save execution module 202 reads out the data recorded in the common memory 401 in step S202. Then, the save execution module 202 “saves” the read data into the save memory 403 in step S203. Similar to the writing of data to the common memory 401, the saving of data from the common memory 401 to the save memory 403 is performed via a driver and the memory controller, but it is omitted in this description…” paragraph 0068). 
Sugano is silent with reference to reading out the communication data written to the communication buffer of the virtual device, during a period in which the virtual device is not operating; writing the communication data thus read out, to a storage area, writing the communication data written to the storage area, to the communication buffer of the other virtual device, during a period in which the other virtual device that is a transmission destination of the communication data is not operating, and Casey teaches reading out the communication data written to the communication buffer of the virtual device, during a period in which the virtual device is not operating (each virtual machine can directly access the shared memory 25/VM 14 Holds Lock 91), and reading out the communication data transmitted from the virtual device, written to the communication buffer of the other virtual device (Lock 90/a synchronization or lock structure generally designated 90) (“…Computer 10 also includes a memory area 25 which is shared by all of the virtual machines 12, 14 and 16. Being "shared" each virtual machine can directly access the shared memory 25 and the data and data structures (including lock structures) stored in the shared memory by appropriate address, when it knows the address. The work queues 52, 54 and 56 for the WQAFs 62, 64 and 66 and respective schedulers 42, 44 and 46 are located in shared memory (even though the WQAFs and schedulers are all in the private memory of the respective virtual machines). Consequently, each WQAF can access all the work queues to add a work item to any of the work queues, when it knows the address of the work queues. In the preferred embodiment of the present invention, each WQAF is programmed to add a work item only to the work queue dedicated to its virtual machine and its clones, and each scheduler is programmed to remove work items only from the work queue dedicated to its virtual machine and its clones. Work queue 52 is dedicated to virtual machine 12 and its clones, work queue 54 is dedicated to virtual machine 14 and its clones, and work queue 56 is dedicated to virtual machine 16 and its clones…To collectively utilize the virtual resources of virtual machine 12 and its virtual machine clone 12A, the resource manager 212 grants to the virtual machine clone 12A access to work queue 52 (step 89). This access is "granted" by the resource manager 212 furnishing to the virtual machine clone 12A an authorization to access a portion or segment of the shared memory containing the work queue 52 of virtual machine 12. The beginning of the shared memory segment may contain the address of the shared work queue 52 and control block 58, or the resource manager can provide these addresses separately to the WQAF 62A and the scheduler 42A. The shared access by virtual machines 12 and 12A to work queue 52 also requires possession of a lock 90 described in more detail below with reference to FIGS. 3 and 4…FIG. 3 figuratively illustrates a synchronization or lock structure generally designated 90 within the shared memory 25 of computer system 10. A lock is required for any work queue which is shared by more than one virtual machine. This will be the case when a virtual machine has one or more clones which share a work queue, such as work queue 52 shared by virtual machines 12, 12A and 12B illustrated in FIG. 2. When there are no clones for a virtual machine, then the lock structure can be bypassed or the virtual machine can continuously hold the lock. (FIG. 
3 does not illustrate virtual machines 14 or 16 or their work queues 54 or 56, respectively.) In the illustrated example, virtual machine 12A holds lock 91, virtual machine 12 has a place holder 92 waiting for the lock from virtual machine 12A, and virtual machine 12B has a place holder 93 waiting for the lock from virtual machine 12. This is actually recorded in control block 58 which indicates that virtual machine 12A holds the lock and virtual machines 12 and 12B are currently waiting for the lock. The "waiter list" 95 of control block 58 indicates the order of the waiters, i.e. virtual machine 12 is first in line waiting for the lock and virtual machine 12B will attempt to obtain the lock after virtual machine 12 obtains the lock. In the example, virtual machine 12A holds lock 91 exclusively, that is, no other virtual machine may concurrently hold this lock. Virtual machine 12 and 12B are waiting for the lock and willing to hold the lock shared, that is, they may concurrently hold the lock with each other…” paragraphs 0021/0026/0033). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claim invention to modify the system of Sugano with the teaching of Casey because the teaching of Casey would improve the system of Sugano by providing a synchronization or lock structure for optimally controlling and managing the use or access of computing resources. Monnie teaches writing the communication data (Object 128/Shared Object 310) written to the storage area (Shared Name Space 116/Shared Object Space 300), to the communication buffer (its local memory) of the other virtual device (Application B/Application 302/306) during a period in which the other virtual device that is a transmission destination of the communication data is not operating (Lock Info Field 322) (“…Referring to FIGS. 6 and 7, the system 101 may support two methods of sharing objects, copy sharing and direct sharing. An object 128 that is copy shared is allocated twice, once in the local memory of an application and again in shared memory 100. FIG. 6 shows an example of a copy sharing method where application A creates an object 128 in its local address space. The object 128 is shared by putting it into the shared name space 116. At this point, the object 128 is not immediately written to a field in the shared object space 100; instead the object 128 is written to shared memory 100 only after a user "flushes", i.e. updates the object to a new version, for the first time. Once an object 128 is written to shared memory, application B is able to copy the object 128 to its local memory by accessing the shared object 128 by name in the shared name space 116. Application B may modify the copy of the object 128 obtained from shared memory, and may also flush to update the shared memory copy of the object. If application B wishes to see the most recently updated version of the object 128, a refresh command may be used…As stated previously, when multiple threads from multiple applications are sharing objects in a shared object space, access to the objects should be synchronized. FIG. 10A illustrates this necessity. In this figure, application 302 is running in a VM 304 while application 306 is running in another VM 308. Both VM 304 and VM 308 are sharing the shared object space 300 that is storing a shared object 310. In this figure, application 302 is simultaneously running two threads, thread 312 and thread 314, both of which are calling shared object 310. 
At the same time, application 308 is running a thread 316 which also is calling the shared object 310. Absent synchronization, each thread 312, 314, and 316 could all access the shared object and make simultaneous, possibly conflicting changes to the object 310 without any knowledge of each other's changes…Also, as stated previously, existing methods provide for synchronization of multiple threads in a single application accessing a shared object in a single VM. Extended synchronization techniques, as described herein, may be used to provide synchronization for concurrent access to objects in shared memory by multiple applications illustrated in FIG. 10B…In FIG. 10B, a shared object space 300 stores objects O, P, Q, and R. Each of these objects includes an object header 320. Each object header 320 may include a lock info field 322 which is used to indicate whether a thread of an application has locked that object. The lock info field 322 may contain either a "cheap lock" or a reference to a "lock node", i.e. an "expensive lock." A "cheap lock" directly encodes the identity of the application and the thread that owns the lock by inserting a value into the lock info field 322 unique to the thread of that application. (The value "0" may be used to indicate that an object is not locked by any thread of any application.) When thread A of application 326 seeks to acquire object O, for example, thread A checks the header of object O to test whether the value in the lock info field 322 is "0", representing that it is not locked. If the value is "0" then thread A 324 substitutes its unique number in the header of object O and acquires the lock with a "cheap lock."…” paragraphs 0067/0071-0073). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claim invention to modify the system of Sugano and Casey with the teaching of Monnie because the teaching of Monnie would improve the system of Sugano and Casey by providing a technique for allocating queues for specific use or specific application and thus reduces or eliminates memory/buffer access contention or race condition. As to claim 10, see the rejection of claim 9 above. As to claim 14, see the rejection of claim 8 above. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2021/0019170 A1 to Sugano in view of U.S. Pub. No. 2004/0221290 A1 to Casey et al. and further in view of U.S. Pub. No. 2018/0004680 A1 to Elzur as applied to claim 5 above, and further in view of U.S. Pub. No. 2005/0086237 A1 to Monnie et al. As to claim 17, Sugano as modified by Casey and Elzur teaches the on-board device according to claim 5, however it is silent with reference to wherein the communication buffer includes a program execution storage area used to execute the programs, and the management unit writes the communication data written to the storage area, to the program execution storage area of the virtual device at a transmission destination. Monnie teaches wherein the communication buffer includes a program execution storage area used to execute the programs (its local memory), and the management unit writes the communication data (Object 128/Shared Object 310) written to the storage area (Shared Name Space 116/Shared Object Space 300), to the program execution storage area of the virtual device (Application B/Application 302/306) at a transmission destination (“…Referring to FIGS. 6 and 7, the system 101 may support two methods of sharing objects, copy sharing and direct sharing. 
An object 128 that is copy shared is allocated twice, once in the local memory of an application and again in shared memory 100. FIG. 6 shows an example of a copy sharing method where application A creates an object 128 in its local address space. The object 128 is shared by putting it into the shared name space 116. At this point, the object 128 is not immediately written to a field in the shared object space 100; instead the object 128 is written to shared memory 100 only after a user "flushes", i.e. updates the object to a new version, for the first time. Once an object 128 is written to shared memory, application B is able to copy the object 128 to its local memory by accessing the shared object 128 by name in the shared name space 116. Application B may modify the copy of the object 128 obtained from shared memory, and may also flush to update the shared memory copy of the object. If application B wishes to see the most recently updated version of the object 128, a refresh command may be used…” paragraph 0067). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claim invention to modify the system of Sugano, Casey and Elzur with the teaching of Monnie because the teaching of Monnie would improve the system of Sugano, Casey and Elzur by providing a technique for allocating queues for specific use or specific application and thus reduces or eliminates memory/buffer access contention or race condition. Allowable Subject Matter Claims 3, 4, 6, 7, 12, 13, 15, 16, 18 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Reasons for Allowance The following is an examiner’s statement of reasons for allowance: The closest prior art of records, (U.S. Pub. No. 2021/0019170 A1 to Sugano, U.S. Pub. No. 2004/0221290 A1 to Casey et al. and U.S. Pub. No. 2005/0086237 A1 to Monnie et al.), taken alone or in combination do not specifically disclose or suggest the claimed recitations (claims 3, 4, 6, 7, 12, 13, 15, 16, 18 and 19), when taken in the context of claims as a whole. Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance.” Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. WO 2020261519 A1 to Katayama et al. and directed to an in-vehicle system in which a plurality of electronic control units (ECUs: Electronic Control Units) are connected to a network. U.S. Pat. No. 8,281,303 B2 issued to Kani and directed to systems and methods for efficient dynamic allocation of virtual machine resources. C.N. No. 112153116 A to Xiao et al. and directed to a data sharing method of central computing platform multi-virtual machine based on vehicle Ethernet communication, comprising: virtual machine management layer on the hardware layer; a plurality of virtual machines located in the virtual machine management layer. U.S. Pub. No. 2021/0051083 A1 to Gowan and directed to a packet inspection service implemented on a host device that includes a virtualization system using a virtual network device to host multiple virtual network functions of a network service chain implemented as a hypervisor-based system. U.S. Pub. No. 
2022/0137995 A1 to Kashtan et al. and directed to a method including identifying a real-time clock device of a host computing device and the host computing device comprises a hypervisor and a virtual machine. WO 2019188233 A1 to Iida et al. and directed to inter core communication technology for performing parallel processing with plural cores in a vehicle. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES E ANYA whose telephone number is (571)272-3757. The examiner can normally be reached Mon-Fri, 9-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KEVIN YOUNG, can be reached at 571-270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CHARLES E ANYA/Primary Examiner, Art Unit 2194
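One more mechanism worth picturing is the reception buffer update flag of claims 5 and 11, which the Office Action maps to Elzur's page-table flag handshake: the receiving virtual device reads its buffer only while the flag is ON and switches it OFF after reading. A minimal sketch of that ON/OFF handshake follows, with illustrative names and a plain int flag standing in for Elzur's page-table mechanism:

```c
/*
 * Sketch of the "reception buffer update flag" handshake in claims 5/11.
 * A plain flag approximates the role of Elzur's page-table flags; all
 * names and sizes are illustrative assumptions.
 */
#include <stdio.h>
#include <string.h>

#define RX_SIZE 64

struct rx_buffer {
    char data[RX_SIZE];
    int  update_flag;   /* ON (1) when new data has been written */
};

/* Sender side: write data for the receiver, then raise the flag. */
static void rx_write(struct rx_buffer *rx, const char *msg)
{
    snprintf(rx->data, RX_SIZE, "%s", msg);
    rx->update_flag = 1;
}

/* Receiver side: read only if the flag is ON, then switch it OFF. */
static int rx_read(struct rx_buffer *rx, char *out, size_t out_len)
{
    if (!rx->update_flag)
        return 0;                       /* nothing new to read     */
    snprintf(out, out_len, "%s", rx->data);
    rx->update_flag = 0;                /* mark buffer as consumed */
    return 1;
}

int main(void)
{
    struct rx_buffer rx = { 0 };
    char msg[RX_SIZE];

    rx_write(&rx, "diagnostic frame");
    if (rx_read(&rx, msg, sizeof msg))
        printf("received: %s (flag now %d)\n", msg, rx.update_flag);
    if (!rx_read(&rx, msg, sizeof msg))
        printf("flag OFF: no re-read of stale data\n");
    return 0;
}
```

The second rx_read call shows the point of the flag: once it is switched OFF, stale data is not re-read.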

Prosecution Timeline

Aug 17, 2023: Application Filed
Nov 27, 2025: Non-Final Rejection — §103
Feb 16, 2026: Interview Requested
Mar 11, 2026: Examiner Interview Summary
Mar 11, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591471
KNOWLEDGE GRAPH REPRESENTATION OF CHANGES BETWEEN DIFFERENT VERSIONS OF APPLICATION PROGRAMMING INTERFACES
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12591455
PARAMETER-BASED ADAPTIVE SCHEDULING OF JOBS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585510
METHOD AND SYSTEM FOR AUTOMATED EVENT MANAGEMENT
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579014
METHOD AND A SYSTEM FOR PROCESSING USER EVENTS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572393
CONTAINER CROSS-CLUSTER CAPACITY SCALING
Granted Mar 10, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants. Study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+33.5%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 891 resolved cases by this examiner. Grant probability is derived from the career allow rate.
