DETAILED ACTION
Claims 1-20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/13/2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-9, 12-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Leach et al. (US 6,108,715) in view of Kim et al. (US 2019/0129747 A1).
Regarding claim 1, Leach teaches the invention substantially as claimed, including a system (Fig. 3, computer system), comprising:
at least one processor (Fig. 3 Central Processing Unit 350); and
a memory, storing program instructions that when executed by the at least one processor cause the at least one processor to implement a thread manager (Col. 5 line 65 through Col. 6, line 3: The physical address space contains an operating system kernel ("kernel") 322 and an application program 324. As is well known to those skilled in the art, computer programs can be loaded (e.g., from secondary memory) into the physical address space as the computer programs are needed.; Col. 8, lines 24-41: kernel manages threads), the thread manager configured to:
responsive to one or more requests from a process of an application for termination of respective threads of the process, place respective thread-specific data structures for the one or more threads of the process into a thread cache external to the application, wherein, subsequent to the placing, the thread cache comprises thread-specific data structures of standby threads, including the one or more threads local to the process, in respective unscheduled states (Fig. 6, Client address space 630; Server Address Space; Thread Pool 615 shown external to the client address space; Col. 2, line 43 through Col. 3, line 12: the process that remotely invokes the implemented function members is the client process.; Col. 8, lines 24-41: The above-described table entries are created when a server process registers with the server. This registration allows the server process to be accessible for remote execution. In a preferred embodiment, the server process issues a system call to request registration as a server. The kernel responds to this request by generating a thread pool 615 for the server process. The thread pool contains a collection of thread control blocks. Each thread control block contains thread state information (e.g., an instruction pointer, a stack pointer, register values) for a respective thread of execution of the server process. In its entirety, the thread pool identifies the total number of threads of execution that may be concurrently executing in the server process. The thread pool also serves as a thread cache. That is, when a thread of execution is completed, the kernel caches the thread control block in the thread pool. In this manner, the cached thread control block can be retrieved for a subsequent remote procedure call.);
receive, subsequent to placing the respective one or more threads of the process into the process-local thread cache, a request to create a thread of the process for which the process-local thread cache is maintained, and responsive to receiving the request (Col. 8, lines 37-45: That is, when a thread of execution is completed, the kernel caches the thread control block in the thread pool. In this manner, the cached thread control block can be retrieved for a subsequent remote procedure call. By retrieving a thread control block, the kernel can quickly access the stack that is referenced by the thread control block: as will be explained in more detail below, the thread control block references a server stack that is mapped into the kernel's address space.; Col. 15, lines 3-6: The client process begins by calling the proxy Edit method (step 1102). The proxy method loads a register with a virtual table index and traps to the Kernel-Call Routine (steps 1104 and 1106).; Col. 15, lines 14-20: Using the server identifier contained within the resource table entry, the routine creates, in the previously allocated thread pool, a server thread for the real object method (steps 1114 and 1116). Alternatively, for cases where a previously-created thread is available, the routine will retrieve the available thread from the thread pool.):
create the thread of the process using a standby thread of the one or more standby threads of the process-local thread cache (Col. 9, line 60 through Col. 10, line 7: The kernel then creates a server thread for, or retrieves a server thread from, the previously allocated thread pool 615. Each server thread has its own server stack 650. When creating a server thread, the kernel initializes a server thread structure 667 for the real method 622. The server thread structure contains register values (e.g., stack and instruction pointer values) for the created thread. After creating the server thread, the kernel maps the server thread stack 650 into the kernel's address space 660. By mapping the server stack 668 to the kernel's address space while the client process is mapped into the application program's protected address space, the kernel can copy data directly between the client stack 641 and the server stack; thus, avoiding the processing overhead of conventional marshalling/unmarshalling techniques.); and
place the created thread in a scheduled state (Col. 15, lines 19-21: Alternatively, for cases where a previously-created thread is available, the routine will retrieve the available thread from the thread pool. Subsequently, the routine maps the server thread stack into the kernel's address space (step 1118).).
While Leach teaches caching a thread upon completion of its execution, Leach does not explicitly teach one or more requests from a process of an application for termination of respective threads.
However, Kim teaches one or more requests from a process of an application for termination of respective threads ([0004]; [0025] For example, it was described herein that the cloud-based management system 208 can create one or more processing threads and add them to the elastic scalable thread pool 212, and/or destroy one or more processing threads(i.e., request for thread termination) and remove them from the elastic scalable thread pool 212, thereby assuring that a respective processing thread in the thread pool 212 is free and available to process each respective one of the events 0, 1, . . . , M in the task queue 210. In certain alternative embodiments, to avoid a possible undesirable amount of processing overhead associated with the creation and/or destruction of processing threads, the cloud-based management system 208 can be configured to monitor the elastic scalable thread pool 212 to determine an availability of the respective processing threads 0, 1, . . . , M, and reuse one or more of the processing threads 0, 1, . . . , M for multiple event processing operations based on their availability, rather than creating processing threads as the event tasks are received. In certain further alternative embodiments, rather than destroying processing threads and removing them from the elastic scalable thread pool 212, the cloud-based management system 208 can be configured to at least temporarily terminate such processing threads or place them in a sleep mode, without removing the processing threads from the thread pool 212.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kim's teaching of responding to a thread-termination request by storing the thread in a thread pool rather than destroying it with Leach's teaching of caching a thread control block in a thread cache/pool for later reuse upon completion of thread execution. The modification would have been motivated by the desire to optimize response times by reusing previously created threads, rather than creating new ones, to serve future requests.
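The thread-reuse pattern the combination relies on (Leach's cache of thread control blocks plus Kim's retain-on-termination behavior) can be sketched as follows. This is an illustrative model only, not code from either reference; all class and method names are hypothetical:

```python
import queue


class ThreadControlBlock:
    """Illustrative stand-in for Leach's thread control block:
    per-thread state such as instruction/stack pointer values."""
    def __init__(self):
        self.scheduled = False


class ThreadCache:
    """Pool of standby (unscheduled) thread control blocks."""
    def __init__(self):
        self._standby = queue.SimpleQueue()

    def terminate(self, tcb):
        # Kim [0025]: rather than destroying the thread, place it
        # in the pool in an unscheduled (standby) state.
        tcb.scheduled = False
        self._standby.put(tcb)

    def create(self):
        # Leach col. 8: retrieve a cached thread control block if
        # one is available; otherwise create a new one.
        try:
            tcb = self._standby.get_nowait()
        except queue.Empty:
            tcb = ThreadControlBlock()
        tcb.scheduled = True  # place the thread in a scheduled state
        return tcb


cache = ThreadCache()
t1 = cache.create()      # pool empty: a new block is created, scheduled
cache.terminate(t1)      # cached as a standby thread, unscheduled
t2 = cache.create()      # the cached block is reused
assert t2 is t1 and t2.scheduled
```

The sketch captures the motivation stated above: a creation request that finds a standby block avoids the cost of building a new thread.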
Regarding claim 2, Leach teaches wherein the one or more standby threads are maintained in the thread cache for the process (Col. 8, lines 24-41: The thread pool contains a collection of thread control blocks. Each thread control block contains thread state information (e.g., an instruction pointer, a stack pointer, register values) for a respective thread of execution of the server process. In its entirety, the thread pool identifies the total number of threads of execution that may be concurrently executing in the server process. The thread pool also serves as a thread cache. That is, when a thread of execution is completed, the kernel caches the thread control block in the thread pool. In this manner, the cached thread control block can be retrieved for a subsequent remote procedure call.).
Regarding claim 3, Leach teaches wherein to create the thread of the process, the thread manager is configured to create the thread based at least in part on the thread-specific data structures of the standby thread and place the thread in a scheduled state (Col. 8, lines 24-41: The thread pool contains a collection of thread control blocks. Each thread control block contains thread state information (e.g., an instruction pointer, a stack pointer, register values) for a respective thread of execution of the server process. In its entirety, the thread pool identifies the total number of threads of execution that may be concurrently executing in the server process. The thread pool also serves as a thread cache. That is, when a thread of execution is completed, the kernel caches the thread control block in the thread pool. In this manner, the cached thread control block can be retrieved for a subsequent remote procedure call.; Col. 9, line 58 through Col. 10 line 7: After determining the resource table entry 665 for the real object 620, the kernel retrieves the process identifier from the resource table entry. The kernel then creates a server thread for, or retrieves a server thread from, the previously allocated thread pool 615. Each server thread has its own server stack 650. When creating a server thread, the kernel initializes a server thread structure 667 for the real method 622. The server thread structure contains register values (e.g., stack and instruction pointer values) for the created thread. After creating the server thread, the kernel maps the server thread stack 650 into the kernel's address space 660. By mapping the server stack 668 to the kernel's address space while the client process is mapped into the application program's protected address space, the kernel can copy data directly between the client stack 641 and the server stack; thus, avoiding the processing overhead of conventional marshalling/unmarshalling techniques.).
Regarding claim 5, as with claim 1, the combination of Leach and Kim further teaches wherein the thread manager is further configured to:
receive a request to terminate another thread of the process (Leach’s Col. 2, line 43 through Col. 3, line 12; Col. 8, lines 24-41; Kim’s [0004]; [0025]); and
retain the other thread of the process as another standby thread of the one or more standby threads (Leach’s Col. 8, lines 24-41).
Regarding claim 6, Leach teaches wherein the other thread comprises a kernel-mode data structure, and wherein retaining the other thread comprises retaining the kernel-mode data structure (Col. 8, lines 24-41: The thread pool contains a collection of thread control blocks. Each thread control block contains thread state information (e.g., an instruction pointer, a stack pointer, register values) for a respective thread of execution of the server process… That is, when a thread of execution is completed, the kernel caches the thread control block in the thread pool. In this manner, the cached thread control block can be retrieved for a subsequent remote procedure call.).
Regarding claim 7, it is a method type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale above.
Regarding claim 8, it is a method type claim having similar limitations as claim 2 above. Therefore, it is rejected under the same rationale above.
Regarding claim 9, it is a method type claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale above.
Regarding claim 12, it is a method type claim having similar limitations as claim 5 above. Therefore, it is rejected under the same rationale above.
Regarding claim 13, it is a method type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale above.
Regarding claim 14, it is a media/product type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale above.
Regarding claim 15, it is a media/product type claim having similar limitations as claim 2 above. Therefore, it is rejected under the same rationale above.
Regarding claim 16, it is a media/product type claim having similar limitations as claim 3 above. Therefore, it is rejected under the same rationale above.
Regarding claim 19, it is a media/product type claim having similar limitations as claim 5 above. Therefore, it is rejected under the same rationale above.
Regarding claim 20, it is a media/product type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale above.
Claims 4, 10 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Leach and Kim, as applied to claims 1, 7 and 14 above, and further in view of Lindholm (US 6,108,754).
Regarding claim 4, neither Leach nor Kim expressly teaches wherein to create the thread of the process the thread manager is further configured to reset data within thread-local storage of the standby thread prior to placing the standby thread in the scheduled state.
However, Lindholm teaches wherein to create the thread of the process the thread manager is further configured to reset data within thread-local storage of the standby thread prior to placing the standby thread in the scheduled state (Col. 2, line 66 through Col. 3, line 19: According to another aspect of the invention, the association of objects with synchronization constructs is terminated lazily through a process referred to as garbage collection. The associations are terminated to make the synchronization constructs available for associating with objects, when, for example, synchronization constructs are needed for synchronizing a thread with an object. Specifically, upon the occurrence of a termination enabling event, the garbage collection process terminates the association of objects with synchronization constructs meeting termination criteria. After performing garbage collection, reference data that may refer to any synchronization construct is set so that the reference data no longer refers to any synchronization construct (unless the reference data refers to a synchronization construct associated with an object identified by the in-progress reference of any thread-local cache). Resetting reference data in this manner ensures that, for a particular object, a thread-local cache does not contain reference data that refers to a synchronization construct whose association with the object was terminated by garbage collection.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Lindholm with the teachings of Leach and Kim to prepare threads for the standby state by clearing/resetting thread-local storage. The modification would have been motivated by the desire to ensure that a thread retains no references to previous executions and remains available for subsequent use.
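The reset-before-reuse behavior attributed to Lindholm can be illustrated with a minimal sketch. This is a hypothetical model for exposition, not code from the reference; the names are invented:

```python
class StandbyThread:
    """Illustrative standby thread carrying thread-local storage,
    e.g. cached references to synchronization constructs."""
    def __init__(self):
        self.thread_local = {}
        self.scheduled = False


def reuse(standby):
    """Reuse a standby thread: clear stale thread-local reference
    data (per Lindholm, col. 2-3) before scheduling, so the reused
    thread carries no references from its previous execution."""
    standby.thread_local.clear()
    standby.scheduled = True  # then place it in the scheduled state
    return standby


t = StandbyThread()
t.thread_local["sync_ref"] = "stale reference"
t = reuse(t)
assert t.thread_local == {} and t.scheduled
```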
Regarding claim 10, it is a method type claim having similar limitations as claim 4 above. Therefore, it is rejected under the same rationale above.
Regarding claim 17, it is a media/product type claim having similar limitations as claim 4 above. Therefore, it is rejected under the same rationale above.
Claims 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Leach et al. (US 6,108,715) in view of Kim et al. (US 2019/0129747 A1), and further in view of Nageswaran (US 5,991,792).
Regarding claim 11, neither Leach nor Kim expressly teaches the claim limitations, but Nageswaran teaches further comprising: creating, responsive to determining that a number of the one or more standby threads is below a threshold amount, at least one thread for the process in an unscheduled state, and adding the created standby thread to the one or more threads (Col. 3, lines 15-29: The server thread manager 132 maintains the thread pool 136 of reusable threads 138 that are ready to run. The initial number of reusable threads 138 in the pool 136 is a configurable parameter. As the number of requests increases, the server thread manager 132 dynamically increases the number of threads 138 in order to be able to keep up with the requests. At any given time the thread manager 132 of the server system 100 maintains a set of values, such as the number 144 of threads in the thread pool 136, the number of idle threads in the idle thread queue 140 and hence the number of busy threads. This count is the same as the number of methods being processed by the server currently. The threads 138 in the thread pool 136 of the server are only allocated for method requests.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Nageswaran's teaching of maintaining a particular pool size with the thread pool/cache taught by Leach and Kim. The modification would have been motivated by the desire to combine known methods of thread pool management, maintaining thread availability to yield predictable results such as optimized response and execution times.
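The below-threshold replenishment behavior Nageswaran describes can be sketched as follows. This is an illustrative example only (the function and field names are hypothetical, not drawn from the reference):

```python
def replenish(standby_threads, threshold, make_thread):
    """Nageswaran-style pool maintenance sketch: when the number of
    standby threads falls below a threshold, create new threads in
    an unscheduled state and add them to the pool."""
    while len(standby_threads) < threshold:
        t = make_thread()
        t["scheduled"] = False  # created in an unscheduled state
        standby_threads.append(t)
    return standby_threads


pool = [{"scheduled": False}]            # one standby thread remains
replenish(pool, threshold=3, make_thread=dict)
assert len(pool) == 3
assert all(not t["scheduled"] for t in pool)
```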
Regarding claim 18, it is a media/product type claim having similar limitations as claim 11 above. Therefore, it is rejected under the same rationale above.
Response to Arguments
Applicant’s arguments with respect to the claim rejections under 35 U.S.C. 112 have been considered and are persuasive in view of the amendments; those rejections are therefore withdrawn.
The examiner has considered the amendments in light of their corresponding support in at least [0018]: “The kernel-mode thread data structures 140 may further include thread storage, including thread stacks, as well as memory used to save and restore processor and thread state in various embodiments.” Further consideration of the references showed that the previously cited Leach reference reasonably teaches the limitations as claimed. New grounds of rejection are set forth above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Markuze et al. (US 2019/0268421 A1) teaches that each kernel-thread in a pool is initially waiting in state TASK_INTERRUPTIBLE (ready to execute). When the thread is allocated for a specific handler, two things happen: (1) a function to execute is set and (2) the task is scheduled to run (TASK_RUNNING). When the function terminates, the thread returns to state TASK_INTERRUPTIBLE and back to the list of pending threads, awaiting to be allocated once more. The pool of pre-allocated kernel threads thus removes the overhead of new kernel-thread creation.
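The kernel-thread lifecycle Markuze describes can be modeled with a short sketch. This is an illustrative state machine only, not code from the reference; the class and method names are invented (the state names mirror those quoted above):

```python
from enum import Enum


class State(Enum):
    TASK_INTERRUPTIBLE = "interruptible"  # waiting in the pool, ready to execute
    TASK_RUNNING = "running"


class PooledKernelThread:
    """Illustrative model of Markuze's pooled kernel thread."""
    def __init__(self):
        self.state = State.TASK_INTERRUPTIBLE
        self.func = None

    def allocate(self, func):
        # On allocation: (1) a function to execute is set, and
        # (2) the task is scheduled to run (TASK_RUNNING).
        self.func = func
        self.state = State.TASK_RUNNING

    def run(self):
        result = self.func()
        # When the function terminates, the thread returns to
        # TASK_INTERRUPTIBLE, awaiting the next allocation.
        self.func = None
        self.state = State.TASK_INTERRUPTIBLE
        return result


t = PooledKernelThread()
t.allocate(lambda: 42)
assert t.state is State.TASK_RUNNING
assert t.run() == 42
assert t.state is State.TASK_INTERRUPTIBLE
```

The cycle back to the waiting state is what removes the overhead of creating a new kernel thread per handler.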
Craig et al. (US 20020114338 A1) teaches a thread allocation uses a private pool of threads reserved for the router, so reinitialization for each thread is minimal (i.e. the descriptor of a recently deceased packet processing thread requires very few updates to safely make it runnable). After the packet is copied to the buffer, the thread is added to the global run queue. The processors, then, poll the run queue for new threads after a thread exits, a contended synchronization operation, a timer interrupt event, or constantly by the lowest priority idle thread. The packet processing thread is initialized to start executing at the address for the packet processing routine.
Draves et al. (US 5,950,221) teaches a waiting pool of kernel threads that respond to system calls by user threads. However, there is no permanent association between a particular kernel thread and any given user thread. Rather, different kernel threads respond to system calls as such calls are made.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CHU JOY-DAVILA whose telephone number is (571)270-0692. The examiner can normally be reached Monday-Friday, 6:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aimee J Li can be reached at (571)272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JORGE A CHU JOY-DAVILA/Primary Examiner, Art Unit 2195